Research

Our Approach to Research

Our research at How.TheyCanTalk.Org aims to use a rigorous scientific approach to determine whether non-humans are able to express themselves in language-like ways, and if so, how and to what extent.

Throughout history there have been numerous occasions when people have reported non-human animals displaying surprising language-like abilities. One of the more famous examples is the horse Clever Hans, a case that shows us that we need to be careful when we consider extraordinary claims. Hans appeared to be able to perform arithmetic calculations on request, answering questions such as "what's 2+2?" or "how many people are in this room?" by tapping his hoof the appropriate number of times. After a scientific investigation, however, it was found that Hans' ability had little to do with his capacity for understanding human speech, and everything to do with his ability to read human behavior. Hans would begin his response by tapping his hoof, notice that those around him grew more tense as the number of taps approached the correct value and then relaxed suddenly once it was reached, and stop tapping at that point.

Note that Hans' apparent ability wasn't a result of fraud: there are no indications that his trainer knew how or why Hans was able to do what he did. The case of Clever Hans shows us that non-human animals can, without deliberate training, learn to respond in ways that appear impressive, producing answers only because they are responding to subtle cues that we may unintentionally provide. And so the Clever Hans effect, as it's now called, provides an exceptionally valuable lesson: non-human animals' ability to recognize small changes in the physical state of a human "conversation partner" can be a major confound in studies of non-human animal cognition.

On the other side of this interaction are the possible confounds that we humans can bring. Non-human animals respond to the behavioral signals of our expectations, and then to this we add our own considerable capacity for interpretation. We often can't help but apply a filter to everything we perceive, seeing faces in clouds and using our decades of experience with words to transform a dog pressing the 'love you' button into an expression of human-like affection. When we combine eager-to-please dogs with our own meaning-making minds, it's easy to be fooled into concluding that something language-like is happening even when it might not be.

With this in mind, rigorous scientific practices have begun to be applied to the study of canine language use. In a ground-breaking study published in 2004, Kaminski and colleagues demonstrated that a dog named Rico could correctly identify over 200 different toys by spoken name, and could even perform 'fast mapping', in which a word is learned from a single training example. To avoid the Clever Hans effect, the researchers ensured that they were not visible to Rico while he was selecting the toy the experimenter had requested. Retired professor Dr. John Pilley used a similar technique in his dog cognition research, asking his Border Collie Chaser to identify objects while he and Chaser were on opposite sides of an opaque screen, making each of them blind to any non-language hints or cues that could give Chaser a hidden advantage.

With all this in mind, then, the aim of our research here is to figure out: are we seeing clever dogs, or merely Clever Hans? Can we explain the surprising button-pressing behavior we're seeing with a simple first-order associative learning model, or will we have to reconsider the idea that language is an ability that is 'uniquely human'? And do we see any change in the type and complexity of the communications that non-human animals (and dogs in particular) generate once they are able to use concepts that have been associated with buttons?
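
To make the first of these questions concrete: a first-order associative account holds that a learner presses a button not because of what the word means, but because pressing it has tended to be followed by rewarding outcomes. Below is a minimal sketch of such a model, a Rescorla-Wagner-style value update; it is our illustration only, and the button names and reward probabilities are hypothetical, not data from our studies.

```python
# A minimal sketch of first-order associative learning (our illustration,
# not the project's actual model): each button's value moves toward the
# reward that follows pressing it, with no notion of word meaning at all.
import random

def simulate(buttons, reward_prob, alpha=0.1, trials=1000, seed=0):
    """Learn a value for each button from reward feedback alone."""
    rng = random.Random(seed)
    value = {b: 0.0 for b in buttons}
    for _ in range(trials):
        b = rng.choice(buttons)                   # press a button at random
        reward = 1.0 if rng.random() < reward_prob[b] else 0.0
        value[b] += alpha * (reward - value[b])   # prediction-error update
    return value

# Hypothetical setup: pressing 'outside' is reinforced more often than 'play'.
print(simulate(["outside", "play"], {"outside": 0.8, "play": 0.3}))
```

If observed button use can be fully reproduced by this kind of value learning, the behavior is impressive but not language-like; systematic deviations from it are what would call for a closer look.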

To help us accomplish this, we have built How.TheyCanTalk.Org, a site that combines two mutually reinforcing aims:

  1. helping people teach dogs and other non-human animals how to use buttons, and

  2. studying the phenomenon of non-human animal button-based communication.

Why take such an approach? Because the most striking examples of impressive non-human language abilities -- the ones displayed by individuals such as Rico, Chaser, Koko (a gorilla), Kanzi (a bonobo), and Alex (a grey parrot) -- have all been a product of smart, patient, and dedicated human instruction.

Moreover, as performance improves, so too does the strength of the evidence. Larger effect sizes permit smaller sample sizes. Or, as pithily put by V. S. Ramachandran and Sandra Blakeslee in Phantoms in the Brain (1998), “you only need one talking pig.”
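
As a back-of-the-envelope illustration of that claim (our sketch, not part of the book's argument), the standard two-sample power approximation shows how quickly the required sample size shrinks as the standardized effect size d grows:

```python
# Rough power-analysis sketch (our illustration): subjects needed per group
# for a two-sample t-test, using n ≈ 2 * ((z_{1-a/2} + z_{1-b}) / d)^2.
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)   # critical value for the significance test
    z_b = norm.ppf(power)           # quantile corresponding to desired power
    return 2 * ((z_a + z_b) / d) ** 2

for d in (0.2, 0.5, 0.8, 2.0):      # small, medium, large, "talking pig"
    print(f"d = {d}: ~{n_per_group(d):.0f} subjects per group")
```

A small effect (d = 0.2) calls for roughly 400 subjects per group, while a "talking pig"-sized effect (d = 2) calls for about four.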

In this vein, our approach is less focused on the cognition of the average member of a species than on the potential demonstrated by the existence of the exceptional: human progress has arisen not because we each have powerful minds, but because, after nearly 300,000 years, we figured out how to capture and disseminate rare strokes of genius.

As such, we are keenly interested in individual differences, idiosyncratic behaviors, and the types of extraordinary skills and communication that a particular human and dog pairing is able to bring about.

Our research will proceed in three phases.

Phase 1: Initial data collection

Pilot: March 2020

Deployed: September 2020

Here we begin collecting basic information about learners and their learning contexts, in conjunction with regular logging of instances of word button use by dogs. The data we collect here will let us understand how age, breed, sex, teaching technique, teaching speed, and vocabulary choice affect button learning.

In this stage, regular logging of button use is our primary method of studying learner progress. As such, if you are participating in this research, we will be relying entirely on you to submit updates describing when and in what context your learner pressed buttons.

While we welcome all updates, the ones of greatest scientific interest are:

  1. When a word button is first introduced

  2. When a word button is first used in a way that’s contextually appropriate, and

  3. When a word button is first used within a multi-button expression
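
To make these concrete, a single update might be captured with a structure like the following. This is a hypothetical sketch; the field names and values are illustrative, not our actual submission format.

```python
# Hypothetical log-entry structure (illustrative only -- not the site's
# actual submission schema) covering the three milestones listed above.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ButtonUseEvent:
    learner_id: str            # which dog (or other learner)
    button_word: str           # e.g. "outside"
    timestamp: datetime        # when the press happened
    event_type: str            # "introduced" | "contextual_use" | "multi_button"
    context: str = ""          # free-text description of the situation
    other_buttons: list = field(default_factory=list)  # for multi-button expressions

event = ButtonUseEvent(
    learner_id="bunny-001",
    button_word="outside",
    timestamp=datetime(2020, 9, 14, 8, 30),
    event_type="contextual_use",
    context="Pressed 'outside', then waited by the door.",
)
```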

If you are a participant, you will receive all the data you submit about yourself and your learner(s). We will also provide ways for you to use this data to see your learner's progress over time, as well as how your learner compares to others. This information can give you valuable insights into how to improve or accelerate your learner's progress.

Phase 2: Video collection and analysis

Pilot: July 2021

Deployed: In progress

We believe that the best data on sound button use will come from cameras that record continuously, capturing every instance of word button use. This makes it possible to see how button use changes over time, as well as the impact of button use on the learner's interactions with the people involved. We recommend that all participants use at least one video camera to capture every time their learner uses the sound board (capable cameras are available for as little as $20 USD). We will be requesting and analyzing footage from Bunny and others in order to more reliably and precisely measure the behavior and communication they produce.

Phase 3: Interactive studies

Pilot: November 2021

Deployed: May 2022

Based heavily on the insights gained in phases 1 and 2, we will pilot direct, controlled tests of learners' sound button use and understanding, aiming to determine how language-like that use really is. We anticipate that these studies will involve a smaller number of participants.

Who is involved