Meaning Without Words: Gestures and Visual Animations Reveal Cognitive Origins of Linguistic Meaning

Summary: Gestures and visual animations help reveal the cognitive origins of linguistic meaning.

Source: NYU

Gestures and visual animations can help reveal the cognitive origins of meaning, indicating that our minds can assign a linguistic structure to new informational content “on the fly”—even if it is not linguistic in nature.

These conclusions stem from two studies, one in linguistics and the other in experimental psychology, appearing in Natural Language & Linguistic Theory and Proceedings of the National Academy of Sciences (PNAS).

“These results suggest that far less is encoded in words than was originally thought,” explains Philippe Schlenker, a senior researcher at Institut Jean-Nicod within France’s National Center for Scientific Research (CNRS) and a Global Distinguished Professor at New York University, who wrote the first study and co-authored the second. “Rather, our mind has a ‘meaning engine’ that can apply to linguistic and non-linguistic material alike.

“Taken together, these findings provide new insights into the cognitive origins of linguistic meaning.”

Contemporary linguistics has established that language conveys information through a highly articulated typology of inferences. For instance, “I have a dog” asserts that I own a dog, but it also suggests (or “implicates”) that I have no more than one: the hearer assumes that if I had two dogs, I would have said so (as “I have two dogs” is more informative).

Unlike asserted content, implicated content isn’t targeted by negation. “I don’t have a dog” thus means that I don’t have any dog, not that I don’t have exactly one dog. There are further inferential types characterized by further properties: the sentence “I spoil my dog” still conveys that I have a dog, but now this is neither asserted nor implicated; rather, it is “presupposed”—i.e. taken for granted in the conversation. Unlike asserted and implicated information, presuppositions are preserved in negative statements, and thus “I don’t spoil my dog” still presupposes that I have a dog.
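The negation test described above can be summarized as a small lookup: each inference type pairs an example with whether the inference survives when the sentence is negated. The sketch below is purely illustrative (the names and the toy `classify` helper are my own, not from the studies), but it captures the diagnostic the article uses.

```python
# Illustrative sketch (not from the studies): the negation test that
# separates asserted, implicated, and presupposed content.
# "survives_negation" records whether the inference is preserved when
# the sentence is negated ("I don't have a dog" / "I don't spoil my dog").

INFERENCES = {
    "asserted":    {"example": "I have a dog -> I own a dog",
                    "survives_negation": False},
    "implicated":  {"example": "I have a dog -> I have at most one dog",
                    "survives_negation": False},
    "presupposed": {"example": "I spoil my dog -> I have a dog",
                    "survives_negation": True},
}

def classify(survives_negation: bool) -> str:
    """Toy diagnostic: presupposed content survives negation;
    asserted and implicated content does not."""
    if survives_negation:
        return "presupposed"
    return "asserted or implicated"

for name, info in INFERENCES.items():
    print(name, "->", classify(info["survives_negation"]))
```

This is, of course, only the first cut: the article notes that the full typology distinguishes eight inferential types, each with its own behavioral signature beyond the simple negation test.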

A fundamental question of contemporary linguistics is: Which of these inferences come from arbitrary properties of words stored in our mental dictionary and which result from general, productive processes?

In the Natural Language & Linguistic Theory work and the PNAS study, written by Lyn Tieu of Australia’s Western Sydney University, Schlenker, and CNRS’s Emmanuel Chemla, the authors argue that nearly all inferential types result from general, and possibly non-linguistic, processes.

Their conclusion is based on an understudied type of sentence containing gestures that replace normal words. For instance, in the sentence “You should UNSCREW-BULB,” the capitalized expression encodes a gesture of unscrewing a bulb from the ceiling. While the gesture may be seen for the first time (and thus couldn’t be stored in our mental dictionary), it is understood due to its visual content.

This makes it possible to test how its informational content (i.e. unscrewing a bulb that’s on the ceiling) is divided on the fly among the typology of inferences. In this case, the unscrewing action is asserted, but the presence of a bulb on the ceiling is presupposed, as shown by the fact that the negation (“You shouldn’t UNSCREW-BULB”) preserves this information. By systematically investigating such gestures, the Natural Language & Linguistic Theory study reaches a ground-breaking conclusion: nearly all inferential types (eight in total) can be generated on the fly, suggesting that all are due to productive processes.

[Image: a woman talking with hand gestures. Adapted from the NYU news release.]

The PNAS study investigates four of these inferential types with experimental methods, confirming the results of the linguistic study. But it also goes one step further by replacing the gestures with visual animations embedded in written texts, thus answering two new questions: First, can the results be reproduced for visual stimuli that subjects cannot possibly have seen in a linguistic context, given that people routinely speak with gestures but not with visual animations? Second, can entirely non-linguistic material be structured by the same processes?

Both answers are positive.

In a series of experiments, approximately 100 subjects watched videos of sentences in which some words were replaced either by gestures or by visual animations. They were asked how strongly they derived various inferences that are the hallmarks of different inferential types (for instance, inferences derived in the presence of negation). The subjects’ judgments displayed the characteristic signature of four classic inferential types (including presuppositions and implicated content) in gestures but also in visual animations: the informational content of these non-standard expressions was, as expected, divided on the fly by the experiments’ subjects among well-established slots of the inferential typology.

About this neuroscience research article

Media Contacts:
James Devitt – NYU
Image Source:
The image is adapted from the NYU news release.

Original Research: Open access
“Gestural semantics: Replicating the typology of linguistic inferences with pro- and post-speech gestures”. Philippe Schlenker. Natural Language & Linguistic Theory. doi:10.1007/s11049-018-9414-3

Closed access
“Linguistic inferences without words”. Lyn Tieu, Philippe Schlenker, and Emmanuel Chemla. PNAS. doi:10.1073/pnas.1821018116


Gestural semantics: Replicating the typology of linguistic inferences with pro- and post-speech gestures

We argue that a large part of the typology of linguistic inferences can be replicated with gestures, including some that one might not have seen before. While gesture research often focuses on co-speech gestures, which co-occur with spoken words, our study is based on pro-speech gestures (which fully replace spoken words) and post-speech gestures (which follow expressions they modify). We argue that pro-speech gestures can trigger several types of inferences besides entailments: presuppositions and anti-presuppositions (derived from Maximize Presupposition), scalar implicatures and ‘Blind Implicatures,’ homogeneity inferences that are characteristic of definite plurals, and some expressive inferences that are characteristic of pejorative terms. We further argue that post-speech gestures trigger inferences that are very close to the supplements contributed by appositive relative clauses. We show in each case that we are not dealing with a translation into spoken language because the fine-grained meanings obtained are tied to the iconic properties of the gestures. Our results argue for a generative mechanism that assigns new meanings a specific place in a rich inferential typology, which might have consequences for the structure of semantic theory and the nature of acquisition algorithms.


Linguistic inferences without words

Contemporary semantics has uncovered a sophisticated typology of linguistic inferences, characterized by their conversational status and their behavior in complex sentences. This typology is usually thought to be specific to language and in part lexically encoded in the meanings of words. We argue that it is neither. Using a method involving “composite” utterances that include normal words alongside novel nonlinguistic iconic representations (gestures and animations), we observe successful “one-shot learning” of linguistic meanings, with four of the main inference types (implicatures, presuppositions, supplements, homogeneity) replicated with gestures and animations. The results suggest a deeper cognitive source for the inferential typology than usually thought: Domain-general cognitive algorithms productively divide both linguistic and nonlinguistic information along familiar parts of the linguistic typology.
