Artificial Neural Network Learns to Use Human Language

A computer simulation of a cognitive model entirely made up of artificial neurons learns to communicate through dialogue starting from a state of tabula rasa.

A group of researchers from the University of Sassari and the University of Plymouth has developed a cognitive model able to learn to communicate using human language, starting from a state of “tabula rasa” and only through communication with a human interlocutor. The model is called ANNABELL (Artificial Neural Network with Adaptive Behavior Exploited for Language Learning) and is described in an article published in the international scientific journal PLOS ONE. This research sheds light on the neural processes that underlie the development of language.

How does our brain develop the ability to perform complex cognitive functions, such as those needed for language and reasoning? This is a question we all ask ourselves at some point, and one that researchers cannot yet answer completely. We know that the human brain contains about one hundred billion neurons that communicate by means of electrical signals, and we have learned a great deal about how those signals are produced and transmitted among neurons. There are also experimental techniques, such as functional magnetic resonance imaging, that allow us to see which parts of the brain are most active during different cognitive activities. But detailed knowledge of how a single neuron works, and of what the various parts of the brain do, is not enough to answer the initial question.

We might think that the brain works much like a computer: after all, computers also work through electrical signals. Indeed, many researchers have proposed models based on the brain-as-computer analogy since the late 1960s. Beyond the structural differences, however, there are profound differences between brains and computers, especially in their mechanisms for learning and information processing. Computers run programs written by human programmers, and these programs encode the rules the computer must follow when handling information to perform a given task. There is no evidence that such programs exist in our brain. In fact, many researchers today believe that the brain develops higher cognitive skills simply by interacting with the environment, starting from very little innate knowledge. The ANNABELL model appears to confirm this view.

ANNABELL has no pre-coded language knowledge; it learns only through communication with a human interlocutor, thanks to two fundamental mechanisms that are also present in the biological brain: synaptic plasticity and neural gating. Synaptic plasticity is the ability of the connection between two neurons to increase its efficiency when the two neurons are often active simultaneously, or nearly so. This mechanism is essential for learning and for long-term memory. Neural gating relies on the ability of certain neurons (called bistable neurons) to behave as switches that can be turned “on” or “off” by a control signal coming from other neurons. When turned on, the bistable neurons transmit a signal from one part of the brain to another; otherwise they block it. Through synaptic plasticity, the model learns to control the signals that open and close these neural gates, and thereby to control the flow of information among its different areas.
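
To make these two mechanisms concrete, here is a minimal toy sketch in Python (using NumPy). It is not the ANNABELL code: the function names, the learning rate, and the population sizes are all illustrative assumptions. It shows only the two ingredients described above: a Hebbian weight update that strengthens connections between co-active neurons, and a bistable gate that either relays or blocks a signal.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_update(w, pre, post, lr=0.1):
    """Hebbian plasticity: synaptic efficiency increases when the
    pre- and postsynaptic neurons are active at the same time."""
    return w + lr * np.outer(post, pre)

def gated_transmit(signal, gate_open):
    """A bistable gating neuron: when 'on' it relays the signal
    from one area to another; otherwise it blocks the signal."""
    return signal if gate_open else np.zeros_like(signal)

pre = rng.random(4)       # activity of the sending population (toy size)
post = rng.random(3)      # activity of the receiving population (toy size)
w = np.zeros((3, 4))      # synaptic weights, initially blank ("tabula rasa")

w = hebbian_update(w, pre, post)                        # co-activation strengthens weights
out_open = gated_transmit(w @ pre, gate_open=True)      # gate on: signal flows through
out_closed = gated_transmit(w @ pre, gate_open=False)   # gate off: signal is blocked
print(out_open, out_closed)
```

In this sketch the gate is set by hand; in the model itself, learning which gates to open in which situation is precisely what the synaptic plasticity accomplishes.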

Image caption (illustration of two talking heads): The ANNABELL model is a cognitive architecture entirely made up of interconnected artificial neurons, able to learn to communicate using human language, starting from a state of ‘tabula rasa’, only through communication with a human interlocutor. Credit: Bruno Golosio.

The cognitive model was validated using a database of about 1,500 input sentences based on the literature on early language development. The model responded by producing a total of about 500 output sentences containing nouns, verbs, adjectives, pronouns, and other word classes, demonstrating a wide range of human language processing capabilities.

About this language and AI research

Funding: This work was supported by the Regione Autonoma della Sardegna, the Engineering and Physical Sciences Research Council, and the Framework Programme for Research and Technological Development project POETICON++.

Source: Bruno Golosio – University of Sassari
Image Source: The image is credited to Bruno Golosio
Original Research: Full open access research for “A Cognitive Neural Architecture Able to Learn and Communicate through Natural Language” by Bruno Golosio, Angelo Cangelosi, Olesya Gamotina, and Giovanni Luca Masala in PLOS ONE. Published online November 11, 2015. doi:10.1371/journal.pone.0140866


Abstract

A Cognitive Neural Architecture Able to Learn and Communicate through Natural Language

Communicative interactions involve a kind of procedural knowledge that is used by the human brain for processing verbal and nonverbal inputs and for language production. Although considerable work has been done on modeling human language abilities, it has been difficult to bring them together into a comprehensive tabula rasa system compatible with current knowledge of how verbal information is processed in the brain. This work presents a cognitive system, entirely based on a large-scale neural architecture, which was developed to shed light on the procedural knowledge involved in language elaboration. The main component of this system is the central executive, which is a supervising system that coordinates the other components of the working memory. In our model, the central executive is a neural network that takes as input the neural activation states of the short-term memory and yields as output mental actions, which control the flow of information among the working memory components through neural gating mechanisms. The proposed system is capable of learning to communicate through natural language starting from tabula rasa, without any a priori knowledge of the structure of phrases, the meaning of words, or the role of the different classes of words, only by interacting with a human through a text-based interface, using an open-ended incremental learning process. It is able to learn nouns, verbs, adjectives, pronouns and other word classes, and to use them in expressive language. The model was validated on a corpus of 1587 input sentences, based on literature on early language assessment, at the level of a 4-year-old child, and produced 521 output sentences, expressing a broad range of language processing functionalities.
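
As a rough conceptual sketch of the central executive described in the abstract (again in Python, and again with illustrative assumptions: the memory size, the number of mental actions, and the winner-take-all action selection are ours, not the paper's), one can picture a network that maps the current short-term memory state to a single open gate between working-memory components:

```python
import numpy as np

rng = np.random.default_rng(1)

N_STM = 16       # size of the short-term memory state (assumed)
N_ACTIONS = 5    # number of mental actions / gates (assumed)

# Weights of the central-executive network; in the model these would
# be shaped by synaptic plasticity during learning, not set randomly.
W_exec = rng.standard_normal((N_ACTIONS, N_STM))

def central_executive(stm_state):
    """Map a short-term memory activation state to a mental action:
    open the one gate whose neurons respond most strongly."""
    scores = W_exec @ stm_state
    gates = np.zeros(N_ACTIONS)
    gates[np.argmax(scores)] = 1.0   # winner-take-all: one gate opens
    return gates

stm_state = rng.random(N_STM)        # a toy short-term memory state
gates = central_executive(stm_state)
# Each open gate lets activation flow between two working-memory
# components; closed gates block that flow.
print("open gate:", int(np.argmax(gates)))
```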

