Summary: Working memory for language processing can be provided by the down-regulation of neuronal excitability in response to external input.
Source: Max Planck Institute
Did the man bite the dog, or was it the other way around? When processing an utterance, words need to be assembled into the correct interpretation within working memory. One aspect of comprehension is to establish ‘who did what to whom’. This process of unification takes much longer than basic neurobiological events such as neuronal spikes or synaptic signaling. Hartmut Fitz, lead investigator at the Neurocomputational Models of Language group at the Max Planck Institute for Psycholinguistics, and his colleagues propose an account where adaptive features of single neurons supply memory that is sufficiently long-lived to bridge this temporal gap and support language processing.
Together with researchers Marvin Uhlmann, Dick van den Broek, Peter Hagoort, Karl Magnus Petersson (all Max Planck Institute for Psycholinguistics) and Renato Duarte (Jülich Research Centre, Germany), Fitz studied working memory in spiking networks through an innovative combination of experimental language research with methods from computational neuroscience.
In a sentence comprehension task, circuits of biological neurons and synapses were exposed to sequential language input, which they had to map onto semantic relations that characterize the meaning of an utterance. For example, ‘the cat chases a dog’ means something different from ‘the cat is chased by a dog’, even though both sentences contain similar words. The various cues to meaning need to be integrated within working memory to derive the correct message. The researchers varied the neurobiological features in computationally simulated networks and compared the performance of different versions of the model. This allowed them to pinpoint which of these features implemented the memory capacity required for sentence comprehension.
Towards a computational neurobiology of language
They found that working memory for language processing can be provided by the down-regulation of neuronal excitability in response to external input. “This suggests that working memory could reside within single neurons, which contrasts with other theories where memory is either due to short-term synaptic changes or arises from network connectivity and excitatory feedback”, says Fitz.
Their model shows that this neuronal memory is context-dependent and sensitive to serial order, which makes it ideally suited to language. Additionally, the model was able to establish binding relations between words and semantic roles with high accuracy.
“It is crucial to try and build language models that are directly grounded in basic neurobiological principles,” declares Fitz. “This work shows that we can meaningfully study language at the neurobiological level of explanation, using a causal modelling approach that may eventually allow us to develop a computational neurobiology of language.”
Neuronal spike-rate adaptation supports working memory in language processing
Language processing involves the ability to store and integrate pieces of information in working memory over short periods of time. According to the dominant view, information is maintained through sustained, elevated neural activity. Other work has argued that short-term synaptic facilitation can serve as a substrate of memory. Here we propose an account where memory is supported by intrinsic plasticity that downregulates neuronal firing rates. Single neuron responses are dependent on experience, and we show through simulations that these adaptive changes in excitability provide memory on timescales ranging from milliseconds to seconds. On this account, spiking activity writes information into coupled dynamic variables that control adaptation and move at slower timescales than the membrane potential. From these variables, information is continuously read back into the active membrane state for processing. This neuronal memory mechanism does not rely on persistent activity, excitatory feedback, or synaptic plasticity for storage. Instead, information is maintained in adaptive conductances that reduce firing rates and can be accessed directly without cued retrieval. Memory span is systematically related to both the time constant of adaptation and baseline levels of neuronal excitability. Interference effects within memory arise when adaptation is long-lasting. We demonstrate that this mechanism is sensitive to context and serial order, which makes it suitable for temporal integration in sequence processing within the language domain. We also show that it enables the binding of linguistic features over time within dynamic memory registers. This work provides a step toward a computational neurobiology of language.
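The core idea of the abstract, that spikes write information into a slowly decaying adaptation variable which down-regulates excitability and can later be read out, can be illustrated with a minimal simulation. The sketch below is not the authors' model (which uses networks of spiking neurons with conductance-based adaptation); it is a single leaky integrate-and-fire neuron with an adaptive firing threshold, and all parameter values are chosen purely for illustration. Prior input raises the adaptation variable, and because that variable decays on a much slower timescale than the membrane potential, it carries a trace of past context that changes the neuron's response to an identical later probe.

```python
def run(inputs, a0=0.0, dt=1.0, tau_m=20.0, tau_a=1000.0,
        theta0=1.0, da=0.3):
    """Leaky integrate-and-fire neuron with an adaptive threshold.

    The membrane potential v relaxes quickly (tau_m = 20 ms), while
    the adaptation variable a decays slowly (tau_a = 1000 ms).  Each
    spike resets v and increments a, raising the effective threshold
    theta0 + a.  Returns the spike count and final adaptation level.
    All parameters are illustrative, not taken from the paper.
    """
    v, a, spikes = 0.0, a0, 0
    for inp in inputs:
        v += dt / tau_m * (-v + inp)   # fast membrane dynamics
        a += dt / tau_a * (-a)         # slow decay of adaptation
        if v >= theta0 + a:            # adaptive spike threshold
            spikes += 1
            v = 0.0                    # reset membrane potential
            a += da                    # down-regulate excitability
    return spikes, a

# 300 ms of context input, 100 ms of silence, then a 100 ms probe
context, gap, probe = [2.0] * 300, [0.0] * 100, [2.0] * 100

_, a_ctx = run(context)           # context spiking builds up a
_, a_trace = run(gap, a0=a_ctx)   # a persists through the silent gap
probed_adapted, _ = run(probe, a0=a_trace)
probed_fresh, _ = run(probe)      # identical probe, no prior context

# The adaptation variable retains the context: the adapted neuron
# fires less in response to the same probe than a fresh neuron does.
print(f"fresh: {probed_fresh} spikes, adapted: {probed_adapted} spikes, "
      f"trace after gap: {a_trace:.2f}")
```

Although the membrane potential has fully decayed during the silent gap, the adaptation variable has not, so the two probe responses differ: in this toy setting, memory resides in the single neuron's adaptation state rather than in persistent activity or synaptic change, mirroring the contrast the abstract draws.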