Summary: An algorithm used for the internet may help researchers learn about the human brain, researchers report.
Source: Salk Institute.
Although we spend much of our time online nowadays, streaming music and video, checking email and social media, or obsessively reading the news, few of us know about the mathematical algorithms that manage how our content is delivered. But deciding how to route information fairly and efficiently through a distributed system with no central authority was a priority for the Internet’s founders. Now, a Salk Institute discovery shows that an algorithm used for the Internet is also at work in the human brain, an insight that improves our understanding of engineered and neural networks and potentially even learning disabilities.
“The founders of the Internet spent a lot of time considering how to make information flow efficiently,” says Salk Assistant Professor Saket Navlakha, coauthor of the new study that appears online in Neural Computation on February 9, 2017. “Finding that an engineered system and an evolved biological one arrive at a similar solution to a problem is really interesting.”
In the engineered system, the solution involves monitoring how congested the Internet is and controlling information flow so that routes are neither clogged nor underutilized. To accomplish this, the Internet employs an algorithm called “additive increase, multiplicative decrease” (AIMD), in which your computer sends a packet of data and then listens for an acknowledgement from the receiver. If the packet is promptly acknowledged, the network is not overloaded and your data can be transmitted at a higher rate. With each successive successful packet, your computer knows it is safe to increase its speed by one unit; this is the additive increase part. But if an acknowledgement is delayed or lost, your computer knows there is congestion and slows down by a large amount, such as by half; this is the multiplicative decrease part. In this way, users gradually find their “sweet spot,” and congestion is avoided because users take their foot off the gas, so to speak, as soon as they notice a slowdown. As computers throughout the network use this strategy, the whole system continuously adjusts to changing conditions, maximizing overall efficiency.
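The AIMD dynamic described above can be sketched in a few lines of code. This is a minimal toy simulation, not an implementation of TCP itself: the sender, the fixed link capacity, and the step sizes are all illustrative assumptions, and real congestion control reacts to actual acknowledgement timing rather than a known capacity.

```python
# Toy sketch of AIMD (additive increase, multiplicative decrease).
# A single sender ramps its rate up linearly and cuts it in half on congestion.
# All parameters (capacity, step sizes) are illustrative, not from the study.
def aimd_simulation(steps, capacity=10.0, increase=1.0, decrease=0.5):
    """Return the sender's rate over time against a fixed link capacity."""
    rate = 1.0
    history = []
    for _ in range(steps):
        if rate <= capacity:     # acknowledgement arrives promptly: no congestion
            rate += increase     # additive increase: speed up by one unit
        else:                    # acknowledgement delayed or lost: congestion
            rate *= decrease     # multiplicative decrease: slow down by half
        history.append(rate)
    return history

rates = aimd_simulation(30)
# The rate climbs linearly, is cut in half once it overshoots the capacity,
# and then oscillates in a sawtooth pattern near the "sweet spot."
```

The sawtooth oscillation near capacity is exactly the "sweet spot" behavior the article describes: each sender probes upward gently and backs off sharply, so the shared link stays busy without collapsing.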
Navlakha, who develops algorithms to understand complex biological networks, wondered if the brain, with its billions of distributed neurons, was managing information similarly. So, he and coauthor Jonathan Suen, a postdoctoral scholar at Duke University, set out to mathematically model neural activity.
Because AIMD is one of a number of flow-control algorithms, the duo decided to model six others as well. In addition, they analyzed which model best matched physiological data on neural activity from 20 experimental studies. In their models, AIMD turned out to be the most efficient at keeping the flow of information moving smoothly, adjusting traffic rates whenever paths got too congested. More interestingly, AIMD also turned out to best explain what was happening to neurons experimentally.
It turns out the neuronal equivalent of additive increase is called long-term potentiation. It occurs when one neuron fires closely after another, which strengthens their synaptic connection and makes it slightly more likely the first will trigger the second in the future. The neuronal equivalent of multiplicative decrease occurs when the firing of two neurons is reversed (second before first), which weakens their connection, making the first much less likely to trigger the second in the future. This is called long-term depression. As synapses throughout the network weaken or strengthen according to this rule, the whole system adapts and learns.
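The analogy between plasticity and AIMD can be made concrete with a toy pair-based update rule: strengthen a synapse by a fixed amount when the presynaptic neuron fires first (long-term potentiation, the additive increase), and scale it down when the order is reversed (long-term depression, the multiplicative decrease). The constants and the weight cap below are illustrative assumptions, not values from the study.

```python
# Toy pair-based plasticity rule mirroring the AIMD analogy.
# ltp_step, ltd_factor, and w_max are illustrative, not from the study.
def update_weight(weight, pre_time, post_time,
                  ltp_step=0.05, ltd_factor=0.5, w_max=1.0):
    """Update a synaptic weight from the relative timing of two spikes.

    Pre before post  -> long-term potentiation: add a small fixed increment.
    Post before pre  -> long-term depression: multiply by a factor < 1.
    """
    if pre_time < post_time:           # pre fires first: LTP (additive increase)
        weight = min(weight + ltp_step, w_max)
    elif post_time < pre_time:         # post fires first: LTD (multiplicative decrease)
        weight = weight * ltd_factor
    return weight

w = 0.5
w = update_weight(w, pre_time=10.0, post_time=12.0)  # LTP: 0.5 -> 0.55
w = update_weight(w, pre_time=12.0, post_time=10.0)  # LTD: 0.55 -> 0.275
```

Note the same asymmetry as on the Internet: strengthening is gradual and additive, while weakening is abrupt and multiplicative, which is what lets the whole network stay stable as it adapts.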
“While the brain and the Internet clearly operate using very different mechanisms, both use simple local rules that give rise to global stability,” says Suen. “I was initially surprised that biological neural networks utilized the same algorithms as their engineered counterparts, but, as we learned, the requirements for efficiency, robustness, and simplicity are common to both living organisms and the networks we have built.”
Understanding how the system works under normal conditions could help neuroscientists better understand what happens when these processes are disrupted, for example, in learning disabilities. “Variations of the AIMD algorithm are used in basically every large-scale distributed communication network,” says Navlakha. “Discovering that the brain uses a similar algorithm may not be just a coincidence.”
About this neuroscience research article
Funding: The work was funded by the Department of Defense, Army Research Office.
Source: Salk Institute
Image Source: NeuroscienceNews.com image is credited to Salk Institute.
Original Research: Abstract for “Using Inspiration from Synaptic Plasticity Rules to Optimize Traffic Flow in Distributed Engineered Networks” by Jonathan Y. Suen and Saket Navlakha in Neural Computation. Published online February 9, 2017. doi:10.1162/NECO_a_00945
Cite This NeuroscienceNews.com Article
Salk Institute. “The Internet and Your Brain Are More Alike Than You Think.” NeuroscienceNews, 9 February 2017. https://neurosciencenews.com/brain-internet-6091/
Using Inspiration from Synaptic Plasticity Rules to Optimize Traffic Flow in Distributed Engineered Networks
Controlling the flow and routing of data is a fundamental problem in many distributed networks, including transportation systems, integrated circuits, and the Internet. In the brain, synaptic plasticity rules have been discovered that regulate network activity in response to environmental inputs, which enable circuits to be stable yet flexible. Here, we develop a new neuro-inspired model for network flow control that depends only on modifying edge weights in an activity-dependent manner. We show how two fundamental plasticity rules, long-term potentiation and long-term depression, can be cast as a distributed gradient descent algorithm for regulating traffic flow in engineered networks. We then characterize, both by simulation and analytically, how different forms of edge-weight-update rules affect network routing efficiency and robustness. We find a close correspondence between certain classes of synaptic weight-update rules derived experimentally in the brain and rules commonly used in engineering, suggesting common principles to both.