New CSAIL genomics work suggests vocalizing birds could tell us more about speech disorders.
Think that sparrow whistling outside your bedroom window is nothing more than pleasant background noise?
A new paper from a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) suggests that we can apply what we know about songbirds to our understanding of human speech production, and therefore come closer to identifying and potentially even reducing the prevalence of disorders like stuttering and Huntington’s disease.
In a paper published in Science this month, CSAIL postdoc Andreas Pfenning and collaborators at Duke University compared genetic maps of brain tissue from three groups: humans, vocal-learning birds, and non-vocal-learning birds and primates.
Their results showed that more than 50 genes display similar activity patterns in humans and vocal-learning birds, patterns distinct from those in the brains of animals incapable of vocal learning. That is, if a gene was more active in humans, it was also more active in songbirds, but not in non-songbirds.
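As a rough, hypothetical illustration of the kind of comparison this implies (not the authors’ actual genomic pipeline), the sketch below flags genes whose activity shifts agree in direction and magnitude between humans and songbirds but are absent in a non-learner; all gene names and values are invented for the example.

    # Illustrative sketch only: flag genes whose brain-region activity
    # patterns agree between humans and vocal-learning birds but not
    # non-learners. Gene names and values are invented for illustration.

    expression = {
        #          human  songbird  non-learner (relative activity)
        "GENE_A": (+2.1,  +1.8,     +0.1),
        "GENE_B": (-1.5,  -1.3,     +0.9),
        "GENE_C": (+0.2,  -1.9,     +0.3),
    }

    def convergent(human, songbird, nonlearner, threshold=1.0):
        """True if the human and songbird shifts are strong and share a
        sign while the non-learner shows no comparable shift."""
        same_direction = human * songbird > 0
        both_strong = min(abs(human), abs(songbird)) >= threshold
        nonlearner_flat = abs(nonlearner) < threshold
        return same_direction and both_strong and nonlearner_flat

    hits = [gene for gene, values in expression.items() if convergent(*values)]
    print(hits)  # ['GENE_A', 'GENE_B']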
These findings dramatically advance existing research, which had previously identified only one gene, FOXP2, as involved in both human and avian language centers. Pfenning says the work shows that genetic experiments involving birds could help scientists learn more about which genes might be involved in different human speech conditions.
Pfenning, who received his PhD from Duke in 2012, says he was hopeful that such correlations would be found, especially given how closely the way birds learn specific song patterns mirrors the way humans learn to form words.
“Studying fine motor behavior is vital for a lot of neurological disorders in humans, but traditional research subjects like mice are difficult to quantify for those kinds of actions,” says Pfenning. “With birdsong, meanwhile, there are far more exact metrics, like the precision of the pitch, the timing/rhythm of the notes and even the higher-level ‘grammar’ of different songs.”
The researchers used several massive datasets for the study, including the avian genome data, the songbird genome completed in 2010, and the Allen Brain Atlas for humans and primates. The work is part of nearly 30 studies published this month by the Avian Genome Consortium, which seeks to sequence the genomes of all 48 major bird groups; only three had been sequenced before the consortium got to work in 2010.
Beyond the paper’s implications for specific speech disorders, Pfenning is optimistic that further research could help illuminate the broader evolutionary history of human language.
“Are there common features in the evolution patterns of different animals that can tell us more about the history of human language?” he asks. “Our study is an exciting first step, and we’re just scraping the surface of what’s possible.”
Contact: Adam Conner-Simons – CSAIL/MIT
Source: CSAIL/MIT press release
Image Source: The image is adapted from the CSAIL/MIT press release
Original Research: Abstract for “Convergent transcriptional specializations in the brains of humans and song-learning birds” by Andreas R. Pfenning, Erina Hara, Osceola Whitney, Miriam V. Rivas, Rui Wang, Petra L. Roulhac, Jason T. Howard, Morgan Wirthlin, Peter V. Lovell, Ganeshkumar Ganapathy, Jacquelyn Mouncastle, M. Arthur Moseley, J. Will Thompson, Erik J. Soderblom, Atsushi Iriki, Masaki Kato, M. Thomas P. Gilbert, Guojie Zhang, Trygve Bakken, Angie Bongaarts, Amy Bernard, Ed Lein, Claudio V. Mello, Alexander J. Hartemink, and Erich D. Jarvis in Science. Published online December 12, 2014. doi:10.1126/science.1256846