AI Helps Decode the Language of DNA

Summary: Researchers have developed GROVER, an AI language model trained on human DNA, to decode the complex information in our genome. GROVER treats DNA as a language, learning its rules and context to extract biological meanings, such as gene promoters and protein binding sites.

This innovative approach could revolutionize genomics and personalized medicine by unlocking hidden layers of genetic information. The findings suggest that DNA functions are encoded in sequences, offering new insights into disease predispositions and treatments.

Key Facts:

  1. AI Language Model: GROVER uses language model techniques to interpret DNA, treating sequences as a linguistic structure to reveal genetic functions.
  2. Genetic Insights: The model identifies gene promoters, protein binding sites, and epigenetic information, enhancing understanding of DNA’s non-coding regions.
  3. Potential Applications: GROVER has the potential to advance genomics and personalized medicine, offering insights into human biology and disease.

Source: TUD

DNA contains foundational information needed to sustain life. Understanding how this information is stored and organized has been one of the greatest scientific challenges of the last century.

With GROVER, a new large language model trained on human DNA, researchers can now attempt to decode the complex information hidden in our genome.

“DNA is the code of life. Why not treat it like a language?” says Dr. Poetsch. Credit: Neuroscience News

Developed by a team at the Biotechnology Center (BIOTEC) of Dresden University of Technology, GROVER treats human DNA as a text, learning its rules and context to extract functional information from DNA sequences.

This new tool, published in Nature Machine Intelligence, has the potential to transform genomics and accelerate personalized medicine.

Since the discovery of the double helix, scientists have sought to understand the information encoded in DNA. Seventy years later, it is clear that the information hidden in the DNA is multilayered. Only 1-2% of the genome consists of genes, the sequences that code for proteins.

“DNA has many functions beyond coding for proteins. Some sequences regulate genes, others serve structural purposes, most sequences serve multiple functions at once. Currently, we don’t understand the meaning of most of the DNA.

“When it comes to understanding the non-coding regions of the DNA, it seems that we have only started to scratch the surface. This is where AI and large language models can help,” says Dr. Anna Poetsch, research group leader at the BIOTEC.

DNA as a Language

Large language models, like GPT, have transformed our understanding of language. Trained exclusively on text, these models developed the ability to use language in many contexts.

“DNA is the code of life. Why not treat it like a language?” says Dr. Poetsch. The Poetsch team trained a large language model on a reference human genome. The resulting tool, GROVER (“Genome Rules Obtained via Extracted Representations”), can be used to extract biological meaning from the DNA.

“GROVER learned the rules of DNA. In terms of language, we are talking about grammar, syntax, and semantics. For DNA this means learning the rules governing the sequences, the order of the nucleotides and sequences, and the meaning of the sequences. Like GPT models learning human languages, GROVER has basically learned how to ‘speak’ DNA,” explains Dr. Melissa Sanabria, the researcher behind the project.
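To make the analogy concrete, here is a minimal sketch of the next-token objective such models are trained on. It assumes a GPT-style framing and uses hypothetical DNA “words”; GROVER’s actual architecture, tokenizer, and training setup are not reproduced here.

```python
# Minimal sketch of next-token prediction on tokenized DNA.
# The tokens below are hypothetical stand-ins, not GROVER's vocabulary.
def next_token_examples(tokens, context_size=3):
    """Yield (context, target) pairs for next-token prediction."""
    for i in range(len(tokens) - context_size):
        yield tokens[i:i + context_size], tokens[i + context_size]

dna_tokens = ["ATG", "CG", "ATGC", "G", "ATG", "CAT"]  # hypothetical DNA "words"
for context, target in next_token_examples(dna_tokens):
    print(context, "->", target)
# ['ATG', 'CG', 'ATGC'] -> G, and so on: the model is trained to
# predict each next "word" of DNA from the preceding context.
```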

The team showed that GROVER can not only accurately predict following DNA sequences but can also be used to extract contextual information that carries biological meaning, e.g., identifying gene promoters or protein binding sites on DNA. GROVER also learns processes that are generally considered “epigenetic”, i.e., regulatory processes that happen on top of the DNA rather than being encoded in it.

“It is fascinating that by training GROVER with only the DNA sequence, without any annotations of functions, we are actually able to extract information on biological function. To us, it shows that the function, including some of the epigenetic information, is also encoded in the sequence,” says Dr. Sanabria.
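One standard way to test whether learned representations encode function is a linear probe: freeze the model, embed each genomic region, and check whether a simple classifier can recover an annotation such as “promoter” from the embeddings alone. The sketch below illustrates the idea with random stand-in embeddings and labels, not actual GROVER outputs.

```python
# Hedged sketch of a linear probe on region embeddings.
# Embeddings and labels here are random placeholders; in practice they
# would come from the language model and from genome annotation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
region_embeddings = rng.normal(size=(200, 128))   # 200 regions, 128 dims
is_promoter = rng.integers(0, 2, size=200)        # hypothetical labels

# If a simple linear classifier can separate the classes, the
# embeddings carry information about the annotation.
probe = LogisticRegression(max_iter=1000).fit(region_embeddings, is_promoter)
print("probe accuracy:", probe.score(region_embeddings, is_promoter))
```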

The DNA Dictionary

“DNA resembles language. It has four letters that build sequences and the sequences carry a meaning. However, unlike a language, DNA has no defined words,” says Dr. Poetsch. DNA is written with just four letters (A, T, G, and C), but there are no predefined sequences of set lengths that combine into genes or other meaningful units.

To train GROVER, the team had to first create a DNA dictionary. They used a trick from compression algorithms. “This step is crucial and sets our DNA language model apart from the previous attempts,” says Dr. Poetsch.

“We analyzed the whole genome and looked for combinations of letters that occur most often. We started with two letters and went over the DNA, again and again, to build it up to the most common multi-letter combinations.

“In this way, in about 600 cycles, we have fragmented the DNA into ‘words’ that let GROVER perform the best when it comes to predicting the next sequence,” explains Dr. Sanabria.
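The procedure Dr. Sanabria describes is byte-pair encoding, the compression-derived tokenization named in the paper’s abstract below. Here is a minimal sketch of the idea on a toy DNA string; the sequence and merge count are illustrative, and GROVER’s roughly 600 merge cycles over the whole genome are not reproduced.

```python
# Minimal byte-pair encoding on a DNA string: repeatedly merge the most
# frequent adjacent pair of tokens into a new multi-letter "word".
from collections import Counter

def byte_pair_encode(sequence, num_merges):
    """Return the tokenized sequence and the learned vocabulary."""
    tokens = list(sequence)            # start from single letters A, T, G, C
    vocab = set(tokens)
    for _ in range(num_merges):
        pair_counts = Counter(zip(tokens, tokens[1:]))
        if not pair_counts:
            break
        (a, b), _ = pair_counts.most_common(1)[0]
        vocab.add(a + b)
        # Rebuild the token list, replacing each occurrence of the pair.
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
                merged.append(a + b)
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens, vocab

# Toy usage: fragment a short sequence into multi-letter "words".
tokens, vocab = byte_pair_encode("ATGCGATGCGATGCATAT", num_merges=4)
print(tokens)  # ['ATGCG', 'ATGCG', 'ATGC', 'AT', 'AT']
```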

The Promise of AI in Genomics

GROVER promises to unlock the different layers of genetic code. DNA holds key information on what makes us human, our disease predispositions, and our responses to treatments.

“We believe that understanding the rules of DNA through a language model is going to help us uncover the depths of biological meaning hidden in the DNA, advancing both genomics and personalized medicine,” says Dr. Poetsch.

About this AI and genetics research news

Author: Benjamin Griebe
Source: TUD
Contact: Benjamin Griebe – TUD
Image: The image is credited to Neuroscience News

Original Research: Open access.
“DNA language model GROVER learns sequence context in the human genome” by Anna Poetsch et al. Nature Machine Intelligence


Abstract

DNA language model GROVER learns sequence context in the human genome

Deep-learning models that learn a sense of language on DNA have achieved a high level of performance on genome biological tasks. Genome sequences follow rules similar to natural language but are distinct in the absence of a concept of words.

We established byte-pair encoding on the human genome and trained a foundation language model called GROVER (Genome Rules Obtained Via Extracted Representations) with the vocabulary selected via a custom task, next-k-mer prediction.

The defined dictionary of tokens in the human genome carries best the information content for GROVER. Analysing learned representations, we observed that trained token embeddings primarily encode information related to frequency, sequence content and length.

Some tokens are primarily localized in repeats, whereas the majority widely distribute over the genome. GROVER also learns context and lexical ambiguity. Average trained embeddings of genomic regions relate to functional genomics annotation and thus indicate learning of these structures purely from the contextual relationships of tokens.

This highlights the extent of information content encoded by the sequence that can be grasped by GROVER.

On fine-tuning tasks addressing genome biology with questions of genome element identification and protein–DNA binding, GROVER exceeds other models’ performance. GROVER learns sequence context, a sense for structure and language rules. Extracting this knowledge can be used to compose a grammar book for the code of life.
