Model:
InstaDeepAI/nucleotide-transformer-500m-human-ref
The Nucleotide Transformers are a collection of foundational language models pre-trained on DNA sequences from whole genomes. Compared to other approaches, our models do not only integrate information from single reference genomes but also leverage DNA sequences from over 3,200 diverse human genomes, as well as 850 genomes from a wide range of species, including model and non-model organisms. Through robust and extensive evaluation, we show that these large models provide extremely accurate molecular phenotype prediction compared to existing methods.
Part of this collection is nucleotide-transformer-500m-human-ref, a 500M-parameter transformer pre-trained on the human reference genome. The model is made available in both TensorFlow and PyTorch.
Developed by: InstaDeep, NVIDIA and TUM
Until its next release, the transformers library needs to be installed from source with the following command in order to use the models:
pip install --upgrade git+https://github.com/huggingface/transformers.git
A small snippet of code is given here in order to retrieve both logits and embeddings from a dummy DNA sequence.
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch

# Import the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained("InstaDeepAI/nucleotide-transformer-500m-human-ref")
model = AutoModelForMaskedLM.from_pretrained("InstaDeepAI/nucleotide-transformer-500m-human-ref")

# Create a dummy DNA sequence and tokenize it
sequences = ["ATTCTG" * 9]
tokens_ids = tokenizer.batch_encode_plus(sequences, return_tensors="pt")["input_ids"]

# Run the model, requesting the hidden states of every layer
attention_mask = tokens_ids != tokenizer.pad_token_id
torch_outs = model(
    tokens_ids,
    attention_mask=attention_mask,
    encoder_attention_mask=attention_mask,
    output_hidden_states=True,
)

# Per-token embeddings from the last hidden layer
embeddings = torch_outs["hidden_states"][-1].detach()
print(f"Embeddings shape: {embeddings.shape}")
print(f"Embeddings per token: {embeddings}")

# Compute mean embeddings per sequence, ignoring padded positions
attention_mask = attention_mask.unsqueeze(-1)
mean_sequence_embeddings = torch.sum(attention_mask * embeddings, dim=-2) / torch.sum(attention_mask, dim=-2)
print(f"Mean sequence embeddings: {mean_sequence_embeddings}")
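The snippet above prints embeddings; the masked-language-modeling logits mentioned earlier come out of the same forward pass. A minimal follow-up sketch, reusing torch_outs from the code above:

# Logits over the tokenizer vocabulary, one distribution per input position
logits = torch_outs["logits"]                  # shape: (batch, sequence_length, vocab_size)
probabilities = torch.softmax(logits, dim=-1)  # per-position token probabilities
print(f"Logits shape: {logits.shape}")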
The nucleotide-transformer-500m-human-ref model was pretrained on the GRCh38 human reference genome, which is available as a Hugging Face dataset and consists of 3B nucleotides, amounting to roughly 500M 6-mer tokens.
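As a quick sanity check on those numbers, tokenizing roughly 3B nucleotides mostly into 6-mers gives on the order of 500M tokens:

# Back-of-the-envelope check of the token count quoted above
total_nucleotides = 3_000_000_000   # ~3B nucleotides in GRCh38
nucleotides_per_token = 6           # 6-mer tokens
print(total_nucleotides // nucleotides_per_token)  # 500_000_000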
The DNA sequences are tokenized using the Nucleotide Transformer Tokenizer, which tokenizes sequences as 6-mers when possible and otherwise tokenizes each nucleotide separately, as described in the Tokenization section of the associated repository. This tokenizer has a vocabulary size of 4,105. The inputs of the model then take the form:
<CLS> <ACGTGT> <ACGTGC> <ACGGAC> <GACTAG> <TCAGCA>
The tokenized sequences have a maximum length of 1,000 tokens.
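A minimal sketch of that tokenization, reusing the tokenizer loaded in the snippet above; the printed token strings (including the special-token casing) are illustrative:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("InstaDeepAI/nucleotide-transformer-500m-human-ref")
print(tokenizer.vocab_size)  # 4105, the vocabulary size quoted above

# A 12-nucleotide sequence splits cleanly into two 6-mers, preceded by the class token
ids = tokenizer("ACGTGTACGTGC")["input_ids"]
print(tokenizer.convert_ids_to_tokens(ids))  # e.g. ['<CLS>', 'ACGTGT', 'ACGTGC']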
The masking procedure used is the standard one for BERT-style training: 15% of the tokens are masked; in 80% of the cases, the masked tokens are replaced by [MASK], in 10% of the cases they are replaced by a random token different from the one they replace, and in the remaining 10% of the cases they are left as is.
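As an illustration only (not the exact pipeline used to pre-train the model), Hugging Face's generic masked-LM collator implements this same 80/10/10 scheme, assuming the tokenizer exposes a mask token:

from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("InstaDeepAI/nucleotide-transformer-500m-human-ref")
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.15,  # 15% of tokens selected for masking
)

encoded = tokenizer(["ATTCTG" * 9], return_tensors="pt")
batch = collator([{"input_ids": encoded["input_ids"][0]}])
print(batch["input_ids"])  # inputs with most selected tokens replaced by the mask token
print(batch["labels"])     # original ids at masked positions, -100 elsewhere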
The model was trained with 8 A100 80GB GPUs on 300B tokens, with an effective batch size of 1M tokens. The sequence length used was 1,000 tokens. The Adam optimizer was used with a learning rate schedule and standard values for the exponential decay rates and epsilon constant, β1 = 0.9, β2 = 0.999 and ε = 1e-8. During a first warmup period, the learning rate was increased linearly from 5e-5 to 1e-4 over 16k steps, before decreasing following a square-root decay until the end of training.
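For illustration, a hedged sketch of that learning-rate schedule; the exact post-warmup decay formula is not given here, so the inverse-square-root form below is an assumption:

import math

def learning_rate(step, lr_start=5e-5, lr_peak=1e-4, warmup_steps=16_000):
    if step < warmup_steps:
        # Linear warmup between lr_start and lr_peak over the first 16k steps
        return lr_start + (lr_peak - lr_start) * step / warmup_steps
    # Square-root decay after the warmup period (assumed functional form)
    return lr_peak * math.sqrt(warmup_steps / step)

print(learning_rate(0))       # 5e-05
print(learning_rate(16_000))  # 1e-04 (peak)
print(learning_rate(64_000))  # 5e-05 (peak halved after 4x the warmup steps)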
@article{dalla2023nucleotide,
  title={The Nucleotide Transformer: Building and Evaluating Robust Foundation Models for Human Genomics},
  author={Dalla-Torre, Hugo and Gonzalez, Liam and Mendoza Revilla, Javier and Lopez Carranza, Nicolas and Henryk Grywaczewski, Adam and Oteri, Francesco and Dallago, Christian and Trop, Evan and Sirelkhatim, Hassan and Richard, Guillaume and others},
  journal={bioRxiv},
  pages={2023--01},
  year={2023},
  publisher={Cold Spring Harbor Laboratory}
}