Pretrained model on the English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between english and English.
Disclaimer: The team releasing RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team.
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the RoBERTa model as inputs.
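As a minimal sketch of that feature-extraction workflow, assuming a toy labeled dataset and scikit-learn for the downstream classifier (both are illustrative choices, not part of this model card):

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('roberta-large')
model = RobertaModel.from_pretrained('roberta-large')

# Hypothetical labeled sentences, purely for illustration
sentences = ["I loved this movie.", "This was a terrible film."]
labels = [1, 0]

with torch.no_grad():
    encoded = tokenizer(sentences, padding=True, return_tensors='pt')
    # Use the hidden state of the <s> token as a sentence-level feature vector
    features = model(**encoded).last_hidden_state[:, 0, :].numpy()

# Any standard classifier can be trained on top of these features
classifier = LogisticRegression(max_iter=1000).fit(features, labels)
```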
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at a model like GPT2.
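As an illustrative sketch (not a full training loop), the model can be loaded with a freshly initialized task head and then fine-tuned on your own labeled data; `num_labels=2` below is an arbitrary example choice:

```python
from transformers import RobertaTokenizer, RobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained('roberta-large')
# The classification head is newly initialized and must be fine-tuned before use
model = RobertaForSequenceClassification.from_pretrained('roberta-large', num_labels=2)

inputs = tokenizer("A sentence the fine-tuned model would classify.", return_tensors='pt')
logits = model(**inputs).logits  # shape (1, 2); meaningless until fine-tuning
```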
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='roberta-large')
>>> unmasker("Hello I'm a <mask> model.")

[{'sequence': "<s>Hello I'm a male model.</s>",
  'score': 0.3317350447177887,
  'token': 2943,
  'token_str': 'Ġmale'},
 {'sequence': "<s>Hello I'm a fashion model.</s>",
  'score': 0.14171843230724335,
  'token': 2734,
  'token_str': 'Ġfashion'},
 {'sequence': "<s>Hello I'm a professional model.</s>",
  'score': 0.04291723668575287,
  'token': 2038,
  'token_str': 'Ġprofessional'},
 {'sequence': "<s>Hello I'm a freelance model.</s>",
  'score': 0.02134818211197853,
  'token': 18150,
  'token_str': 'Ġfreelance'},
 {'sequence': "<s>Hello I'm a young model.</s>",
  'score': 0.021098261699080467,
  'token': 664,
  'token_str': 'Ġyoung'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('roberta-large')
model = RobertaModel.from_pretrained('roberta-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel

tokenizer = RobertaTokenizer.from_pretrained('roberta-large')
model = TFRobertaModel.from_pretrained('roberta-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='roberta-large')
>>> unmasker("The man worked as a <mask>.")

[{'sequence': '<s>The man worked as a mechanic.</s>',
  'score': 0.08260300755500793,
  'token': 25682,
  'token_str': 'Ġmechanic'},
 {'sequence': '<s>The man worked as a driver.</s>',
  'score': 0.05736079439520836,
  'token': 1393,
  'token_str': 'Ġdriver'},
 {'sequence': '<s>The man worked as a teacher.</s>',
  'score': 0.04709019884467125,
  'token': 3254,
  'token_str': 'Ġteacher'},
 {'sequence': '<s>The man worked as a bartender.</s>',
  'score': 0.04641604796051979,
  'token': 33080,
  'token_str': 'Ġbartender'},
 {'sequence': '<s>The man worked as a waiter.</s>',
  'score': 0.04239227622747421,
  'token': 38233,
  'token_str': 'Ġwaiter'}]

>>> unmasker("The woman worked as a <mask>.")

[{'sequence': '<s>The woman worked as a nurse.</s>',
  'score': 0.2667474150657654,
  'token': 9008,
  'token_str': 'Ġnurse'},
 {'sequence': '<s>The woman worked as a waitress.</s>',
  'score': 0.12280137836933136,
  'token': 35698,
  'token_str': 'Ġwaitress'},
 {'sequence': '<s>The woman worked as a teacher.</s>',
  'score': 0.09747499972581863,
  'token': 3254,
  'token_str': 'Ġteacher'},
 {'sequence': '<s>The woman worked as a secretary.</s>',
  'score': 0.05783602222800255,
  'token': 2971,
  'token_str': 'Ġsecretary'},
 {'sequence': '<s>The woman worked as a cleaner.</s>',
  'score': 0.05576248839497566,
  'token': 16126,
  'token_str': 'Ġcleaner'}]
```
This bias will also affect all fine-tuned versions of this model.
The RoBERTa model was pretrained on the combination of five datasets:

- BookCorpus, a dataset consisting of 11,038 unpublished books;
- English Wikipedia (excluding lists, tables and headers);
- CC-News, a dataset containing 63 million English news articles crawled between September 2016 and February 2019;
- OpenWebText, an open-source recreation of the WebText dataset used to train GPT-2;
- Stories, a dataset containing a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas.

Together these datasets weigh 160GB of text.
The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of the model take pieces of 512 contiguous tokens, which may span over documents. The beginning of a new document is marked with `<s>` and the end of one by `</s>`.
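A small sketch of what that input format looks like through the tokenizer (the exact token strings in the comments are indicative, not guaranteed):

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('roberta-large')

encoded = tokenizer("Hello world!")
print(tokenizer.convert_ids_to_tokens(encoded['input_ids']))
# Something like ['<s>', 'Hello', 'Ġworld', '!', '</s>'] -- the sequence is
# wrapped in the <s> ... </s> markers described above
print(tokenizer.bos_token, tokenizer.eos_token)  # <s> </s>
```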
The details of the masking procedure for each sentence are the following:
15% of the tokens are masked.
In 80% of the cases, the masked tokens are replaced by `<mask>`.
In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
In the 10% remaining cases, the masked tokens are left as is.
Contrary to BERT, the masking is done dynamically during pretraining (i.e., it changes at each epoch and is not fixed); a short sketch of this rule follows the list.
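This 80/10/10 dynamic masking rule is what `DataCollatorForLanguageModeling` from `transformers` applies when `mlm_probability=0.15`, so a minimal sketch of reproducing it could look like this (the example sentence is illustrative):

```python
from transformers import RobertaTokenizer, DataCollatorForLanguageModeling

tokenizer = RobertaTokenizer.from_pretrained('roberta-large')

# The collator re-samples the mask every time it is called, which is what makes
# the masking dynamic rather than fixed once at preprocessing time.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

examples = [tokenizer("Hello, my dog is cute.")]  # illustrative single example
batch = collator(examples)
print(batch['input_ids'])  # some positions replaced by <mask> or a random token
print(batch['labels'])     # -100 everywhere except at the selected positions
```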
The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The optimizer used is Adam with a learning rate of 4e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and \\(\epsilon = 1e-6\\), a weight decay of 0.01, learning rate warmup for 30,000 steps and linear decay of the learning rate after.
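For reference, a rough equivalent of that optimizer and schedule in PyTorch/`transformers` might look like the sketch below. It uses `AdamW` and `get_linear_schedule_with_warmup` as stand-ins; the original training was done in fairseq, so this is not the exact configuration used:

```python
import torch
from transformers import RobertaForMaskedLM, get_linear_schedule_with_warmup

model = RobertaForMaskedLM.from_pretrained('roberta-large')

# Hyperparameters as reported above
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=4e-4,
    betas=(0.9, 0.98),
    eps=1e-6,
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=30_000,
    num_training_steps=500_000,
)
```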
When fine-tuned on downstream tasks, this model achieves the following results:
GLUE test results:
| Task | MNLI | QQP  | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE  |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
|      | 90.2 | 92.2 | 94.7 | 96.4  | 68.0 | 96.4  | 90.9 | 86.6 |
```bibtex
@article{DBLP:journals/corr/abs-1907-11692,
  author        = {Yinhan Liu and Myle Ott and Naman Goyal and Jingfei Du and
                   Mandar Joshi and Danqi Chen and Omer Levy and Mike Lewis and
                   Luke Zettlemoyer and Veselin Stoyanov},
  title         = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach},
  journal       = {CoRR},
  volume        = {abs/1907.11692},
  year          = {2019},
  url           = {http://arxiv.org/abs/1907.11692},
  archivePrefix = {arXiv},
  eprint        = {1907.11692},
  timestamp     = {Thu, 01 Aug 2019 08:59:33 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```