Model:
roberta-large-mnli
Model Description: roberta-large-mnli is the RoBERTa large model fine-tuned on the Multi-Genre Natural Language Inference (MNLI) corpus. RoBERTa is a transformer model pretrained on English-language text using a masked language modeling (MLM) objective.
Use the code below to get started with the model. The model can be loaded with the zero-shot-classification pipeline like so:
```python
from transformers import pipeline

classifier = pipeline('zero-shot-classification', model='roberta-large-mnli')
```
You can then use this pipeline to classify sequences into any of the class names you specify. For example:
sequence_to_classify = "one day I will see the world" candidate_labels = ['travel', 'cooking', 'dancing'] classifier(sequence_to_classify, candidate_labels)
This fine-tuned model can be used for zero-shot classification tasks, including zero-shot sentence-pair classification (see the GitHub repo for examples) and zero-shot sequence classification.
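For zero-shot sentence-pair classification, the model can also be called directly as an NLI classifier on a premise/hypothesis pair. A minimal sketch (the premise and hypothesis are arbitrary examples; the label names and their order should be read from model.config.id2label rather than assumed):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('roberta-large-mnli')
model = AutoModelForSequenceClassification.from_pretrained('roberta-large-mnli')

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the premise/hypothesis pair as a single input sequence
inputs = tokenizer(premise, hypothesis, return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and report them per NLI label
probs = logits.softmax(dim=-1)[0]
for idx, label in model.config.id2label.items():
    print(f"{label}: {probs[idx].item():.3f}")
```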
Misuse and Out-of-scope Use

The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to produce factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for its abilities.
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). The RoBERTa large model card notes that: "The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral."
Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
sequence_to_classify = "The CEO had a strong handshake." candidate_labels = ['male', 'female'] hypothesis_template = "This text speaks about a {} profession." classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template)
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
This model was fine-tuned on the Multi-Genre Natural Language Inference (MNLI) corpus. Also see the MNLI data card for more information.
As described in the RoBERTa large model card:

The RoBERTa model was pretrained on the union of five datasets:

- BookCorpus, a dataset consisting of 11,038 unpublished books;
- English Wikipedia (excluding lists, tables and headers);
- CC-News, a dataset containing 63 million English news articles crawled between September 2016 and February 2019;
- OpenWebText, an open-source recreation of the WebText dataset used to train GPT-2;
- Stories, a dataset containing a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas.

Together these datasets weigh 160GB of text.
Also see the bookcorpus data card and the wikipedia data card for additional information.
Training Procedure

Preprocessing

As described in the RoBERTa large model card:
The texts are tokenized using a byte-level version of Byte-Pair Encoding (BPE) with a vocabulary size of 50,000. The model's inputs take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked with <s> and the end of one by </s>.
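To make the quoted setup concrete, the tokenizer and its document-boundary tokens can be inspected directly. A small sketch (the example sentence is arbitrary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('roberta-large-mnli')

print(tokenizer.bos_token, tokenizer.eos_token)  # <s> </s>

encoded = tokenizer("one day I will see the world")
print(encoded.input_ids)  # begins with <s> (id 0) and ends with </s> (id 2)
print(tokenizer.convert_ids_to_tokens(encoded.input_ids))
```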
The details of the masking procedure for each sentence are the following:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by <mask>.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

Contrary to BERT, the masking is done dynamically during pretraining (i.e., it changes at each epoch and is not fixed). A sketch of this scheme follows the list below.
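The data collator shipped with the transformers library implements the same 15% masking rate with the 80/10/10 split and re-samples the masked positions on every call, i.e. dynamically per batch (a toy example, not the original pretraining pipeline):

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained('roberta-large-mnli')

# mlm_probability=0.15 masks 15% of tokens; of those, 80% become <mask>,
# 10% become a random token, and 10% are left unchanged.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

examples = [tokenizer("one day I will see the world")]
batch = collator(examples)  # masking is re-sampled on every call
print(batch['input_ids'])
print(batch['labels'])  # -100 at unmasked positions, original ids at masked ones
```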
Pretraining

Also as described in the RoBERTa large model card:
The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The optimizer used is Adam with a learning rate of 4e-4, $\beta_{1} = 0.9$, $\beta_{2} = 0.98$ and $\epsilon = 10^{-6}$, a weight decay of 0.01, learning rate warmup for 30,000 steps and linear decay of the learning rate after.
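A hedged sketch of the corresponding warmup-plus-linear-decay schedule, using the scheduler utility from the transformers library (the parameters below are placeholders standing in for the RoBERTa model, and AdamW approximates the decoupled weight decay described above):

```python
import torch
from transformers import get_linear_schedule_with_warmup

# Placeholder parameters; in the actual pretraining these would be RoBERTa's weights.
params = [torch.nn.Parameter(torch.zeros(1))]

optimizer = torch.optim.AdamW(params, lr=4e-4, betas=(0.9, 0.98), eps=1e-6, weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=30_000,     # learning rate warmup for 30,000 steps
    num_training_steps=500_000,  # then linear decay over the 500K total steps
)
```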
The following evaluation information is extracted from the associated GitHub repo for RoBERTa.
Testing Data, Factors and Metrics

The model developers report that the model was evaluated on the following tasks and datasets using the listed metrics:
Dataset: Part of GLUE (Wang et al., 2019), the General Language Understanding Evaluation benchmark, a collection of 9 datasets for evaluating natural language understanding systems. Specifically, the model was evaluated on the Multi-Genre Natural Language Inference (MNLI) corpus. See the GLUE data card or Wang et al. (2019) for further information.
The Multi-Genre Natural Language Inference Corpus (Williams et al., 2018) is a crowd-sourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. We use the standard test set, for which we obtained private labels from the authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. We also use and recommend the SNLI corpus (Bowman et al., 2015) as 550k examples of auxiliary training data.
Dataset: XNLI (Conneau et al., 2018), the extension of the Multi-Genre Natural Language Inference (MNLI) corpus to 15 languages: English, French, Spanish, German, Greek, Bulgarian, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, Hindi, Swahili and Urdu. See the XNLI data card or Conneau et al. (2018) for further information.
GLUE test results (dev set, single model, single-task fine-tuning): 90.2 on MNLI
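A minimal sketch of reproducing a matched-accuracy check with the datasets library (assumptions: MNLI encodes labels as 0=entailment, 1=neutral, 2=contradiction, while this model's head orders them contradiction/neutral/entailment; the remapping goes through the label names in model.config.id2label, and only a small sample is scored for illustration):

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

dataset = load_dataset('glue', 'mnli', split='validation_matched')
tokenizer = AutoTokenizer.from_pretrained('roberta-large-mnli')
model = AutoModelForSequenceClassification.from_pretrained('roberta-large-mnli').eval()

# Align the model's label ids with MNLI's label ids via the label names.
name_to_mnli = {'ENTAILMENT': 0, 'NEUTRAL': 1, 'CONTRADICTION': 2}
model_to_mnli = {i: name_to_mnli[name.upper()] for i, name in model.config.id2label.items()}

sample = dataset.select(range(100))  # small sample for illustration
correct = 0
for ex in sample:
    inputs = tokenizer(ex['premise'], ex['hypothesis'], truncation=True, return_tensors='pt')
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(dim=-1).item()
    correct += int(model_to_mnli[pred] == ex['label'])

print(f"Matched accuracy on the sample: {correct / len(sample):.3f}")
```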
XNLI test results:
Task | en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Accuracy | 91.3 | 82.91 | 84.27 | 81.24 | 81.74 | 83.13 | 78.28 | 76.79 | 76.64 | 74.17 | 74.05 | 77.5 | 70.9 | 66.65 | 66.81 |
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and hours used based on the associated paper.
See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.
```bibtex
@article{liu2019roberta,
  title   = {RoBERTa: A Robustly Optimized BERT Pretraining Approach},
  author  = {Yinhan Liu and Myle Ott and Naman Goyal and Jingfei Du and Mandar Joshi and Danqi Chen and Omer Levy and Mike Lewis and Luke Zettlemoyer and Veselin Stoyanov},
  journal = {arXiv preprint arXiv:1907.11692},
  year    = {2019},
}
```