Model:
bigscience/T0_3B
How do I pronounce the name of the model? T0 should be pronounced "T Zero" (like in "T5 for zero-shot") and any "p" stands for "Plus", so "T0pp" should be pronounced "T Zero Plus Plus"!
Official repository: bigscience-workshop/t-zero
T0* shows zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller. It is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates using varying formulations. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks.
You can use the models to perform inference on tasks by specifying your query in natural language, and the models will generate a prediction. For instance, you can ask "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", and the model will hopefully generate "Positive".
We make available the models presented in our paper along with the ablation models. We recommend using the T0pp (pronounced "T Zero Plus Plus") checkpoint, as it leads (on average) to the best performance on a variety of NLP tasks.
| Model | Number of parameters |
|---|---|
| T0 | 11 billion |
| T0p | 11 billion |
| T0pp | 11 billion |
| T0_single_prompt | 11 billion |
| T0_original_task_only | 11 billion |
| T0_3B | 3 billion |
Here is how to use the model in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp")

inputs = tokenizer.encode("Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
If you want to use another checkpoint, please replace the path in `AutoTokenizer` and `AutoModelForSeq2SeqLM`.
Note: the model was trained with bf16 activations. As such, we highly discourage running inference with fp16. fp32 or bf16 should be preferred.
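For example, here is a minimal sketch of loading the weights directly in bfloat16 (assuming recent transformers and PyTorch versions and an accelerator with bf16 support):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the checkpoint in bfloat16, matching the training activations,
# rather than float16; use float32 on hardware without bf16 support.
tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B", torch_dtype=torch.bfloat16)
model = model.to("cuda")  # bf16 inference typically runs on an accelerator

inputs = tokenizer(
    "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy",
    return_tensors="pt",
).to("cuda")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```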
T0* models are based on T5, a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on C4. We use the publicly available language model-adapted T5 checkpoints, which were produced by training T5 for 100,000 additional steps with a standard language modeling objective.
At a high level, the input text is fed to the encoder and the target text is produced by the decoder. The model is fine-tuned to autoregressively generate the target through standard maximum likelihood training. It is never trained to generate the input. We detail our training data in the next section.
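As an illustration of that objective, here is a minimal sketch of a single maximum-likelihood fine-tuning step using the transformers API; the prompt/target pair is made up and the optimizer step is omitted, so this shows the mechanics rather than the actual training setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")

# A prompted (input, target) pair; only the target is ever generated by the decoder.
prompt = "Is this review positive or negative? Review: great skillet, buy it."
target = "Positive"

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
labels = tokenizer(target, return_tensors="pt").input_ids

# The encoder reads the prompt; the decoder is trained to produce the target
# autoregressively. The returned loss is the token-level cross-entropy
# (negative log-likelihood) over the target tokens only.
loss = model(input_ids=input_ids, labels=labels).loss
loss.backward()  # in real training, an optimizer step would follow
```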
Training details:
We trained different variants of T0 with different mixtures of datasets.
| Model | Training datasets |
|---|---|
| T0 | - Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ, Wiki Hop<br>- Extractive QA: Adversarial QA, Quoref, DuoRC, ROPES<br>- Closed-Book QA: Hotpot QA*, Wiki QA<br>- Structure-To-Text: Common Gen, Wiki Bio<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Summarization: CNN Daily Mail, Gigaword, MultiNews, SamSum, XSum<br>- Topic Classification: AG News, DBPedia, TREC<br>- Paraphrase Identification: MRPC, PAWS, QQP |
| T0p | Same as T0 with additional datasets from GPT-3's evaluation suite:<br>- Multiple-Choice QA: ARC, OpenBook QA, PiQA, RACE, HellaSwag<br>- Extractive QA: SQuAD v2<br>- Closed-Book QA: Trivia QA, Web Questions |
| T0pp | Same as T0p with a few additional datasets from SuperGLUE (excluding NLI sets):<br>- BoolQ<br>- COPA<br>- MultiRC<br>- ReCoRD<br>- WiC<br>- WSC |
| T0_single_prompt | Same as T0 but only one prompt per training dataset |
| T0_original_task_only | Same as T0 but only original-task templates |
| T0_3B | Same as T0 but starting from a T5-LM XL (3B parameters) pre-trained model |
For reproducibility, we release the data we used for training (and evaluation) in the P3 dataset. Prompt examples can be found on the dataset page.
*: We recast Hotpot QA as closed-book QA due to long input sequence length.
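For instance, here is a minimal sketch of pulling one prompted dataset out of P3 with the datasets library; the choice of configuration and the inputs_pretokenized/targets_pretokenized column names follow the P3 layout but should be verified against the dataset page, and this assumes a datasets version that can still load this dataset.

```python
from datasets import get_dataset_config_names, load_dataset

# Each P3 configuration corresponds to one (dataset, prompt template) pair;
# list the available configurations rather than hard-coding a name.
configs = get_dataset_config_names("bigscience/P3")
print(len(configs), configs[:5])

# Load one prompted dataset (the pick below is arbitrary; any name printed
# above works the same way).
ds = load_dataset("bigscience/P3", configs[0], split="train")
print(ds[0]["inputs_pretokenized"])   # the natural-language prompt
print(ds[0]["targets_pretokenized"])  # the target text
```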
We evaluate our models on a suite of held-out tasks:
| Task category | Datasets |
|---|---|
| Natural language inference | ANLI, CB, RTE |
| Coreference resolution | WSC, Winogrande |
| Word sense disambiguation | WiC |
| Sentence completion | COPA, HellaSwag, Story Cloze |
We also evaluate T0, T0p and T0pp on a subset of the BIG-bench benchmark.
Even though we made deliberate decisions to exclude datasets with potentially harmful content from the fine-tuning, the trained models are not bias-free. Based on a few experiments, T0pp can generate answers that could be categorized as conspiracist, biased, offensive or over-emphasizing sexual topics.
Language models can reproduce undesirable social biases represented in the large corpus they are pre-trained on. We evaluate our models in two ways: first in their ability to recognize or label gender biases and second in the extent to which they reproduce those biases.
To measure the ability of our model to recognize gender biases, we evaluate our models using the WinoGender Schemas (also called AX-g under SuperGLUE) and CrowS-Pairs. WinoGender Schemas are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias. We use the Diverse Natural Language Inference Collection (Poliak et al., 2018) version that casts WinoGender as a textual entailment task and report accuracy. CrowS-Pairs is a challenge dataset for measuring the degree to which U.S. stereotypical biases are present in masked language models, using minimal pairs of sentences. We re-formulate the task by predicting which of two sentences is stereotypical (or anti-stereotypical) and report accuracy. For each dataset, we evaluate between 5 and 10 prompts.
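As an illustration of this kind of choice-based scoring, here is a minimal sketch that ranks two candidate answers by their log-likelihood under the model; the prompt wording, the placeholder sentence pair, and the choice of the T0_3B checkpoint are illustrative assumptions, not the exact templates or evaluation harness used in the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")
model.eval()

def sequence_log_likelihood(prompt: str, answer: str) -> float:
    """Sum of token log-probabilities of `answer` given `prompt`."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    labels = tokenizer(answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids=input_ids, labels=labels).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    token_log_probs = log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    return token_log_probs.sum().item()

# Hypothetical minimal-pair prompt in the CrowS-Pairs style (not a real dataset item).
prompt = ("Which of these two sentences reflects a stereotype?\n"
          "Sentence 1: ...\nSentence 2: ...\nAnswer:")
choices = ["Sentence 1", "Sentence 2"]
scores = [sequence_log_likelihood(prompt, c) for c in choices]
prediction = choices[scores.index(max(scores))]
print(prediction)
```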
| Dataset | Model | Average (Acc.) | Median (Acc.) |
|---|---|---|---|
| CrowS-Pairs | T0 | 59.2 | 83.8 |
| | T0p | 57.6 | 83.8 |
| | T0pp | 62.7 | 64.4 |
| | T0_single_prompt | 57.6 | 69.5 |
| | T0_original_task_only | 47.1 | 37.8 |
| | T0_3B | 56.9 | 82.6 |
| WinoGender | T0 | 84.2 | 84.3 |
| | T0p | 80.1 | 80.6 |
| | T0pp | 89.2 | 90.0 |
| | T0_single_prompt | 81.6 | 84.6 |
| | T0_original_task_only | 83.7 | 83.8 |
| | T0_3B | 69.7 | 69.4 |
To measure the extent to which our model reproduces gender biases, we evaluate our models using the WinoBias Schemas. WinoBias Schemas are pronoun coreference resolution tasks that have the potential to be influenced by gender bias. WinoBias has two types of schemas (type 1 and type 2), each partitioned into pro-stereotype and anti-stereotype subsets. A "pro-stereotype" example is one where the correct answer conforms to stereotypes, while an "anti-stereotype" example is one where it opposes stereotypes. All examples have an unambiguously correct answer, so the difference in scores between the "pro-" and "anti-" subsets measures the extent to which stereotypes can lead the model astray. We report accuracies by considering a prediction correct if the target noun is present in the model's prediction (as in the sketch below). We evaluate on 6 prompts.
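One way to read that matching criterion is a case-insensitive substring check; the exact rule sketched below is an assumption, shown only to make the accuracy computation concrete.

```python
def winobias_is_correct(prediction: str, target_noun: str) -> bool:
    # One reading of the criterion above: the prediction counts as correct
    # if it contains the target noun (case-insensitive substring match).
    return target_noun.lower() in prediction.lower()

# Hypothetical illustration, not a real WinoBias item.
print(winobias_is_correct("The pronoun refers to the developer.", "developer"))  # True
print(winobias_is_correct("It refers to the customer.", "developer"))            # False
```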
| Model | Subset | Average Acc.: Pro | Average Acc.: Anti | Average Acc.: Pro - Anti | Median Acc.: Pro | Median Acc.: Anti | Median Acc.: Pro - Anti |
|---|---|---|---|---|---|---|---|
| T0 | Type 1 | 68.0 | 61.9 | 6.0 | 71.7 | 61.9 | 9.8 |
| | Type 2 | 79.3 | 76.4 | 2.8 | 79.3 | 75.0 | 4.3 |
| T0p | Type 1 | 66.6 | 57.2 | 9.4 | 71.5 | 62.6 | 8.8 |
| | Type 2 | 77.7 | 73.4 | 4.3 | 86.1 | 81.3 | 4.8 |
| T0pp | Type 1 | 63.8 | 55.9 | 7.9 | 72.7 | 63.4 | 9.3 |
| | Type 2 | 66.8 | 63.0 | 3.9 | 79.3 | 74.0 | 5.3 |
| T0_single_prompt | Type 1 | 73.7 | 60.5 | 13.2 | 79.3 | 60.6 | 18.7 |
| | Type 2 | 77.7 | 69.6 | 8.0 | 80.8 | 69.7 | 11.1 |
| T0_original_task_only | Type 1 | 78.1 | 67.7 | 10.4 | 81.8 | 67.2 | 14.6 |
| | Type 2 | 85.2 | 82.3 | 2.9 | 89.6 | 85.4 | 4.3 |
| T0_3B | Type 1 | 82.3 | 70.1 | 12.2 | 83.6 | 62.9 | 20.7 |
| | Type 2 | 83.8 | 76.5 | 7.3 | 85.9 | 75.0 | 10.9 |
```bibtex
@misc{sanh2021multitask,
  title={Multitask Prompted Training Enables Zero-Shot Task Generalization},
  author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Stella Biderman and Leo Gao and Tali Bers and Thomas Wolf and Alexander M. Rush},
  year={2021},
  eprint={2110.08207},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```