Model:
albert-base-v2
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. As with all ALBERT models, this model is uncased: it does not make a difference between english and English.
Disclaimer: the team releasing ALBERT did not write a model card for this model, so this model card has been written by the Hugging Face team.
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process generating inputs and labels from those texts. More precisely, it was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This allows the model to learn a bidirectional representation of the sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs.
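For instance, a minimal sketch of that workflow might look as follows; the toy texts and labels and the choice of scikit-learn's `LogisticRegression` as the downstream classifier are illustrative assumptions, not part of the original card:

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AlbertModel, AlbertTokenizer

# Toy labelled dataset; replace with your own sentences and labels.
texts = ["I loved this movie.", "This film was a waste of time."]
labels = [1, 0]

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertModel.from_pretrained("albert-base-v2")
model.eval()

# Extract a fixed-size feature vector per sentence (the pooled [CLS] output).
with torch.no_grad():
    encoded = tokenizer(texts, padding=True, return_tensors="pt")
    features = model(**encoded).pooler_output.numpy()

# Train any standard classifier on top of the ALBERT features.
classifier = LogisticRegression().fit(features, labels)
print(classifier.predict(features))
```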
ALBERT is particular in that it shares its layers across its Transformer: all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers.
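A quick way to see the effect of this weight sharing is to compare parameter counts with a BERT model of the same depth. The sketch below assumes the attribute layout of the current transformers implementation (a single shared `albert_layer_groups` entry that is iterated `num_hidden_layers` times):

```python
from transformers import AlbertModel, BertModel

albert = AlbertModel.from_pretrained("albert-base-v2")
bert = BertModel.from_pretrained("bert-base-uncased")

def count_params(model):
    return sum(p.numel() for p in model.parameters())

# The config asks for 12 hidden layers at inference time...
print(albert.config.num_hidden_layers)          # 12
# ...but the weights are stored in a single shared layer group.
print(len(albert.encoder.albert_layer_groups))  # 1

# The shared weights give ALBERT a much smaller memory footprint than BERT-base.
print(f"albert-base-v2:    {count_params(albert) / 1e6:.1f}M parameters")
print(f"bert-base-uncased: {count_params(bert) / 1e6:.1f}M parameters")
```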
This is version 2 of the base model. Version 2 differs from version 1 due to different dropout rates, additional training data and longer training; it achieves better results on nearly all downstream tasks.
This model has the following configuration:

- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
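These values can also be read straight from the configuration on the hub; a short sketch:

```python
from transformers import AlbertConfig

config = AlbertConfig.from_pretrained("albert-base-v2")
print(config.num_hidden_layers)    # repeating layers
print(config.embedding_size)       # embedding dimension
print(config.hidden_size)          # hidden dimension
print(config.num_attention_heads)  # attention heads
```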
You can use the raw model for either masked language modeling or next sentence prediction, but it is mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions of a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.
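As a sketch of that fine-tuning setup (the `num_labels=2` classification head is only an example, not part of the original card):

```python
from transformers import AlbertForSequenceClassification, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
# Loads the pretrained backbone and adds a freshly initialised classification head;
# set num_labels to match your task.
model = AlbertForSequenceClassification.from_pretrained("albert-base-v2", num_labels=2)

# The model can now be fine-tuned on a labelled dataset, e.g. with the Trainer API
# or a plain PyTorch training loop.
```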
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-base-v2')
>>> unmasker("Hello I'm a [MASK] model.")
[
    {"sequence": "[CLS] hello i'm a modeling model.[SEP]", "score": 0.05816134437918663, "token": 12807, "token_str": "▁modeling"},
    {"sequence": "[CLS] hello i'm a modelling model.[SEP]", "score": 0.03748830780386925, "token": 23089, "token_str": "▁modelling"},
    {"sequence": "[CLS] hello i'm a model model.[SEP]", "score": 0.033725276589393616, "token": 1061, "token_str": "▁model"},
    {"sequence": "[CLS] hello i'm a runway model.[SEP]", "score": 0.017313428223133087, "token": 8014, "token_str": "▁runway"},
    {"sequence": "[CLS] hello i'm a lingerie model.[SEP]", "score": 0.014405295252799988, "token": 29104, "token_str": "▁lingerie"}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertModel.from_pretrained("albert-base-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = TFAlbertModel.from_pretrained("albert-base-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-base-v2')
>>> unmasker("The man worked as a [MASK].")
[
    {"sequence": "[CLS] the man worked as a chauffeur.[SEP]", "score": 0.029577180743217468, "token": 28744, "token_str": "▁chauffeur"},
    {"sequence": "[CLS] the man worked as a janitor.[SEP]", "score": 0.028865724802017212, "token": 29477, "token_str": "▁janitor"},
    {"sequence": "[CLS] the man worked as a shoemaker.[SEP]", "score": 0.02581118606030941, "token": 29024, "token_str": "▁shoemaker"},
    {"sequence": "[CLS] the man worked as a blacksmith.[SEP]", "score": 0.01849772222340107, "token": 21238, "token_str": "▁blacksmith"},
    {"sequence": "[CLS] the man worked as a lawyer.[SEP]", "score": 0.01820771023631096, "token": 3672, "token_str": "▁lawyer"}
]

>>> unmasker("The woman worked as a [MASK].")
[
    {"sequence": "[CLS] the woman worked as a receptionist.[SEP]", "score": 0.04604868218302727, "token": 25331, "token_str": "▁receptionist"},
    {"sequence": "[CLS] the woman worked as a janitor.[SEP]", "score": 0.028220869600772858, "token": 29477, "token_str": "▁janitor"},
    {"sequence": "[CLS] the woman worked as a paramedic.[SEP]", "score": 0.0261906236410141, "token": 23386, "token_str": "▁paramedic"},
    {"sequence": "[CLS] the woman worked as a chauffeur.[SEP]", "score": 0.024797942489385605, "token": 28744, "token_str": "▁chauffeur"},
    {"sequence": "[CLS] the woman worked as a waitress.[SEP]", "score": 0.024124596267938614, "token": 13678, "token_str": "▁waitress"}
]
```
This bias will also affect all fine-tuned versions of this model.
The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books, and English Wikipedia (excluding lists, tables and headers).
The texts are lowercased and tokenized using SentencePiece with a vocabulary size of 30,000. The inputs of the model are then of the form:
[CLS] Sentence A [SEP] Sentence B [SEP]
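A small sketch showing the tokenizer producing this format; the printed output is indicative (exact whitespace around the special tokens may vary):

```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
print(tokenizer.vocab_size)  # 30000

# Passing a sentence pair lowercases the text and adds the special tokens shown above.
encoded = tokenizer("Sentence A", "Sentence B")
print(tokenizer.decode(encoded["input_ids"]))
# -> "[CLS] sentence a[SEP] sentence b[SEP]"
```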
The ALBERT training procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
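A toy sketch of this 80/10/10 masking scheme (in practice transformers' `DataCollatorForLanguageModeling` implements the same rule; the helper below is purely illustrative):

```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mlm_probability=0.15):
    """Illustrative implementation of the masking procedure described above."""
    masked, labels = list(tokens), [None] * len(tokens)  # None = ignored by the MLM loss
    for i, token in enumerate(tokens):
        if random.random() < mlm_probability:  # 15% of the tokens are selected
            labels[i] = token                  # the model has to predict the original token
            roll = random.random()
            if roll < 0.8:                     # 80% of the time: replace with [MASK]
                masked[i] = mask_token
            elif roll < 0.9:                   # 10% of the time: replace with a random token
                masked[i] = random.choice(vocab)
            # remaining 10%: keep the original token unchanged
    return masked, labels

tokens = "the quick brown fox jumps over the lazy dog".split()
print(mask_tokens(tokens, vocab=tokens))
```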
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
|                | Average | SQuAD1.1  | SQuAD2.0  | MNLI | SST-2 | RACE |
|----------------|---------|-----------|-----------|------|-------|------|
| V2             |         |           |           |      |       |      |
| ALBERT-base    | 82.3    | 90.2/83.2 | 82.1/79.3 | 84.6 | 92.9  | 66.8 |
| ALBERT-large   | 85.7    | 91.8/85.2 | 84.9/81.8 | 86.5 | 94.9  | 75.2 |
| ALBERT-xlarge  | 87.9    | 92.9/86.4 | 87.9/84.1 | 87.9 | 95.4  | 80.7 |
| ALBERT-xxlarge | 90.9    | 94.6/89.1 | 89.8/86.9 | 90.6 | 96.8  | 86.8 |
| V1             |         |           |           |      |       |      |
| ALBERT-base    | 80.1    | 89.3/82.3 | 80.0/77.1 | 81.6 | 90.3  | 64.0 |
| ALBERT-large   | 82.4    | 90.6/83.9 | 82.3/79.4 | 83.5 | 91.7  | 68.5 |
| ALBERT-xlarge  | 85.5    | 92.5/86.1 | 86.1/83.1 | 86.4 | 92.4  | 74.8 |
| ALBERT-xxlarge | 91.0    | 94.8/89.3 | 90.2/87.4 | 90.8 | 96.9  | 86.5 |
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
  author        = {Zhenzhong Lan and
                   Mingda Chen and
                   Sebastian Goodman and
                   Kevin Gimpel and
                   Piyush Sharma and
                   Radu Soricut},
  title         = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations},
  journal       = {CoRR},
  volume        = {abs/1909.11942},
  year          = {2019},
  url           = {http://arxiv.org/abs/1909.11942},
  archivePrefix = {arXiv},
  eprint        = {1909.11942},
  timestamp     = {Fri, 27 Sep 2019 13:04:21 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```