Model: facebook/bart-base
The BART model was pre-trained on English. It was introduced by Lewis et al. in the paper BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension and first released in this repository.
Note: the team releasing BART did not write a model card for this model, so this model card has been written by the Hugging Face team.
BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation), but it also works well for comprehension tasks (e.g. text classification, question answering).
You can use the raw model for text infilling. However, the model is mostly intended to be fine-tuned on a supervised dataset. See the model hub to look for fine-tuned versions for a task that interests you.
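For illustration, here is a minimal sketch of text infilling with the raw checkpoint, assuming the transformers fill-mask pipeline supports BART checkpoints (the example sentence is made up for this sketch):

from transformers import pipeline

# Sketch: ask the pre-trained model to suggest replacements for the <mask> token.
# <mask> is BART's mask token; the pipeline returns the top-scoring fillers.
unmasker = pipeline("fill-mask", model="facebook/bart-base")
print(unmasker("Hello, my dog is <mask>."))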
Here is how to use this model in PyTorch:
from transformers import BartTokenizer, BartModel

# Load the pre-trained tokenizer and the base (headless) model
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
model = BartModel.from_pretrained('facebook/bart-base')

# Tokenize an example sentence and run a forward pass
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

# Hidden states of the decoder's last layer, one vector per token
last_hidden_states = outputs.last_hidden_state
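For a downstream task, a fine-tuned version from the model hub can be loaded instead of the raw checkpoint. The sketch below is only illustrative and uses facebook/bart-large-cnn, a BART checkpoint fine-tuned for summarization, as an example; any other fine-tuned BART checkpoint can be used the same way:

from transformers import pipeline

# Sketch: summarization with a BART checkpoint fine-tuned on CNN/DailyMail.
# facebook/bart-large-cnn is just one example of a fine-tuned version from the hub.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

text = (
    "BART is pre-trained by corrupting text with an arbitrary noising function "
    "and learning a model to reconstruct the original text. It is particularly "
    "effective when fine-tuned for text generation such as summarization."
)
print(summarizer(text, max_length=40, min_length=10, do_sample=False))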
@article{DBLP:journals/corr/abs-1910-13461,
author = {Mike Lewis and
Yinhan Liu and
Naman Goyal and
Marjan Ghazvininejad and
Abdelrahman Mohamed and
Omer Levy and
Veselin Stoyanov and
Luke Zettlemoyer},
title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
Generation, Translation, and Comprehension},
journal = {CoRR},
volume = {abs/1910.13461},
year = {2019},
url = {http://arxiv.org/abs/1910.13461},
eprinttype = {arXiv},
eprint = {1910.13461},
timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}