Model: uer/bart-chinese-6-960-cluecorpussmall

Chinese BART

Model description

This is a Chinese BART model pre-trained by UER-py.

How to use

You can use this model directly with a pipeline for text2text generation:

>>> from transformers import BertTokenizer, BartForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/bart-chinese-6-960-cluecorpussmall")
>>> model = BartForConditionalGeneration.from_pretrained("uer/bart-chinese-6-960-cluecorpussmall")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)  
>>> text2text_generator("中国的首都是[MASK]京", max_length=50, do_sample=False)
    [{'generated_text': '中 国 的 首 都 是 北 京'}]
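
The pipeline is a thin wrapper around the model's generate method, which you can also call directly. A minimal sketch (the decoded string shown is the expected output, matching the pipeline example above):

>>> inputs = tokenizer("中国的首都是[MASK]京", return_tensors="pt")
>>> output_ids = model.generate(**inputs, max_length=50, do_sample=False)
>>> tokenizer.decode(output_ids[0], skip_special_tokens=True)
    '中 国 的 首 都 是 北 京'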

Training data

CLUECorpusSmall, Common Crawl, and some short messages are used as training data.

Training procedure

The model is pre-trained by UER-py on Tencent Cloud. We pre-train for 1,000,000 steps with a sequence length of 512.
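
For reference, such a run is launched with UER-py's preprocess.py and pretrain.py scripts roughly as below. This is a sketch: the corpus path, the config file name, and the hardware and optimizer settings are illustrative assumptions; only the 512 sequence length and the 1,000,000 total steps come from the description above.

python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_vocab.txt \
                      --dataset_path cluecorpussmall_seq512_dataset.pt \
                      --processes_num 32 --seq_length 512 \
                      --data_processor bart

python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
                    --vocab_path models/google_zh_vocab.txt \
                    --config_path models/bart/medium_config.json \
                    --output_model_path models/cluecorpussmall_bart_medium_seq512_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 1000000 --save_checkpoint_steps 100000 \
                    --learning_rate 1e-4 --batch_size 16 \
                    --span_masking --span_max_length 3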

Finally, we convert the pre-trained model into Huggingface's format:

python3 scripts/convert_bart_from_uer_to_huggingface.py --input_model_path cluecorpussmall_bart_medium_seq512_model.bin-1000000 \
                                                        --output_model_path pytorch_model.bin \
                                                        --layers_num 6
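
To sanity-check the converted checkpoint, it can be loaded back with transformers. A minimal sketch, assuming config.json and vocab.txt sit next to pytorch_model.bin in the current directory:

>>> from transformers import BertTokenizer, BartForConditionalGeneration
>>> tokenizer = BertTokenizer.from_pretrained("./")
>>> model = BartForConditionalGeneration.from_pretrained("./")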

BibTeX entry and citation info

@article{lewis2019bart,
  title={Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension},
  author={Lewis, Mike and Liu, Yinhan and Goyal, Naman and Ghazvininejad, Marjan and Mohamed, Abdelrahman and Levy, Omer and Stoyanov, Ves and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:1910.13461},
  year={2019}
}

@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}