Model:

DeepPavlov/distilrubert-base-cased-conversational

Conversational DistilRuBERT (Russian, cased, 6 layers, 768 hidden units, 12 heads, 135.4M parameters) was trained on OpenSubtitles[1], Dirty, Pikabu, and a social-media segment of the Taiga corpus[2] (the same data as Conversational RuBERT).
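As a sanity check, the 135.4M figure can be roughly reproduced from the architecture above. The sketch below assumes RuBERT's ~119,547-token WordPiece vocabulary, 512 position embeddings, 2 token types, and a BERT-style pooler — these are assumptions, not details stated in this card:

```python
def bert_param_count(vocab_size, hidden, layers, max_positions=512, token_types=2):
    """Approximate parameter count of a BERT-style encoder (weights + biases)."""
    # Embeddings: token + position + segment tables, plus one LayerNorm (gamma, beta).
    emb = (vocab_size + max_positions + token_types) * hidden + 2 * hidden
    # Self-attention: Q, K, V and output projections, each hidden x hidden with bias.
    attn = 4 * (hidden * hidden + hidden)
    # Feed-forward: hidden -> 4*hidden -> hidden, with biases.
    ffn = hidden * 4 * hidden + 4 * hidden + 4 * hidden * hidden + hidden
    # Two LayerNorms per layer (gamma, beta each).
    ln = 2 * 2 * hidden
    # Pooler: one hidden x hidden projection with bias.
    pooler = hidden * hidden + hidden
    return emb + layers * (attn + ffn + ln) + pooler

# 6 layers, 768 hidden units; the number of heads does not change the count.
total = bert_param_count(vocab_size=119_547, hidden=768, layers=6)
print(f"{total / 1e6:.1f}M parameters")
```

Under these assumptions the estimate lands close to the reported 135.4M, most of which is the embedding table.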

Our DistilRuBERT was highly inspired by [3] and [4]. Specifically, we used:

  • KL loss (KL divergence between the teacher's and the student's output logits)
  • MLM loss (masked language modeling loss between the labels and the student's output logits)
  • Cosine embedding loss (between the mean of two consecutive hidden states of the teacher and one hidden state of the student)
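The three loss terms above can be illustrated with a minimal, stdlib-only sketch on toy single-example "tensors" (plain lists). The actual training uses batched PyTorch ops, and the temperature value here is an assumption:

```python
import math

def softmax(logits, t=1.0):
    """Softmax with optional temperature t."""
    exps = [math.exp(x / t) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_loss(teacher_logits, student_logits, t=2.0):
    """KL divergence between teacher and student output distributions."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def mlm_loss(label_id, student_logits):
    """Cross-entropy between the true masked-token label and student logits."""
    return -math.log(softmax(student_logits)[label_id])

def cosine_embedding_loss(teacher_h1, teacher_h2, student_h):
    """1 - cos(mean of two consecutive teacher hidden states, student hidden state)."""
    mean_t = [(a + b) / 2 for a, b in zip(teacher_h1, teacher_h2)]
    dot = sum(a * b for a, b in zip(mean_t, student_h))
    norm_t = math.sqrt(sum(a * a for a in mean_t))
    norm_s = math.sqrt(sum(b * b for b in student_h))
    return 1.0 - dot / (norm_t * norm_s)
```

In practice the three terms are combined into one training objective as a weighted sum; the weights are not stated in this card.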

The model was trained for about 100 hours on 8 nVIDIA Tesla P100-SXM2.0 16Gb GPUs.

To evaluate the inference speedup, we ran the teacher and student models on random sequences with seq_len=512 and batch_size=16 (for throughput) or batch_size=1 (for latency). All tests were performed on an Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and an nVIDIA Tesla P100-SXM2.0 16Gb.
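The measurement protocol can be sketched as follows. `dummy_model` is a stand-in for an actual model forward pass, and the warmup/run counts are illustrative choices, not the card's settings:

```python
import time

def benchmark(fn, batch, n_warmup=3, n_runs=10):
    """Time fn(batch); return (mean latency in seconds, throughput in samples/s)."""
    for _ in range(n_warmup):     # warm up caches / lazy initialization
        fn(batch)
    start = time.perf_counter()
    for _ in range(n_runs):
        fn(batch)
    elapsed = time.perf_counter() - start
    latency = elapsed / n_runs
    throughput = len(batch) / latency
    return latency, throughput

def dummy_model(batch):
    """Placeholder workload: replace with a real teacher/student forward pass."""
    return [sum(seq) for seq in batch]

batch16 = [[1] * 512 for _ in range(16)]  # seq_len=512, batch_size=16 (throughput)
batch1 = [[1] * 512]                      # seq_len=512, batch_size=1 (latency)
```

Running `benchmark` with both batch shapes on the teacher and the student gives the speedup ratios reported for the distilled model.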

Citation

If you found this model useful for your research, we kindly ask that you cite this paper:

@misc{https://doi.org/10.48550/arxiv.2205.02340,
  doi = {10.48550/ARXIV.2205.02340},
  url = {https://arxiv.org/abs/2205.02340},
  author = {Kolesnikova, Alina and Kuratov, Yuri and Konovalov, Vasily and Burtsev, Mikhail},
  keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences},
  title = {Knowledge Distillation of Russian Language Models with Reduction of Vocabulary},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}

[1]: Lison, P. and Tiedemann, J. (2016). OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016).

[2]: Shavrina, T. and Shapovalova, O. (2017). To the Methodology of Corpus Construction for Machine Learning: "Taiga" Syntax Tree Corpus and Parser. In Proceedings of the "CORPORA 2017" International Conference, Saint-Petersburg, 2017.

[3]: Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.

[4]: https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation