Model:
deepset/gbert-large
Released in October 2020, this is a German BERT language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka "bert-base-german-dbmdz-cased"). In our paper, we outline the steps taken to train our model and show that it outperforms its predecessors.
Paper: here
Architecture: BERT large
Language: German
GermEval18 Coarse: 80.08
GermEval18 Fine: 52.48
GermEval14: 88.16
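Since GBERT is a BERT-style masked language model, the quickest way to try it is the fill-mask pipeline from Hugging Face transformers. Below is a minimal sketch; the example sentence is our own illustration, not from the card:

```python
from transformers import pipeline

# Load deepset/gbert-large as a fill-mask pipeline. GBERT uses the
# standard BERT [MASK] token for masked positions.
fill_mask = pipeline("fill-mask", model="deepset/gbert-large")

# Illustrative German sentence (our assumption, not from the model card).
for prediction in fill_mask("Die Hauptstadt von Deutschland ist [MASK]."):
    print(prediction["token_str"], prediction["score"])
```

For classification tasks such as GermEval18, you would instead fine-tune the checkpoint, e.g. via AutoModelForSequenceClassification.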
See also:
deepset/gbert-base
deepset/gbert-large
deepset/gelectra-base
deepset/gelectra-large
deepset/gelectra-base-generator
deepset/gelectra-large-generator
Authors:
Branden Chan: branden.chan@deepset.ai
Stefan Schweter: stefan@schweter.eu
Timo Möller: timo.moeller@deepset.ai
deepset is the company behind the open-source NLP framework Haystack, which is designed to help you build production-ready NLP systems that use question answering, summarization, ranking, and more.
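To make the Haystack description concrete, here is a hedged sketch of an extractive QA pipeline using the Haystack 1.x API (class names differ in Haystack 2.x). The reader checkpoint deepset/gelectra-base-germanquad and the sample document are our illustrative assumptions; gbert-large itself is a base language model, not a fine-tuned reader:

```python
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import FARMReader, TfidfRetriever
from haystack.pipelines import ExtractiveQAPipeline

# Index a single illustrative document (our assumption, not from the card).
document_store = InMemoryDocumentStore()
document_store.write_documents(
    [{"content": "Berlin ist die Hauptstadt von Deutschland."}]
)

retriever = TfidfRetriever(document_store=document_store)
# Assumption: a QA-fine-tuned reader checkpoint; gbert-large is a base model.
reader = FARMReader(model_name_or_path="deepset/gelectra-base-germanquad")

pipe = ExtractiveQAPipeline(reader=reader, retriever=retriever)
result = pipe.run(query="Was ist die Hauptstadt von Deutschland?")
print(result["answers"][0].answer)
```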
Some of our other work:
For more info on Haystack, visit our GitHub repo and Documentation.
We also have a Discord community open to everyone!
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!