Model: deepset/gbert-large-sts
Language model: gbert-large-sts
Language: German
Training data: German STS benchmark train and dev set
Eval data: German STS benchmark test set
Infrastructure: 1x V100 GPU
Published: August 12th, 2021
batch_size = 16
n_epochs = 4
warmup_ratio = 0.1
learning_rate = 2e-5
lr_schedule = LinearWarmup
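The hyperparameters above correspond to a standard cross-encoder fine-tuning run on sentence-pair similarity scores. Below is a minimal sketch using the sentence-transformers CrossEncoder API, assuming deepset/gbert-large as the base checkpoint and regression-style labels in [0, 1]; the two training pairs are placeholders for illustration only, not the actual German STS benchmark data.

```python
import math
from torch.utils.data import DataLoader
from sentence_transformers import InputExample
from sentence_transformers.cross_encoder import CrossEncoder

# Placeholder sentence pairs with similarity labels in [0, 1];
# the real training data is the German STS benchmark train/dev split.
train_samples = [
    InputExample(texts=["Ein Mann spielt Gitarre.", "Jemand macht Musik mit einer Gitarre."], label=0.8),
    InputExample(texts=["Ein Hund laeuft im Park.", "Eine Katze schlaeft auf dem Sofa."], label=0.1),
]

train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=16)

# Assumption: gbert-large as base model, one regression output for the STS score.
model = CrossEncoder("deepset/gbert-large", num_labels=1)

n_epochs = 4
# warmup_ratio = 0.1 translated into absolute warmup steps.
warmup_steps = math.ceil(len(train_dataloader) * n_epochs * 0.1)

# The default scheduler is linear warmup, matching lr_schedule = LinearWarmup.
model.fit(
    train_dataloader=train_dataloader,
    epochs=n_epochs,
    warmup_steps=warmup_steps,
    optimizer_params={"lr": 2e-5},
)
```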
Stay tuned... and watch out for new papers on arxiv.org ;)
We bring NLP to the industry via open source! Our focus: industry-specific language models & large-scale QA systems.
Some of our work:
Get in touch: Twitter | LinkedIn | Website
By the way: we're hiring!