Model:
deepset/gbert-base-germandpr-ctx_encoder
Language model: gbert-base-germandpr
Language: German
Training data: GermanDPR train set (~56 MB)
Eval data: GermanDPR test set (~6 MB)
Infrastructure: 4x V100 GPU
Published: Apr 26th, 2021
See https://deepset.ai/germanquad for more details and dataset download.
batch_size = 40
n_epochs = 20
num_training_steps = 4640
num_warmup_steps = 460
max_seq_len = 32 tokens for question encoder and 300 tokens for passage encoder
learning_rate = 1e-6
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
num_hard_negatives = 2
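As a rough illustration of how these hyperparameters map onto a training run, here is a minimal sketch using Haystack's `DensePassageRetriever.train()`. This is not the exact script used for this model: parameter names follow Haystack v1.x and may differ in other versions, the starting checkpoints (`deepset/gbert-base` for both encoders), data paths, and file names are assumptions.

```python
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import DensePassageRetriever

# Assumption: both encoders are initialized from the German BERT base checkpoint.
retriever = DensePassageRetriever(
    document_store=InMemoryDocumentStore(),
    query_embedding_model="deepset/gbert-base",
    passage_embedding_model="deepset/gbert-base",
    max_seq_len_query=32,      # 32 tokens for the question encoder
    max_seq_len_passage=300,   # 300 tokens for the passage encoder
)

# Train with the hyperparameters listed above (data paths and file names are placeholders).
retriever.train(
    data_dir="data/germandpr",
    train_filename="GermanDPR_train.json",
    dev_filename="GermanDPR_dev.json",
    n_epochs=20,
    batch_size=40,
    num_hard_negatives=2,
    learning_rate=1e-6,
    num_warmup_steps=460,
    save_dir="saved_models/gbert-base-germandpr",
)
```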
During training, we monitored the in-batch average rank and the loss, and evaluated different batch sizes, numbers of epochs, and numbers of hard negatives on a dev set split from the train set. The dev split contained 1030 question/answer pairs. Even without thorough hyperparameter tuning, we observed stable learning: multiple restarts with different seeds produced very similar results. Note that the in-batch average rank depends on the batch size and the number of hard negatives; a smaller number of hard negatives makes the task easier. After fixing the hyperparameters, we trained the model on the full GermanDPR train set.
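To make the monitored metric concrete, here is a minimal sketch (plain PyTorch, not the actual training code) of how an in-batch average rank can be computed: each question is scored against every passage in the batch, and we record the rank of its positive passage among them.

```python
import torch

def in_batch_average_rank(query_embs: torch.Tensor, passage_embs: torch.Tensor) -> float:
    """Average rank of each query's positive passage, scored against all passages
    in the batch (positives of other queries plus hard negatives).

    Assumes passage i is the positive for query i; any additional rows in
    passage_embs are hard negatives. Shapes: [n_queries, dim], [n_passages, dim].
    """
    scores = query_embs @ passage_embs.T                    # dot-product similarities
    positive_scores = scores.diagonal().unsqueeze(1)        # score of each query's own positive
    ranks = (scores >= positive_scores).sum(dim=1).float()  # rank 1 = positive scored highest
    return ranks.mean().item()
```

With batch_size = 40 and num_hard_negatives = 2, each question is scored against roughly 120 passages per batch (40 positives plus 80 hard negatives), which is why the metric depends on both settings.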
We further evaluated the retrieval performance of the trained model on the full German Wikipedia, using the GermanDPR test set as labels. To this end, we converted the GermanDPR test set to SQuAD format. The DPR model drastically outperforms the BM25 baseline with regard to recall@k.
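For reference, recall@k here means the fraction of test questions for which at least one relevant passage appears among the top-k retrieved passages. A minimal sketch of that computation (the actual evaluation was run with Haystack against the full German Wikipedia index):

```python
from typing import List, Set

def recall_at_k(retrieved_ids: List[List[str]], relevant_ids: List[Set[str]], k: int) -> float:
    """retrieved_ids: ranked passage ids per question; relevant_ids: gold passage ids per question."""
    hits = sum(
        1 for ranked, gold in zip(retrieved_ids, relevant_ids)
        if any(pid in gold for pid in ranked[:k])
    )
    return hits / len(retrieved_ids)
```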
You can load the model in Haystack as a retriever for doing QA at scale:
retriever = DensePassageRetriever(
    document_store=document_store,
    query_embedding_model="deepset/gbert-base-germandpr-question_encoder",
    passage_embedding_model="deepset/gbert-base-germandpr-ctx_encoder",
)
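Continuing from the snippet above, a minimal end-to-end sketch, assuming Haystack v1.x and an in-memory document store (the documents and query are placeholders): documents are embedded with the passage encoder via `update_embeddings`, and the question encoder embeds queries at retrieval time.

```python
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import DensePassageRetriever

document_store = InMemoryDocumentStore()
document_store.write_documents([
    {"content": "Die Hauptstadt von Deutschland ist Berlin."},  # placeholder documents
    {"content": "Der Rhein ist ein Fluss in Mitteleuropa."},
])

retriever = DensePassageRetriever(
    document_store=document_store,
    query_embedding_model="deepset/gbert-base-germandpr-question_encoder",
    passage_embedding_model="deepset/gbert-base-germandpr-ctx_encoder",
)

# Compute passage embeddings with the context encoder, then retrieve for a query.
document_store.update_embeddings(retriever)
results = retriever.retrieve(query="Was ist die Hauptstadt von Deutschland?", top_k=2)
for doc in results:
    print(doc.content, doc.score)
```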
We bring NLP to the industry via open source! Our focus: industry-specific language models & large-scale QA systems.
Some of our work:
Get in touch: Twitter | LinkedIn | Website
By the way: we're hiring!