Model: mrm8488/spanbert-large-finetuned-squadv1
SpanBERT was created by Facebook Research and fine-tuned by them on SQuAD 1.1 for the question-answering downstream task.
SpanBERT: Improving Pre-training by Representing and Predicting Spans
You can get the fine-tuning script from the official SpanBERT repository (https://github.com/facebookresearch/SpanBERT):
```bash
python code/run_squad.py \
  --do_train \
  --do_eval \
  --model spanbert-large-cased \
  --train_file train-v1.1.json \
  --dev_file dev-v1.1.json \
  --train_batch_size 32 \
  --eval_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 4 \
  --max_seq_length 512 \
  --doc_stride 128 \
  --eval_metric f1 \
  --output_dir squad_output \
  --fp16
```
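A side note on the `--max_seq_length` / `--doc_stride` pair: contexts longer than the maximum sequence length are split into overlapping windows so that every token is covered by at least one window. A minimal sketch of the same idea using the transformers tokenizer (note this is only an illustration, not part of the fine-tuning script; the tokenizer's `stride` argument is the overlap between consecutive windows, whereas `--doc_stride` in run_squad.py is the slide step):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SpanBERT/spanbert-large-cased")

long_context = "Manuel Romero has been working very hard. " * 300  # well over 512 tokens

# Split the long context into overlapping 512-token windows;
# consecutive windows share `stride` tokens of overlap.
encodings = tokenizer(
    long_context,
    max_length=512,
    stride=128,
    truncation=True,
    return_overflowing_tokens=True,
)
print(f"{len(encodings['input_ids'])} overlapping windows")
```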
| Model | SQuAD 1.1 (F1) | SQuAD 2.0 (F1) | Coref (avg. F1) | TACRED (F1) |
|---|---|---|---|---|
| BERT (base) | 88.5* | 76.5* | 73.1 | 67.7 |
| SpanBERT (base) | 92.4* | 83.6* | 77.4 | 68.2 |
| BERT (large) | 91.3 | 83.3 | 77.1 | 66.4 |
| SpanBERT (large) | 94.6 (this model) | 88.7 | 79.6 | 70.8 |
Note: the numbers marked with * were evaluated on the development sets because those models were not submitted to the official SQuAD leaderboard. All other numbers are test-set results.
Fast usage with pipelines:
```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="mrm8488/spanbert-large-finetuned-squadv1",
    tokenizer="SpanBERT/spanbert-large-cased"
)

qa_pipeline({
    'context': "Manuel Romero has been working very hard in the repository hugginface/transformers lately",
    'question': "How has been working Manuel Romero lately?"
})
# Output: {'answer': 'very hard in the repository hugginface/transformers', 'end': 82, 'score': 0.327230326857725, 'start': 31}
```
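If you prefer not to use the pipeline helper, the same model can be queried by loading it explicitly. A minimal sketch using the standard transformers question-answering head, with naive argmax span decoding (no score thresholding or pair validation, which the pipeline handles for you):

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SpanBERT/spanbert-large-cased")
model = AutoModelForQuestionAnswering.from_pretrained("mrm8488/spanbert-large-finetuned-squadv1")

question = "How has been working Manuel Romero lately?"
context = "Manuel Romero has been working very hard in the repository hugginface/transformers lately"

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start/end token positions and decode that span
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))
```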
Created by Manuel Romero/@mrm8488
Made with ♥ in Spain