Model:
valhalla/longformer-base-4096-finetuned-squadv1
This is the longformer-base-4096 model fine-tuned on the SQuAD v1 dataset for the question-answering task.
The Longformer model was created by Iz Beltagy, Matthew E. Peters, and Arman Cohan from AllenAI. As the paper explains,
Longformer is a BERT-like model for long documents.
The pre-trained model can handle sequences of up to 4096 tokens.
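As a quick sanity check of that limit, you can inspect the tokenizer's configured maximum length. This is a minimal sketch; it assumes the tokenizer config for this checkpoint carries the 4096-token limit:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
# Maximum sequence length the tokenizer/model is configured for (expected: 4096).
print(tokenizer.model_max_length)
```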
This model was trained on a Google Colab V100 GPU. You can find the fine-tuning colab here.
A few things to keep in mind while training Longformer for the QA task: by default, Longformer uses sliding-window local attention on all tokens, but for QA all question tokens should have global attention. For more details on this, please refer to the paper. The LongformerForQuestionAnswering model automatically does that for you. To allow it to do that, encode the question and context as an input pair, so the sequence contains the separator tokens the model uses to locate the question tokens; see the sketch below for what the resulting attention mask looks like.
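For illustration, here is a minimal sketch of setting global attention manually via `global_attention_mask` when using the base `LongformerModel` directly; `LongformerForQuestionAnswering` builds an equivalent mask for you, so this is only needed outside that class. The checkpoint name and the first-separator heuristic below are assumptions based on the standard `<s> question</s></s> context</s>` pair encoding:

```python
import torch
from transformers import AutoTokenizer, LongformerModel

tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

question = "What has Huggingface done ?"
context = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
encoding = tokenizer(question, context, return_tensors="pt")
input_ids = encoding["input_ids"]

# 0 = sliding-window local attention, 1 = global attention.
global_attention_mask = torch.zeros_like(input_ids)
# Everything up to the first </s> is the question; give it global attention.
first_sep = (input_ids[0] == tokenizer.sep_token_id).nonzero()[0].item()
global_attention_mask[0, : first_sep + 1] = 1

outputs = model(
    input_ids,
    attention_mask=encoding["attention_mask"],
    global_attention_mask=global_attention_mask,
)
```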
| Metric | Value |
|---|---|
| Exact Match | 85.1466 |
| F1 | 91.5415 |
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
model = AutoModelForQuestionAnswering.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")

text = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
question = "What has Huggingface done ?"
encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]

# default is local attention everywhere
# the forward method will automatically set global attention on question tokens
attention_mask = encoding["attention_mask"]

outputs = model(input_ids, attention_mask=attention_mask)
start_scores, end_scores = outputs.start_logits, outputs.end_logits

all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
answer_tokens = all_tokens[torch.argmax(start_scores): torch.argmax(end_scores) + 1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
# output => democratized NLP
```
LongformerForQuestionAnswering isn't yet supported by the pipeline API. I'll update this card once support has been added.