This model is a fine-tuned version of ukr-models/xlm-roberta-base-uk on the UA-SQuAD dataset.
Link to training scripts: https://github.com/robinhad/ukrainian-qa

It achieves the following results on the evaluation set:

More information needed (per-epoch validation losses are listed in the training results table below).

Example usage with the Transformers question-answering pipeline:
```python
from transformers import pipeline, AutoTokenizer, AutoModelForQuestionAnswering

model_name = "robinhad/ukrainian-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

# Build an extractive question-answering pipeline on CPU.
qa_model = pipeline("question-answering", model=model.to("cpu"), tokenizer=tokenizer)

question = "Де ти живеш?"  # "Where do you live?"
context = "Мене звати Сара і я живу у Лондоні"  # "My name is Sarah and I live in London"
qa_model(question=question, context=context)
```
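The pipeline call returns a dictionary describing the best answer span found in the context. A minimal sketch of inspecting it (the field names are the standard pipeline output; the score value depends on the model):

```python
result = qa_model(question=question, context=context)
# `result` is a dict with 'score' (confidence), 'start'/'end' (character
# offsets into the context) and 'answer' (the extracted text span).
print(result["answer"])  # for the example above, the expected span is "Лондоні" ("in London")
```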
Intended uses & limitations: More information needed.

Training and evaluation data: More information needed.
Training hyperparameters: More information needed (see the training scripts repository linked above).

Training results per epoch (an illustrative fine-tuning sketch follows the table):
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4526        | 1.0   | 650  | 1.3631          |
| 1.3317        | 2.0   | 1300 | 1.2229          |
| 1.0693        | 3.0   | 1950 | 1.2184          |
| 0.6851        | 4.0   | 2600 | 1.3171          |
| 0.5594        | 5.0   | 3250 | 1.3893          |
| 0.4954        | 6.0   | 3900 | 1.4778          |
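The authoritative training code is in the repository linked above. Purely as an illustration, a simplified fine-tuning setup for an extractive QA head on top of ukr-models/xlm-roberta-base-uk with the Hugging Face Trainer could look like the sketch below. The data file names, max_length, learning rate, and batch size are placeholder assumptions (only the epoch count of 6 comes from the table above); the data is assumed to already be flat SQuAD-style records with question, context, and answers fields, and long contexts are simply truncated rather than split with a document stride.

```python
# Illustrative sketch only -- not the actual training script for this model.
from datasets import load_dataset
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    default_data_collator,
)

base_checkpoint = "ukr-models/xlm-roberta-base-uk"
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(base_checkpoint)

# Hypothetical local files with flat SQuAD-style records:
# {"question": ..., "context": ..., "answers": {"text": [...], "answer_start": [...]}}
data = load_dataset(
    "json",
    data_files={"train": "ua_squad_train.json", "validation": "ua_squad_valid.json"},
)

max_length = 384  # placeholder; long contexts are simply truncated in this sketch

def preprocess(examples):
    # Tokenize question/context pairs, keeping character offsets so the
    # character-level answer span can be mapped to token positions.
    tokenized = tokenizer(
        examples["question"],
        examples["context"],
        truncation="only_second",
        max_length=max_length,
        padding="max_length",
        return_offsets_mapping=True,
    )
    start_positions, end_positions = [], []
    for i, offsets in enumerate(tokenized["offset_mapping"]):
        answer = examples["answers"][i]  # assumes SQuAD 1.1 style: every question has an answer
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        sequence_ids = tokenized.sequence_ids(i)
        # First and last token belonging to the context (sequence id 1).
        context_start = sequence_ids.index(1)
        context_end = len(sequence_ids) - 1 - sequence_ids[::-1].index(1)
        if offsets[context_start][0] > start_char or offsets[context_end][1] < end_char:
            # The answer was truncated away: point both labels at the first token.
            start_positions.append(0)
            end_positions.append(0)
        else:
            idx = context_start
            while idx <= context_end and offsets[idx][0] <= start_char:
                idx += 1
            start_positions.append(idx - 1)
            idx = context_end
            while idx >= context_start and offsets[idx][1] >= end_char:
                idx -= 1
            end_positions.append(idx + 1)
    tokenized["start_positions"] = start_positions
    tokenized["end_positions"] = end_positions
    tokenized.pop("offset_mapping")
    return tokenized

tokenized_data = data.map(preprocess, batched=True, remove_columns=data["train"].column_names)

args = TrainingArguments(
    output_dir="xlm-roberta-base-uk-qa",  # hypothetical output directory
    num_train_epochs=6,                   # matches the 6 epochs in the table above
    learning_rate=3e-5,                   # placeholder value
    per_device_train_batch_size=16,       # placeholder value
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_data["train"],
    eval_dataset=tokenized_data["validation"],
    data_collator=default_data_collator,
    tokenizer=tokenizer,
)
trainer.train()
```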