This model is a fine-tuned version of dbmdz/bert-base-italian-cased on the wiki_neural dataset. It achieves the results reported in the training table below on the evaluation set.
A token classification (NER) experiment for the Italian language.
```python
from transformers import pipeline

# Load the NER pipeline; "simple" aggregation merges word pieces into entity spans
ner_pipeline = pipeline(
    "ner",
    model="nickprock/bert-italian-finetuned-ner",
    aggregation_strategy="simple",
)

text = "La sede storica della Olivetti è ad Ivrea"
output = ner_pipeline(text)
print(output)
```
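With `aggregation_strategy="simple"`, the pipeline returns a list of dictionaries, one per detected entity, each with `entity_group`, `score`, `word`, `start`, and `end` keys; for the sentence above it should tag "Olivetti" as an organization and "Ivrea" as a location.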
The model can be used for token classification, in particular NER. It was fine-tuned on Italian-language text.
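If you prefer to work below the pipeline abstraction, the checkpoint can also be loaded directly. The following is a minimal sketch, assuming the standard transformers token-classification API and the `id2label` mapping stored in the model config:

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nickprock/bert-italian-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("nickprock/bert-italian-finetuned-ner")

text = "La sede storica della Olivetti è ad Ivrea"
inputs = tokenizer(text, return_tensors="pt")

# Forward pass without gradient tracking
with torch.no_grad():
    logits = model(**inputs).logits

# Map each token to its highest-scoring tag via the config's id2label mapping
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred.item()])
```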
The dataset used for fine-tuning is wikiann.
Training ran for three epochs and produced the following results:
| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0297        | 1.0   | 11050 | 0.0323          | 0.9324    | 0.9420 | 0.9372 | 0.9908   |
| 0.0173        | 2.0   | 22100 | 0.0324          | 0.9445    | 0.9514 | 0.9479 | 0.9915   |
| 0.0057        | 3.0   | 33150 | 0.0361          | 0.9438    | 0.9542 | 0.9490 | 0.9918   |
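The Precision, Recall, and F1 columns are entity-level scores. For token-classification fine-tunes these are typically computed with the seqeval library; that is an assumption here, since the evaluation script is not shown. A minimal sketch of that style of scoring:

```python
# Hypothetical illustration of entity-level scoring with seqeval (assumption;
# the actual evaluation code for this model is not shown in the card).
from seqeval.metrics import f1_score, precision_score, recall_score

# Gold and predicted tag sequences for one toy sentence
y_true = [["B-ORG", "O", "O", "O", "B-LOC"]]
y_pred = [["B-ORG", "O", "O", "O", "B-LOC"]]

print(precision_score(y_true, y_pred))  # 1.0
print(recall_score(y_true, y_pred))     # 1.0
print(f1_score(y_true, y_pred))         # 1.0
```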