UPDATE, 15.10.2021: Check out our new zero-shot classifiers, which are much more lightweight and even outperform this one: zero-shot SELECTRA small and zero-shot SELECTRA medium.
This model is a fine-tuned version of the Spanish BERT model, trained on the Spanish portion of the XNLI dataset. You can have a look at the training script for training details.
You can use this model with Hugging Face's zero-shot-classification pipeline:
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="Recognai/bert-base-spanish-wwm-cased-xnli")

classifier(
    "El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo",
    candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"],
    hypothesis_template="Este ejemplo es {}."
)
"""output
{'sequence': 'El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo',
 'labels': ['cultura', 'sociedad', 'economia', 'salud', 'deportes'],
 'scores': [0.38897448778152466, 0.22997373342514038, 0.1658431738615036, 0.1205764189362526, 0.09463217109441757]}
"""
```
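If you are curious about what the pipeline is doing behind the scenes, the sketch below scores each candidate label by the entailment probability the NLI model assigns to the templated hypothesis. It assumes the model's classification head exposes a label named "entailment" (check `model.config.id2label`), and it approximates, but may not exactly reproduce, the pipeline's default scoring.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "Recognai/bert-base-spanish-wwm-cased-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo"
labels = ["cultura", "sociedad", "economia", "salud", "deportes"]
template = "Este ejemplo es {}."

# Assumption: one of the NLI labels is called "entailment"; verify with model.config.id2label
entailment_id = next(i for i, name in model.config.id2label.items() if name.lower() == "entailment")

entail_logits = []
for label in labels:
    # Each candidate label becomes an NLI hypothesis paired with the input text as premise
    hypothesis = template.format(label)
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    entail_logits.append(logits[0, entailment_id].item())

# A softmax over the entailment logits gives scores comparable to the pipeline output
scores = torch.tensor(entail_logits).softmax(dim=-1)
for label, score in sorted(zip(labels, scores.tolist()), key=lambda x: -x[1]):
    print(label, round(score, 3))
```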
Accuracy on the XNLI-es test set:
|                                  | XNLI-es |
|----------------------------------|---------|
| bert-base-spanish-wwm-cased-xnli | 79.9%   |