Model:
ClassCat/gpt2-small-catalan-v2
transformers==4.19.2
This model uses the GPT-2 base model settings, except that its embedding dimension is half that of the base model.
It uses a BPE tokenizer with a vocabulary size of 50,000.
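As a rough sketch of what those settings imply (the exact training configuration is not published here, so this is an assumption): `GPT2Config` in transformers defaults to the GPT-2 base settings with `n_embd=768`, so halving the embedding dimension gives 384, and the vocabulary size is set to 50,000 instead of the default 50,257.

```python
from transformers import GPT2Config

# GPT-2 base defaults: n_embd=768, 12 layers, 12 attention heads
base = GPT2Config()

# Hypothetical reconstruction of the card's claim: keep the base
# settings but halve the embedding dimension and use a 50k vocab.
small = GPT2Config(n_embd=base.n_embd // 2, vocab_size=50000)

print(small.n_embd)       # 384
print(small.vocab_size)   # 50000
```

Note that 384 is still divisible by the 12 attention heads, so the halved embedding size remains a valid GPT-2 configuration.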
from transformers import pipeline

unmasker = pipeline('fill-mask', model='ClassCat/gpt2-small-catalan-v2')
unmasker("Ell està una mica")