Model:
hyunwoongko/kobart
With the addition of chat data, the model is trained to handle the semantics of longer sequences than the original KoBART.
```python
from transformers import PreTrainedTokenizerFast, BartModel

tokenizer = PreTrainedTokenizerFast.from_pretrained('hyunwoongko/kobart')
model = BartModel.from_pretrained('hyunwoongko/kobart')
```
NSMC (Naver Sentiment Movie Corpus)