This repository provides an extra-small-sized Japanese GPT-2 model. The model was trained using code from the GitHub repository [rinnakk/japanese-pretrained-models](https://github.com/rinnakk/japanese-pretrained-models) by rinna Co., Ltd.
The model can be loaded with the `transformers` library as follows:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt2-xsmall", use_fast=False)
tokenizer.do_lower_case = True  # due to a bug in tokenizer config loading

model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt2-xsmall")
```
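Once loaded, the model can generate text with the standard `generate` API. A minimal sketch using the `tokenizer` and `model` from above; the prompt and sampling parameters here are illustrative choices, not settings recommended by the model authors:

```python
import torch

# Encode a Japanese prompt and sample a continuation.
input_ids = tokenizer.encode("こんにちは、", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=50,        # illustrative length limit
        do_sample=True,       # sample instead of greedy decoding
        top_p=0.95,           # illustrative nucleus-sampling threshold
        pad_token_id=tokenizer.pad_token_id,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```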
A 6-layer, 512-hidden-size transformer-based language model.
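These dimensions can be confirmed programmatically from the published config; the attribute names below follow the standard Hugging Face GPT-2 config schema:

```python
from transformers import AutoConfig

# Inspect the architecture without downloading the weights.
config = AutoConfig.from_pretrained("rinna/japanese-gpt2-xsmall")
print(config.n_layer)  # number of transformer layers (6)
print(config.n_embd)   # hidden size (512)
```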
The model was trained on Japanese CC-100 and Japanese Wikipedia to optimize a traditional language modelling objective on 8 V100 GPUs for around 4 days. It reaches around 28 perplexity on a chosen validation set from CC-100.
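Perplexity here is the exponential of the mean token-level cross-entropy loss. A minimal sketch of computing it for one sample sentence, using the `tokenizer` and `model` loaded above; the text is a placeholder, not the CC-100 validation protocol behind the reported number:

```python
import math
import torch

# Perplexity = exp(mean cross-entropy over tokens of the input).
text = "日本語のサンプル文です。"  # placeholder evaluation text
input_ids = tokenizer.encode(text, return_tensors="pt")
with torch.no_grad():
    # Passing the inputs as labels makes the model return the LM loss.
    loss = model(input_ids, labels=input_ids).loss
print(math.exp(loss.item()))
```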
The model uses a sentencepiece-based tokenizer; the vocabulary was trained on Japanese Wikipedia using the official sentencepiece training script.
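For reference, a sketch of training a sentencepiece vocabulary with the library's Python bindings; the corpus path and vocabulary size below are placeholder assumptions, not the exact settings used for this model:

```python
import sentencepiece as spm

# Train a sentencepiece model on a plain-text corpus.
spm.SentencePieceTrainer.train(
    input="jawiki_corpus.txt",    # placeholder: one sentence per line
    model_prefix="japanese_spm",  # writes japanese_spm.model / .vocab
    vocab_size=32000,             # placeholder vocabulary size
)
```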