This repository provides a small-sized Japanese GPT-2 model. The model was trained using the code in the GitHub repository rinnakk/japanese-pretrained-models by rinna Co., Ltd.
The model can be loaded with the transformers library as follows:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt2-small", use_fast=False)
tokenizer.do_lower_case = True  # due to a bug in tokenizer config loading

model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt2-small")
```
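Once loaded, the model can be used for open-ended text generation in the usual transformers way. The sketch below continues from the loading snippet above; the prompt text and the sampling parameters are illustrative choices, not settings recommended by this model card:

```python
import torch

# Encode an arbitrary Japanese prompt.
input_ids = tokenizer.encode("日本の首都は", return_tensors="pt")

# Sample a continuation; decoding parameters here are illustrative, not tuned.
with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=50,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        pad_token_id=tokenizer.pad_token_id,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```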
The model is a 12-layer, 768-hidden-size transformer-based language model.
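These architecture details can be checked against the model's configuration; the attributes below are the standard GPT-2 config fields in transformers:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("rinna/japanese-gpt2-small")
print(config.n_layer)  # number of transformer layers: 12
print(config.n_embd)   # hidden size: 768
print(config.n_head)   # number of attention heads
```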
The model was trained on Japanese CC-100 and Japanese Wikipedia to optimize a traditional language modelling objective on 8×V100 GPUs for around 15 days. It reaches a perplexity of around 21 on a chosen validation set from CC-100.
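The reported figure comes from the authors' own validation split, so numbers on other text will differ. For reference, perplexity on a single piece of text is the exponential of the average per-token cross-entropy loss under the language modelling objective; a minimal sketch, reusing the `tokenizer` and `model` loaded above with an arbitrary example sentence:

```python
import torch

model.eval()

text = "日本で一番高い山は富士山です。"  # arbitrary example sentence
input_ids = tokenizer.encode(text, return_tensors="pt")

# The model shifts the labels internally, so passing input_ids as labels
# yields the causal language modelling loss.
with torch.no_grad():
    loss = model(input_ids, labels=input_ids).loss

print(torch.exp(loss).item())  # perplexity of this sentence
```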
The model uses a SentencePiece-based tokenizer; the vocabulary was trained on Japanese Wikipedia using the official SentencePiece training script.
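Because SentencePiece operates directly on raw text, no pre-tokenization (e.g. whitespace or morphological splitting) is needed for Japanese input. A minimal sketch, reusing the `tokenizer` loaded above:

```python
# Segment raw Japanese text into subword pieces; the actual pieces
# depend on the trained vocabulary.
print(tokenizer.tokenize("こんにちは、世界。"))
```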