Model:
pszemraj/flan-t5-base-instruct-dolly_hhrlhf
This model is a fine-tuned version of google/flan-t5-base on the pszemraj/dolly_hhrlhf-text2text dataset, a version of the relatively more permissive mosaicml/dolly_hhrlhf dataset modified for text2text generation.
Basic usage in Python:
```python
# pip install -q transformers accelerate
import torch
from transformers import pipeline, GenerationConfig

model_name = "pszemraj/flan-t5-base-instruct-dolly_hhrlhf"
assistant = pipeline(
    "text2text-generation",
    model_name,
    device=0 if torch.cuda.is_available() else -1,
)
cfg = GenerationConfig.from_pretrained(model_name)

# pass an 'instruction' as the prompt to the pipeline
prompt = "Write a guide on how to become a ninja while working a 9-5 job."
result = assistant(prompt, generation_config=cfg)[0]["generated_text"]
print(result)
```
* Using the generation config is optional; you can substitute other generation parameters instead.
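For instance, instead of loading the model's saved generation config, you can build a `GenerationConfig` with explicit decoding parameters and pass it to the pipeline call. The parameter values below are illustrative, not the model's tuned defaults:

```python
from transformers import GenerationConfig

# illustrative decoding parameters (not the model's tuned defaults)
cfg = GenerationConfig(
    max_new_tokens=128,       # cap on generated tokens
    num_beams=4,              # beam search width
    no_repeat_ngram_size=3,   # block repeated trigrams
    early_stopping=True,      # stop when all beams finish
)
print(cfg.num_beams)  # → 4
```

The resulting `cfg` object can be passed as `generation_config=cfg` in the pipeline call shown above, or the same keyword arguments can be passed directly to the pipeline.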
The following hyperparameters were used during training: