Model: h2oai/h2ogpt-gm-oasst1-en-1024-12b

This model was trained using H2O LLM Studio.
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `torch` libraries installed.
```bash
pip install transformers==4.28.1
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline

generate_text = pipeline(
    model="h2oai/h2ogpt-gm-oasst1-en-1024-12b",
    torch_dtype=torch.float16,
    trust_remote_code=True,
    device_map={"": "cuda:0"},
)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=256,
    do_sample=False,
    num_beams=2,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
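The template above can be reproduced without loading the pipeline at all. The following sketch assembles a question into the format shown (the helper name `build_prompt` is my own, not part of the h2oai pipeline):

```python
# Special tokens taken from the example prompt above.
PROMPT_TOKEN = "<|prompt|>"
END_TOKEN = "<|endoftext|>"
ANSWER_TOKEN = "<|answer|>"

def build_prompt(question: str) -> str:
    """Wrap a user question in the model's training-time prompt template."""
    return f"{PROMPT_TOKEN}{question}{END_TOKEN}{ANSWER_TOKEN}"

print(build_prompt("Why is drinking water so healthy?"))
# <|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```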
Alternatively, if you prefer not to use `trust_remote_code=True`, you can download h2oai_pipeline.py, store it next to your notebook, and construct the pipeline from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "h2oai/h2ogpt-gm-oasst1-en-1024-12b",
    padding_side="left"
)
model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2ogpt-gm-oasst1-en-1024-12b",
    torch_dtype=torch.float16,
    device_map={"": "cuda:0"}
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=256,
    do_sample=False,
    num_beams=2,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
)
print(res[0]["generated_text"])
```
You can also construct the pipeline yourself from the loaded model and tokenizer, handling the preprocessing step on your own:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "h2oai/h2ogpt-gm-oasst1-en-1024-12b"  # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")

# generate configuration can be modified to your needs
tokens = model.generate(
    **inputs,
    min_new_tokens=2,
    max_new_tokens=256,
    do_sample=False,
    num_beams=2,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
)[0]

tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
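The `repetition_penalty=1.2` argument used in the snippets above follows the CTRL-style penalty implemented in `transformers`: the logit of any token that has already appeared is divided by the penalty when positive and multiplied by it when negative. A minimal pure-Python sketch of that rule (simplified to a plain list of logits; the real implementation operates on batched tensors):

```python
def apply_repetition_penalty(logits, previous_ids, penalty=1.2):
    """Discourage tokens that already occurred in the generated sequence.

    Positive logits shrink (divide by penalty) and negative logits become
    more negative (multiply by penalty), as in the CTRL paper.
    """
    penalized = list(logits)
    for token_id in set(previous_ids):
        if penalized[token_id] > 0:
            penalized[token_id] /= penalty
        else:
            penalized[token_id] *= penalty
    return penalized

# Token 0 was already generated, so its positive logit is reduced;
# token 2 was also generated, so its negative logit is pushed further down.
print(apply_repetition_penalty([2.4, 1.0, -1.0], previous_ids=[0, 2]))
```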
```
GPTNeoXForCausalLM(
  (gpt_neox): GPTNeoXModel(
    (embed_in): Embedding(50688, 5120)
    (layers): ModuleList(
      (0-35): 36 x GPTNeoXLayer(
        (input_layernorm): LayerNorm((5120,), eps=1e-05, elementwise_affine=True)
        (post_attention_layernorm): LayerNorm((5120,), eps=1e-05, elementwise_affine=True)
        (attention): GPTNeoXAttention(
          (rotary_emb): RotaryEmbedding()
          (query_key_value): Linear(in_features=5120, out_features=15360, bias=True)
          (dense): Linear(in_features=5120, out_features=5120, bias=True)
        )
        (mlp): GPTNeoXMLP(
          (dense_h_to_4h): Linear(in_features=5120, out_features=20480, bias=True)
          (dense_4h_to_h): Linear(in_features=20480, out_features=5120, bias=True)
          (act): GELUActivation()
        )
      )
    )
    (final_layer_norm): LayerNorm((5120,), eps=1e-05, elementwise_affine=True)
  )
  (embed_out): Linear(in_features=5120, out_features=50688, bias=False)
)
```
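As a sanity check on the "12b" in the model name, the parameter count can be estimated directly from the shapes in the printed module tree above (a back-of-the-envelope calculation; all shapes are copied from the dump):

```python
hidden = 5120   # hidden size, from the Linear/LayerNorm shapes
vocab = 50688   # embedding rows
layers = 36     # 36 x GPTNeoXLayer

embed_in = vocab * hidden                   # Embedding(50688, 5120)
qkv = hidden * 3 * hidden + 3 * hidden      # query_key_value weight + bias
attn_out = hidden * hidden + hidden         # attention dense weight + bias
mlp_up = hidden * 4 * hidden + 4 * hidden   # dense_h_to_4h weight + bias
mlp_down = 4 * hidden * hidden + hidden     # dense_4h_to_h weight + bias
layernorms = 2 * 2 * hidden                 # two LayerNorms, weight + bias each
per_layer = qkv + attn_out + mlp_up + mlp_down + layernorms

final_ln = 2 * hidden
embed_out = vocab * hidden                  # untied output projection (bias=False)

total = embed_in + layers * per_layer + final_ln + embed_out
print(f"{total / 1e9:.2f}B parameters")     # roughly 11.85B, i.e. the "12b"
```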
This model was trained using H2O LLM Studio with the configuration file cfg.yaml. Visit H2O LLM Studio to learn how to train your own large language models.
Model validation results using the EleutherAI lm-evaluation-harness:
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=h2oai/h2ogpt-gm-oasst1-en-1024-12b --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
| Task          | Version | Metric   | Value  |   | Stderr |
|---------------|--------:|----------|-------:|---|-------:|
| arc_challenge |       0 | acc      | 0.3345 | ± | 0.0138 |
|               |         | acc_norm | 0.3754 | ± | 0.0142 |
| arc_easy      |       0 | acc      | 0.6435 | ± | 0.0098 |
|               |         | acc_norm | 0.5800 | ± | 0.0101 |
| boolq         |       1 | acc      | 0.5098 | ± | 0.0087 |
| hellaswag     |       0 | acc      | 0.5150 | ± | 0.0050 |
|               |         | acc_norm | 0.6951 | ± | 0.0046 |
| openbookqa    |       0 | acc      | 0.3080 | ± | 0.0207 |
|               |         | acc_norm | 0.3980 | ± | 0.0219 |
| piqa          |       0 | acc      | 0.7704 | ± | 0.0098 |
|               |         | acc_norm | 0.7704 | ± | 0.0098 |
| winogrande    |       0 | acc      | 0.6622 | ± | 0.0133 |
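To summarize the table, a short script can average the raw `acc` rows (an unweighted mean over the seven tasks, purely illustrative; values are copied from the table above):

```python
# Raw `acc` values copied from the lm-evaluation-harness table above.
acc = {
    "arc_challenge": 0.3345,
    "arc_easy": 0.6435,
    "boolq": 0.5098,
    "hellaswag": 0.5150,
    "openbookqa": 0.3080,
    "piqa": 0.7704,
    "winogrande": 0.6622,
}

mean_acc = sum(acc.values()) / len(acc)
print(f"Unweighted mean acc over {len(acc)} tasks: {mean_acc:.4f}")
```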
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.

By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.