Model: TheBloke/wizard-vicuna-13B-SuperHOT-8K-GPTQ
Chat & support: my new Discord server
Want to contribute? TheBloke's Patreon page
These files are GPTQ 4-bit model files for June Lee's Wizard Vicuna 13B merged with Kaio Ken's SuperHOT 8K.
It is the result of quantising to 4-bit using GPTQ-for-LLaMa.
This is an experimental new GPTQ which offers up to 8K context size.
The increased context has been tested to work with ExLlama, via the latest release of text-generation-webui.
It has also been tested from Python code using AutoGPTQ, with trust_remote_code=True.
Code credits:
Please read carefully below to see how to use it.
GGML versions are not yet provided, as there is not yet support for SuperHOT in llama.cpp. This is being investigated and will hopefully come soon.
Please make sure you're using the latest version of text-generation-webui.
First make sure you have AutoGPTQ and Einops installed:
```
pip3 install einops auto-gptq
```
Then run the following code. Note that in order for this to work, `config.json` has been hardcoded to a sequence length of 8192.
If you want to try 4096 instead to reduce VRAM usage, manually edit `config.json` and set `max_position_embeddings` to the value you want.
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/wizard-vicuna-13B-SuperHOT-8K-GPTQ"
model_basename = "wizard-vicuna-13b-superhot-8k-GPTQ-4bit-128g.no-act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device_map='auto',
        use_triton=use_triton,
        quantize_config=None)

model.seqlen = 8192

# Note: check the prompt template is correct for this model.
prompt = "Tell me about AI"
prompt_template = f'''USER: {prompt}
ASSISTANT:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline.
# Prevent printing a spurious transformers error when using pipeline with AutoGPTQ.
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```
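If you take the 4096 route mentioned above, the edit can be scripted; here is a minimal sketch, assuming the repo has been downloaded to a local directory (the path below is a placeholder):

```python
import json

# Placeholder path: wherever you downloaded the repo locally
config_path = "wizard-vicuna-13B-SuperHOT-8K-GPTQ/config.json"

with open(config_path) as f:
    config = json.load(f)

# Lower the hardcoded 8192 sequence length to 4096 to reduce VRAM usage
config["max_position_embeddings"] = 4096

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```

If you do this, remember to set model.seqlen = 4096 in the loading code above to match.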
Provided in the repo is llama_rope_scaled_monkey_patch.py, written by @kaiokendev.
It can theoretically be added to any Python UI or custom code to achieve the same result as trust_remote_code=True. I have not tested this, and it should be superseded by using trust_remote_code=True, but I include it for completeness and for interest.
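I won't reproduce the patch here, but the core idea behind it (kaiokendev's RoPE position interpolation) looks roughly like the sketch below: positions are multiplied by a scale factor before the rotary angles are computed, so an 8K context is interpolated back into the model's original trained range. Class and argument names are illustrative only, not the actual contents of llama_rope_scaled_monkey_patch.py:

```python
import torch

class ScaledRotaryEmbedding(torch.nn.Module):
    """Illustrative scaled RoPE, not the actual patch file."""

    def __init__(self, dim, max_position_embeddings=8192, base=10000, scale=0.25):
        super().__init__()
        # Standard RoPE inverse frequencies
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        # The key trick: positions are scaled down (8192 * 0.25 = 2048),
        # so extended positions land inside the range the model was trained on
        t = torch.arange(max_position_embeddings).float() * scale
        freqs = torch.einsum("i,j->ij", t, inv_freq)
        emb = torch.cat((freqs, freqs), dim=-1)
        self.register_buffer("cos_cached", emb.cos())
        self.register_buffer("sin_cached", emb.sin())

    def forward(self, x, seq_len):
        # Return the cos/sin tables for the requested sequence length
        return self.cos_cached[:seq_len].to(x.dtype), self.sin_cached[:seq_len].to(x.dtype)
```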
`wizard-vicuna-13b-superhot-8k-GPTQ-4bit-128g.no-act.order.safetensors`
This will work with AutoGPTQ, ExLlama, and the CUDA versions of GPTQ-for-LLaMa. There are reports of issues with the Triton mode of recent GPTQ-for-LLaMa; if you have issues, please use AutoGPTQ instead.
It was created with group_size 128 to increase inference accuracy, but without --act-order (desc_act) to increase compatibility and improve inference speed.
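For reference, those settings correspond to a quantisation config along these lines in AutoGPTQ (a sketch of the parameters, not the exact command that produced the file):

```python
from auto_gptq import BaseQuantizeConfig

quantize_config = BaseQuantizeConfig(
    bits=4,          # 4-bit quantisation
    group_size=128,  # group size 128 for better inference accuracy
    desc_act=False,  # --act-order disabled for compatibility and speed
)
```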
For further support, and discussions on these models and AI in general, join us at:
Thanks to the chirper.ai team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
Special thanks to: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
Patreon special mentions: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost, Nathan LeClaire, Iucharbius, Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex, terasurfer, Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.
Thank you to all my generous patrons and donaters!
# Original model card: Kaio Ken's SuperHOT 8K

This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in the GitHub blog. Tests have shown that the model does indeed leverage the extended context at 8K.
You will need to use either the monkeypatch or, if you are already using the monkeypatch, change the scaling factor to 0.25 and the maximum sequence length to 8192.
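To see why those two numbers go together, here is a quick check (my own arithmetic, not from the original card): the 0.25 scaling factor interpolates the extended positions back into the model's original 2K trained range.

```python
max_seq_len = 8192
scale = 0.25
# 8192 scaled positions span 0..2048, i.e. LLaMA's original trained context
print(max_seq_len * scale)  # 2048.0
```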
Looking for Merged & Quantized Models?

I trained the LoRA with the following configuration:
# Wizard-Vicuna-13B-HF

This is a float16 HF format repo for junelee's wizard-vicuna 13B.
June Lee's repo was also in HF format. The reason I've made this is that the original repo was in float32, meaning it required 52GB of disk space, VRAM and RAM.
This model was converted to float16 to make it easier to load and manage.
Patreon special mentions: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
# Original model card: WizardVicunaLM

Github page: https://github.com/melodysdreamj/WizardVicunaLM
I am a big fan of the ideas behind WizardLM and VicunaLM. I particularly like the idea of WizardLM handling the dataset itself more deeply and broadly, as well as VicunaLM overcoming the limitations of single-turn conversations by introducing multi-round conversations. As a result, I combined these two ideas to create WizardVicunaLM. This project is highly experimental and designed for proof of concept, not for actual usage.
The questions presented here are not from rigorous tests, but rather, I asked a few questions and requested GPT-4 to score them. The models compared were ChatGPT 3.5, WizardVicunaLM, VicunaLM, and WizardLM, in that order.
| | gpt3.5 | wizard-vicuna-13b | vicuna-13b | wizard-7b | link |
|---|---|---|---|---|---|
| Q1 | 95 | 90 | 85 | 88 | link |
| Q2 | 95 | 97 | 90 | 89 | link |
| Q3 | 85 | 90 | 80 | 65 | link |
| Q4 | 90 | 85 | 80 | 75 | link |
| Q5 | 90 | 85 | 80 | 75 | link |
| Q6 | 92 | 85 | 87 | 88 | link |
| Q7 | 95 | 90 | 85 | 92 | link |
| Q8 | 90 | 85 | 75 | 70 | link |
| Q9 | 92 | 85 | 70 | 60 | link |
| Q10 | 90 | 80 | 75 | 85 | link |
| Q11 | 90 | 85 | 75 | 65 | link |
| Q12 | 85 | 90 | 80 | 88 | link |
| Q13 | 90 | 95 | 88 | 85 | link |
| Q14 | 94 | 89 | 90 | 91 | link |
| Q15 | 90 | 85 | 88 | 87 | link |
| Average | 91 | 88 | 82 | 80 | |
We adopted the approach of WizardLM, which is to extend a single problem more in-depth. However, instead of using individual instructions, we expanded it using Vicuna's conversation format and applied Vicuna's fine-tuning techniques.
Turning a single command into a rich conversation is what we've done here.
After creating the training data, I trained the model according to the Vicuna v1.1 training method.
First, we explore and expand various areas in the same topic using the 7K conversations created by WizardLM. However, we made it in a continuous conversation format instead of the instruction format. That is, it starts with WizardLM's instruction, and then expands into various areas in one conversation using ChatGPT 3.5.
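As an illustration of the format described above, a training sample might look like this (my own mock-up, in the common ShareGPT/Vicuna conversation schema; the field values and content are invented, not taken from the actual dataset):

```python
sample = {
    "id": "wizard_vicuna_example_0",
    "conversations": [
        # Starts from a WizardLM-style instruction...
        {"from": "human", "value": "Explain how photosynthesis works."},
        {"from": "gpt", "value": "Photosynthesis is the process by which ..."},
        # ...then expands into related areas within the same conversation
        {"from": "human", "value": "How does it differ in C4 plants?"},
        {"from": "gpt", "value": "C4 plants first fix CO2 into a four-carbon compound ..."},
    ],
}
```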
After that, we trained the model using Vicuna's fine-tuning format.
Trained with 8 A100 GPUs for 35 hours.
You can find the dataset we used for training, as well as the 13B model, on Hugging Face.
If we extend the conversations to GPT-4's 32K context, we can expect a dramatic improvement, as we could generate roughly 8x longer, more accurate, and richer conversations.
The model is licensed under the LLaMA model license, and the dataset is licensed under OpenAI's terms because it was generated with ChatGPT. Everything else is free.
June Lee is active in the Songdo Artificial Intelligence Study group and GDG Songdo.