# Vicuna 13B 1.1 GPTQ 4bit 128g

This is a 4-bit GPTQ version of the Vicuna 13B 1.1 model.
It was created by merging the deltas provided in the above repo with the original Llama 13B model, using the code provided on their GitHub page.
It was then quantized to 4-bit using GPTQ-for-LLaMa.
Check out this Google Colab provided by eucdee: Google Colab for Vicuna 1.1
I have further Vicuna 1.1 repositories available, in both 13B and 7B sizes.
Open the text-generation-webui UI as normal.
If you get gibberish output, it is because you are using the safetensors file without an updated GPTQ-for-LLaMa.
To use the safetensors file you must have the latest version of GPTQ-for-LLaMa inside text-generation-webui.
If you don't want to update, or you can't, use the pt file instead.
Either way, please read the instructions below carefully.
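If you're not sure which GPTQ-for-LLaMa code your UI is actually using, a quick check like the following can help. This is a sketch: the path assumes the standard text-generation-webui `repositories/` layout described below.

```shell
# Sketch: inspect the GPTQ-for-LLaMa checkout inside text-generation-webui.
cd text-generation-webui/repositories/GPTQ-for-LLaMa
git branch --show-current     # which branch is checked out: triton or cuda
git log -1 --format='%h %cd'  # last commit; an old checkout may fail to load act-order safetensors
```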
Two model files are provided. Ideally use the safetensors file. Full details below:
Details of the files provided:

- `vicuna-13B-1.1-GPTQ-4bit-128g.compat.no-act-order.pt`
- `vicuna-13B-1.1-GPTQ-4bit-128g.latest.safetensors`
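If you only want one of the two model files rather than a full clone of the repo, the standard Hugging Face `resolve/main/<filename>` URL pattern can be used. This is a minimal sketch; the exact set of tokenizer/config files listed alongside the model is an assumption based on typical Llama-style repos, so check the repo's file list.

```shell
# Sketch: download one model file directly from the Hugging Face repo.
# The resolve/main/<filename> URL pattern is standard for HF-hosted files.
REPO=https://huggingface.co/TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g/resolve/main

mkdir -p text-generation-webui/models/vicuna-13B-1.1-GPTQ-4bit-128g
cd text-generation-webui/models/vicuna-13B-1.1-GPTQ-4bit-128g

# Pick ONE of the two model files described above:
wget "$REPO/vicuna-13B-1.1-GPTQ-4bit-128g.latest.safetensors"
# or: wget "$REPO/vicuna-13B-1.1-GPTQ-4bit-128g.compat.no-act-order.pt"

# Tokenizer and config files are also needed (assumed typical Llama layout):
for f in config.json tokenizer.model tokenizer_config.json special_tokens_map.json; do
  wget "$REPO/$f"
done
```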
The file `vicuna-13B-1.1-GPTQ-4bit-128g.compat.no-act-order.pt` can be loaded the same as any other GPTQ file, without requiring any updates to oobabooga's text-generation-webui.
Instructions on using GPTQ 4-bit files in text-generation-webui are here.
The safetensors file was created using `--act-order` to give the maximum possible quantisation quality, but this means it requires the latest GPTQ-for-LLaMa to be used inside the UI.
If you want to use the act-order safetensors files and need to update the Triton branch of GPTQ-for-LLaMa, here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:
```
# Clone text-generation-webui, if you don't already have it
git clone https://github.com/oobabooga/text-generation-webui

# Make a repositories directory
mkdir text-generation-webui/repositories
cd text-generation-webui/repositories

# Clone the latest GPTQ-for-LLaMa code inside text-generation-webui
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
```
Then install this model into text-generation-webui/models and launch the UI as follows:
```
cd text-generation-webui
python server.py --model vicuna-13B-1.1-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type Llama  # add any other command line args you want
```
The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
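As a rough sketch of that dependency setup (package sets change over time, so treat each project's own requirements file as the source of truth):

```shell
# Sketch: install dependencies for the UI and the Triton-branch GPTQ code.
# Assumes both repos were cloned as shown above, inside a suitable Python env.
cd text-generation-webui
pip install -r requirements.txt

cd repositories/GPTQ-for-LLaMa
pip install -r requirements.txt   # Triton branch dependencies
```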
If you are on Windows, or cannot use the Triton branch of GPTQ for any other reason, you can instead use the CUDA branch:
```
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa -b cuda
cd GPTQ-for-LLaMa
python setup_cuda.py install
```
Then link that into text-generation-webui/repositories as described above.
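That linking step can be sketched as follows. The paths assume the two checkouts sit side by side; adjust them to your own layout.

```shell
# Sketch: expose the CUDA-branch checkout to text-generation-webui.
# The UI expects GPTQ code at text-generation-webui/repositories/GPTQ-for-LLaMa.
mkdir -p text-generation-webui/repositories
ln -s "$(pwd)/GPTQ-for-LLaMa" text-generation-webui/repositories/GPTQ-for-LLaMa
```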
Or just use vicuna-13B-1.1-GPTQ-4bit-128g.compat.no-act-order.pt as mentioned above, which should work without any upgrades to text-generation-webui.
For further support, and discussions on these models and AI in general, join us on my Discord server.
Thanks to the chirper.ai team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
Patreon special mentions: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
# Vicuna Model Card

Model type: Vicuna is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. It is an auto-regressive language model, based on the transformer architecture.
Model date: Vicuna was trained between March 2023 and April 2023.
Organizations developing the model: The Vicuna team with members from UC Berkeley, CMU, Stanford, and UC San Diego.
Paper or resources for more information: https://vicuna.lmsys.org/
License: Apache License 2.0
Where to send questions or comments about the model: https://github.com/lm-sys/FastChat/issues
Primary intended uses: The primary use of Vicuna is research on large language models and chatbots.
Primary intended users: The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
Training dataset: 70K conversations collected from ShareGPT.com.
Evaluation dataset: A preliminary evaluation of the model quality is conducted by creating a set of 80 diverse questions and utilizing GPT-4 to judge the model outputs. See https://vicuna.lmsys.org/ for more details.