Model: TheBloke/Project-Baize-v2-13B-GPTQ
Chat & support: my new Discord server
Want to contribute? TheBloke's Patreon page
These files are GPTQ 4bit model files for Project Baize V2 13B.
It is the result of quantising to 4bit using GPTQ-for-LLaMa.
Open the text-generation-webui UI as normal.
Compatible file - Baize-v2-13B-4bit-128g.no-act-order.safetensors
In the main branch - the default one - you will find Baize-v2-13B-4bit-128g.no-act-order.safetensors
This will work with all versions of GPTQ-for-LLaMa. It has maximum compatibility.
It was created without the --act-order parameter. It may have slightly lower inference quality compared to the other file, but is guaranteed to work on all versions of GPTQ-for-LLaMa and text-generation-webui.
```
python llama.py /workspace/ggml/TheBloke_Project-Baize-v2-13B-GGML/HF wikitext2 \
    --wbits 4 \
    --true-sequential \
    --groupsize 128 \
    --save_safetensors /workspace/ggml/TheBloke_Project-Baize-v2-13B-GGML/gptq/Baize-v2-13B-4bit-128g.no-act-order.safetensors
```
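For a quick scripted check outside text-generation-webui, here is a minimal sketch using the AutoGPTQ library (an alternative loader not mentioned in the instructions above; the repo and file names come from this card, and the prompt format is an assumption):

```python
# Hedged sketch: loading the 4-bit no-act-order file with AutoGPTQ (requires auto-gptq and transformers).
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "TheBloke/Project-Baize-v2-13B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    repo,
    model_basename="Baize-v2-13B-4bit-128g.no-act-order",  # file name from this repo, without the extension
    use_safetensors=True,
    device="cuda:0",
)

# Assumed Baize-style prompt format for illustration only.
prompt = "The conversation between human and AI assistant.\n[|Human|]Hello!\n[|AI|]"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```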
For further support, and discussions on these models and AI in general, join us at:
Thanks to the chirper.ai team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
Patreon special mentions : Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
Baize is an open-source chat model trained with LoRA. It uses 100k dialogs generated by letting ChatGPT chat with itself. We also use Alpaca's data to improve its performance. We have released 7B, 13B and 30B models. Please refer to the paper for more details.
Baize (pronounced as By-zor; Simplified Chinese 白泽, Traditional Chinese 白澤, Japanese 白沢, はくたく) is a mythical creature in Chinese folklore, who speaks human languages and knows everything. This is exactly what we expect from a chat model.
⚠️ All model weights and data are for research use ONLY . Commercial use is strictly prohibited . We accept NO responsibility or liability for any use of our data, code or weights.
This is the repo for the Baize project, which aims to build a chat model with LLaMA. This repository contains:
Now you can use Baize with Fastchat, which provides both a CLI and an API!
First, install the latest version of Fastchat:
```
pip install git+https://github.com/huggingface/peft.git
pip install git+https://github.com/lm-sys/FastChat.git
```
(For v1 models only): Merge Baize's LoRA weights into LLaMA. Take the 7B checkpoint as an example.
```
# Note you have to include "baize" in the target directory so Fastchat can recognize Baize.
python3 -m fastchat.model.apply_lora --base huggyllama/llama-7b --target ./model_weights/baize-7b --lora project-baize/baize-lora-7B
```
Now, run the CLI in your terminal! More options and configs can be found here.
```
# Optional: Add `--style rich` for better style.
python -m fastchat.serve.cli --model-path ./model_weights/baize-7b
```
You can use Baize with the OpenAI API or Hugging Face API by following the instructions here.
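Assuming you have already started FastChat's OpenAI-compatible stack (the controller, a model worker pointing at ./model_weights/baize-7b, and the openai_api_server on its default port), a minimal client sketch with the openai Python package might look like the following. The model name and address are assumptions based on the paths above:

```python
# Hedged sketch: querying Baize through FastChat's OpenAI-compatible API
# (assumes the FastChat controller, model worker, and openai_api_server are already running locally).
import openai

openai.api_key = "EMPTY"                      # the local FastChat server does not check the key
openai.api_base = "http://localhost:8000/v1"  # default openai_api_server address; adjust if needed

completion = openai.ChatCompletion.create(
    model="baize-7b",  # name FastChat derives from the ./model_weights/baize-7b directory
    messages=[{"role": "user", "content": "Hello, Baize!"}],
)
print(completion.choices[0].message.content)
```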
You can either host it on your local machine or access the online demo. The demo fetches the LLaMA model and the LoRA weights from the Hugging Face model hub, then runs a user-friendly Gradio interface for chatting.
First, make sure your Python version is 3.8, and then install the required packages using the command below:
```
cd demo
pip install -r requirements.txt
```
You can host the model on your local machine using the following command:
```
# We assume you have obtained access to use LLaMA. The following LLaMA weights are from a 3rd party.
base_model=huggyllama/llama-7b
lora_model=project-baize/baize-lora-7B
python app.py $base_model $lora_model
```

GPU VRAM Requirements

| Model | Inference (without int8) |
|---|---|
| Baize-7B | 16GB |
| Baize-13B | 28GB |
| Baize-30B | 67GB |
If you have a GPU with smaller VRAM, you can do inference with int8 by passing the 8bit argument:
```
python app.py $base_model $lora_model 8bit
```
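For reference, 8-bit loading of a LLaMA base model plus the Baize LoRA adapter typically looks like the sketch below with transformers and peft. This is illustrative only and may differ from what app.py actually does internally:

```python
# Hedged sketch: int8 inference with transformers + peft (requires bitsandbytes; not app.py's exact code).
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model = "huggyllama/llama-7b"
lora_model = "project-baize/baize-lora-7B"

tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=True,          # int8 weights roughly halve the VRAM needed for inference
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, lora_model)  # attach the Baize LoRA adapter
```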
```
pip install -r requirements.txt
```
You can use our released data or collect the data from ChatGPT using the following command:
```
num_process=10             # The number of processes to collect data
max_total_tokens=500000    # Set maximum numbers of tokens to collect data
api_key=xxxxxxxxxxxxxxxxx  # Set your openai api key
for ((i=0; i<$num_process; i++))
do
    python collect.py $api_key $max_total_tokens $i $num_process stackoverflow &
    python collect.py $api_key $max_total_tokens $i $num_process quora &
    python collect.py $api_key $max_total_tokens $i $num_process medical &
done
```
After collecting data, use the following commands to preprocess it:
```
python preprocess.py stackoverflow
python preprocess.py quora
python preprocess.py medical
```
If there's a specific dataset you want to use as seeds for ChatGPT self-chatting, you can simply modify collect.py to load your own data.
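To make the self-chat idea concrete, here is a hypothetical sketch of a seed-driven self-chat loop. It is not collect.py's actual implementation; the file name my_seeds.json and the helper self_chat are made up for illustration, and the 2023-era openai 0.x client is assumed:

```python
# Hedged sketch of seed-driven self-chat data collection (hypothetical, not the project's collect.py).
import json
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"

def self_chat(seed_question: str, turns: int = 4) -> list:
    """Let ChatGPT play both sides of a conversation started from a seed question."""
    transcript = [f"[Human] {seed_question}"]
    for _ in range(turns):
        prompt = (
            "Continue the following conversation between [Human] and [AI], "
            "writing one reply for each side:\n" + "\n".join(transcript)
        )
        reply = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        transcript.append(reply)
    return transcript

# Load your own seed questions instead of the Quora/StackOverflow/medical seeds.
with open("my_seeds.json") as f:   # hypothetical file: a JSON list of question strings
    seeds = json.load(f)

dialogs = [self_chat(q) for q in seeds[:10]]
```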
The fine-tuning code is designed to run on an A100-80G GPU. The finetune.py script accepts four parameters: foundation model size (i.e., 7B, 13B, or 30B), batch size, learning rate, and datasets. Note that the total batch size is fixed to 64 (can be modified here) and the batch size here is the per-device batch size before gradient accumulation; see the sketch after the VRAM table below for how the two relate. Set it to a smaller value if you are training on a GPU with smaller VRAM.
```
# For the 7B model (takes about 9 hours)
python finetune.py 7b 32 0.0002 alpaca,stackoverflow,quora

# For the 13B model (takes about 16 hours)
python finetune.py 13b 16 0.0001 alpaca,stackoverflow,quora

# For the 30B model (takes about 36 hours)
python finetune.py 30b 8 0.00005 alpaca,stackoverflow,quora
```

GPU VRAM Consumption

With the settings above:
| Model | Training (with int8) |
|---|---|
| Baize-7B | 26GB |
| Baize-13B | 25GB |
| Baize-30B | 42GB |
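As a quick illustration of the batch-size relationship mentioned above, this small helper (hypothetical, not part of finetune.py) maps the per-device batch size to the gradient accumulation steps implied by the fixed total batch size of 64:

```python
# Hedged sketch: per-device batch size vs. gradient accumulation steps,
# assuming the fixed effective batch size of 64 described above (not finetune.py's actual code).
TOTAL_BATCH_SIZE = 64

def accumulation_steps(per_device_batch_size: int) -> int:
    """Gradient accumulation steps needed to reach the fixed total batch size."""
    assert TOTAL_BATCH_SIZE % per_device_batch_size == 0
    return TOTAL_BATCH_SIZE // per_device_batch_size

print(accumulation_steps(32))  # 7B setting above  -> 2
print(accumulation_steps(16))  # 13B setting above -> 4
print(accumulation_steps(8))   # 30B setting above -> 8
```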
Got a question? See this issue.
Now you can easily merge the trained LoRA weights into a LLaMA model, so you can use it with everything that supports the standard Hugging Face API!
Here's an example for merging baize-lora-7B into LLaMA-7B.
```
python merge_lora.py \
    --base huggyllama/llama-7b \
    --target ~/model_weights/baize-7b \
    --lora project-baize/baize-lora-7B
```
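Once merged, the target directory behaves like any standard Hugging Face checkpoint. A minimal loading sketch follows; the path is taken from the example above, and the prompt format is an assumption:

```python
# Hedged sketch: loading the merged Baize-7B checkpoint with plain transformers.
import os
from transformers import LlamaForCausalLM, LlamaTokenizer

merged_dir = os.path.expanduser("~/model_weights/baize-7b")  # --target directory from merge_lora.py above
tokenizer = LlamaTokenizer.from_pretrained(merged_dir)
model = LlamaForCausalLM.from_pretrained(merged_dir, device_map="auto")

prompt = "The conversation between human and AI assistant.\n[|Human|]What is Baize?\n[|AI|]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```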
```
@article{xu2023baize,
  title={Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data},
  author={Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian},
  journal={arXiv preprint arXiv:2304.01196},
  year={2023}
}
```