Model:
TheBloke/Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-GPTQ
Chat & support: my new Discord server
Want to contribute? TheBloke's Patreon page
These files are GPTQ 4-bit model files for Eric Hartford's Wizard Vicuna 7B Uncensored merged with Kaio Ken's SuperHOT 8K.
It is the result of quantising to 4-bit using GPTQ-for-LLaMa.
This is an experimental new GPTQ which offers up to 8K context size.
The increased context is tested to work with ExLlama, via the latest release of text-generation-webui.
It has also been tested from Python code using AutoGPTQ, with trust_remote_code=True.
Code credits:
Please read carefully below to see how to use it.
Please make sure you're using the latest version of text-generation-webui.
First make sure you have AutoGPTQ and Einops installed:
```
pip3 install einops auto-gptq
```
Then run the following code. Note that in order to get this to work, config.json has been hardcoded to a sequence length of 8192.
If you want to try 4096 instead to reduce VRAM usage, please manually edit config.json to set max_position_embeddings to the value you want.
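For example, a minimal sketch of that edit (the local path to the downloaded repo is an assumption; adjust it to wherever you cloned or downloaded the model):

```python
# Hypothetical sketch: reduce the context window to 4096 to save VRAM by editing
# max_position_embeddings in the downloaded config.json.
import json
from pathlib import Path

config_path = Path("Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-GPTQ/config.json")  # assumed local path
config = json.loads(config_path.read_text())
config["max_position_embeddings"] = 4096
config_path.write_text(json.dumps(config, indent=2))
```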
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-GPTQ"
model_basename = "wizard-vicuna-7b-uncensored-superhot-8k-GPTQ-4bit-128g.no-act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device_map='auto',
        use_triton=use_triton,
        quantize_config=None)

model.seqlen = 8192

# Note: check the prompt template is correct for this model.
prompt = "Tell me about AI"
prompt_template = f'''USER: {prompt}
ASSISTANT:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline.
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ.
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```
Provided in the repo is llama_rope_scaled_monkey_patch.py, written by @kaiokendev.
It can theoretically be added to any Python UI or custom code to enable the same result as trust_remote_code=True. I have not tested this, and it should be superseded by using trust_remote_code=True, but I include it for completeness and for interest.
wizard-vicuna-7b-uncensored-superhot-8k-GPTQ-4bit-128g.no-act.order.safetensors
This will work with AutoGPTQ, ExLlama, and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with the Triton mode of recent GPTQ-for-LLaMa; if you have issues, please use AutoGPTQ instead.
It was created with group_size 128 to increase inference accuracy, but without --act-order (desc_act) to increase compatibility and improve inference speed.
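For reference, those parameters correspond to the following AutoGPTQ quantisation config (a sketch for illustration only; it is not needed to run the pre-quantised model):

```python
# Illustrative sketch of the quantisation settings described above.
from auto_gptq import BaseQuantizeConfig

quantize_config = BaseQuantizeConfig(
    bits=4,          # 4-bit GPTQ
    group_size=128,  # group_size 128 for better inference accuracy
    desc_act=False   # no --act-order, for wider compatibility and faster inference
)
```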
For further support, and discussions on these models and AI in general, join us at:
Thanks to the chirper.ai team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
Special thanks to: Luke from CarbonQuill, Aemon Algiz.
Patreon special mentions : RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
This is a second prototype of SuperHOT, an NSFW-focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in the GitHub blog.
Looking for Merged & Quantized Models?
Make some please :)
Using the monkey-patch?
You will NEED to apply the monkeypatch or, if you are already using the monkeypatch, change the scaling factor to 0.25 and the maximum sequence length to 8192.
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the llama_rope_scaled_monkey_patch.py into your working directory and call the exported function replace_llama_rope_with_scaled_rope at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
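As a minimal sketch (assuming llama_rope_scaled_monkey_patch.py has been copied into your working directory, and that the exported function takes no arguments):

```python
# Apply the RoPE-scaling monkey patch before loading any model.
from llama_rope_scaled_monkey_patch import replace_llama_rope_with_scaled_rope

replace_llama_rope_with_scaled_rope()  # patches Transformers' LLaMA RoPE to apply the scaling factor

# ...then load the tokenizer and model exactly as in the Python example above.
```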
Using Oobabooga with Exllama?
1. Switch your loader to exllama or exllama_hf.
2. Add the arguments max_seq_len 8192 and compress_pos_emb 4. While the model may work well with compress_pos_emb 2, it was trained on 4, so that is what I advocate for you to use.
Example in the command-line:
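The flag names below assume a recent text-generation-webui build with ExLlama support; adjust if your version differs:

```
python server.py --loader exllama_hf --max_seq_len 8192 --compress_pos_emb 4
```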
In the UI, you will see the loader option in the Models tab. Once you select either exllama or exllama_hf, the max_seq_len and compress_pos_emb settings will appear.
Training Details
I trained the LoRA with the following configuration:
This is Wizard-Vicuna trained against LLaMA-7B with a subset of the dataset from which responses containing alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note:
An uncensored model has no guardrails.
You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.
Publishing anything this model generates is the same as publishing it yourself.
You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.