Model:
GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k
GALACTICA 6.7B fine-tuned on the Evol-Instruct 70k dataset.
The model card from the original GALACTICA repo can be found here, and the original paper here.
The HF dataset for Evol-Instruct-70k can be found here, and the original GitHub repo for WizardLM is here.
The GALACTICA models are trained on a large-scale scientific corpus and are designed to perform scientific tasks.
The GALACTICA model card specifies that the primary intended users of the GALACTICA models are researchers studying language models applied to the scientific domain, and it cautions against production use of GALACTICA without safeguards because the model can produce inaccurate information. The original GALACTICA models are available under a non-commercial CC BY-NC 4.0 license, and models based on the Evol-Instruct-70k dataset are additionally subject to the OpenAI Terms of Service.
Example Outputs
Full Prompt:
Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Write out Maxwell's equations and explain each one. ### Response:
Response:
Maxwell's equations are a set of four equations that describe the behavior of electromagnetic fields. These equations were developed by James Clerk Maxwell in 1864 and are widely used in the study of electromagnetism.
$$\nabla \bullet \textbf{E} = \frac{\rho}{\epsilon_0}$$
where E is the electric field, ρ is the charge density, ε0 is the electric constant, and ∇ is the divergence operator.
$$\nabla \times \textbf{E} = -\frac{\partial\textbf{B}}{\partial t}$$
where B is the magnetic field, t is time, and ∇ × is the curl operator.
$$\nabla \bullet \textbf{B} = 0$$
where B is the magnetic field.
$$\nabla \times \textbf{B} = \mu_0\textbf{J} + \mu_0\epsilon_0\frac{\partial \textbf{E}}{\partial t}$$
where J is the current density, μ0 is the magnetic constant, and ε0 is the electric constant.
These equations form the foundation of all electromagnetic theory and are used to describe the behavior of electromagnetic fields in a wide range of applications, from radio waves to high-energy particles.
This model was trained by fine-tuning pre-trained GALACTICA 6.7B on the Evol-Instruct-70k dataset. GALACTICA models were trained on 106 billion tokens of open-access scientific text and data, including papers, textbooks, scientific websites, encyclopedias, and more. Fine-tuning the base GALACTICA models on the 70k instruction-response pairs in the Evol-Instruct-70k dataset allows users to query the resulting model in an instruct-response fashion.
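For readers who want to inspect the instruction-response pairs themselves, the data can be loaded with the datasets library. This is a minimal sketch: the dataset id WizardLM/WizardLM_evol_instruct_70k and its instruction/output column names are assumptions based on the public Evol-Instruct-70k release on the Hugging Face Hub, not something this card specifies.

```python
# Minimal sketch: the dataset id and column names below are assumptions based on
# the public Evol-Instruct-70k release on the Hugging Face Hub.
from datasets import load_dataset

evol_instruct = load_dataset("WizardLM/WizardLM_evol_instruct_70k", split="train")
example = evol_instruct[0]
print(example["instruction"])  # the task description given to the model
print(example["output"])       # the target response used during fine-tuning
```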
The GALACTICA Evol-Instruct-70K weights are made available for use with the transformers library.
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k")
model = AutoModelForCausalLM.from_pretrained(
    "GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k",
    device_map="auto",
    torch_dtype=torch.float16,
)

# the evol-instruct models were fine-tuned with the same hidden prompts as the Alpaca project
no_input_prompt_template = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)
prompt = "Write out Maxwell's equations and explain the meaning of each one."
formatted_prompt = no_input_prompt_template.format_map({"instruction": prompt})

tokenized_prompt = tokenizer(formatted_prompt, return_tensors="pt").input_ids.to(model.device)
out_tokens = model.generate(tokenized_prompt)

print(tokenizer.batch_decode(out_tokens, skip_special_tokens=False, clean_up_tokenization_spaces=False))
```
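A hedged follow-up to the snippet above: if you only want the model's answer rather than the full prompt-plus-completion, you can slice off the prompt tokens before decoding. The max_new_tokens value here is an illustrative choice, not a setting from the original card.

```python
# Illustrative only: max_new_tokens and the response-only decoding are
# example choices, not settings documented for this model.
out_tokens = model.generate(tokenized_prompt, max_new_tokens=512)

# keep only the tokens generated after the prompt, then decode the response text
response_tokens = out_tokens[0, tokenized_prompt.shape[1]:]
print(tokenizer.decode(response_tokens, skip_special_tokens=True))
```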
GALACTICA 6.7B Evol-Instruct was fine-tuned in about 22 hours using 8 A100 80GB GPUs, 16-bit mixed precision, an effective batch size of 64, and a maximum context window of 2048 tokens. This model was trained using fully sharded data parallelism (FSDP).
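For orientation, a rough sketch of how such a setup might be expressed with the transformers Trainer is shown below. The original training scripts are not reproduced in this card, so every value here is an assumption chosen to match the description above (8 GPUs, fp16 mixed precision, effective batch size 64, FSDP), not the actual configuration; the 2048-token context would be enforced during tokenization.

```python
# Illustrative sketch only: hyperparameters and FSDP flags are assumptions,
# not the configuration used to train this model.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="galactica-6.7b-evol-instruct-70k",
    per_device_train_batch_size=2,   # 2 per GPU x 8 GPUs x 4 accumulation steps = 64 effective
    gradient_accumulation_steps=4,
    fp16=True,                       # 16-bit mixed precision
    fsdp="full_shard auto_wrap",     # fully sharded data parallelism
    fsdp_transformer_layer_cls_to_wrap="OPTDecoderLayer",  # GALACTICA uses the OPT architecture
    num_train_epochs=3,              # assumed; not stated in the card
    learning_rate=2e-5,              # assumed; not stated in the card
)
```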
Qualitative evaluation suggests that the evol-instruct-70k fine-tuned GALACTICA models are significantly more controllable and attentive to user prompts than the Alpaca fine-tuned GALPACA models.
@inproceedings{GALACTICA,
    title={GALACTICA: A Large Language Model for Science},
    author={Ross Taylor and Marcin Kardas and Guillem Cucurull and Thomas Scialom and Anthony Hartshorn and Elvis Saravia and Andrew Poulton and Viktor Kerkez and Robert Stojnic},
    year={2022}
}
@misc{xu2023wizardlm,
    title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
    author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
    year={2023},
    eprint={2304.12244},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}