Model:
pleisto/yuren-baichuan-7b
Task:
Visual Question Answering
YuRen-BaiChuan-7B is an open-source multimodal large language model based on baichuan-inc/baichuan-7B and trained with multi-task supervised fine-tuning, building on Pleisto's data-centric AI work. YuRen delivers excellent performance on multi-turn dialogue, open-domain question answering, role-playing, text generation, text understanding, image understanding, and other tasks.
Multimodal: Following related work such as LLaVA and mPLUG-Owl, YuRen fuses the language modality of the LLM with the vision encoder of laion/clip-vit-l-14-datacomp.xl-s13b-b90k, currently the SOTA CLIP model, through a linear projection layer, achieving excellent image-understanding capability.
Super High-Quality SFT Dataset: The core of YuRen's SFT dataset is a subset of Pleisto's proprietary commercial multi-turn dialogue and instruction fine-tuning dataset; every instruction in it has passed multiple rounds of manual and algorithmic quality checks. Following the work of Orca LLM, we further augmented this subset with GPT-4-based data augmentation. The image-modality data combines the public dataset coco2017, a subset of ScienceQA, a subset of laion5b, and the Chinese subset of Pleisto's proprietary diffusion-model training dataset.
Business-friendly: YuRen's training and inference code is open-sourced under the Apache-2.0 license, while the model weights fully inherit the baichuan-7B model license agreement; commercial use only requires free registration with the baichuan team.
Fully Compatible with ChatML: YuRen fully supports the same ChatML format used by GPT-4, which both minimizes the security risks of prompt injection and, like GPT-4, enables good adherence to the system prompt. (Yes, a considerable portion of our training data consists of dialogues with system prompts.)
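As a minimal illustration of the ChatML convention (a sketch, not an official Pleisto utility), a single-turn prompt can be assembled like this; the role names and special tokens match the inference example later in this card:

```python
# Minimal sketch of assembling a ChatML prompt for yuren-baichuan-7b.
# <|im_start|> and <|im_end|> are the special tokens delimiting each message;
# the role name (system/user/assistant) follows <|im_start|> on the same line.
def build_chatml_prompt(system: str, user: str) -> str:
    """Build a single-turn ChatML prompt that ends with an open assistant turn."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(build_chatml_prompt("You are a helpful AI assistant.", "Hello!"))
```

Ending the prompt with an open `assistant` turn lets the model generate until it emits `<|im_end|>`.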
Please visit github.com/pleisto/yuren-baichuan-7b for a more complete list of showcases.
I want to open a hotpot restaurant, but the market is too homogenized. Help me find a point of differentiation and write a brand plan for me, in markdown format.

Sure, let me help you draft a brand plan.

Brand name: "Fireplace Impression" (火炉印象)

Brand positioning: "Create a unique hotpot experience that makes customers feel the warmth of home"

Brand slogan: "Hotpot like at home, warming your heart"

Brand image: The brand image of "Fireplace Impression" will center on a warm, comfortable, family-like atmosphere. We want customers to feel like they are coming home, so the restaurant's décor will feature wooden elements, with family photos or paintings on the walls to create a cozy ambience. We will also provide comfortable sofa seating, so customers can enjoy both great food and a comfortable experience.

Product features: Our products will center on "family-style hotpot". We will offer broth bases in a variety of flavors so customers can choose according to their own taste, along with a range of family-style hotpot sides such as fish balls, shrimp paste, and vegetables, letting customers assemble a hotpot of their own. In addition, we will serve family-style snacks such as zhajiang noodles and fried rice, so customers can enjoy tasty snacks alongside their hotpot.

Marketing strategy: We will promote the brand on social media and run promotions on online platforms to attract more customers. We will also partner with local communities to host events such as family gatherings and birthday parties, making the brand more memorable.

Brand vision: Our vision is to become the most popular family-style hotpot brand in the area, letting customers feel the warmth and coziness of home.

I hope this brand plan helps! If you have any other questions, feel free to ask.
Use an appropriate format to structure a formal letter of recommendation for a student applying to a prestigious graduate program in computer science

Dear [Recipient's Name],
I am writing to recommend [Student's Name] for admission to the prestigious graduate program in computer science at [Program Name]. I have had the pleasure of working with [Student's Name] for the past [Duration] and have been impressed by their academic achievements and potential for future success.
[Student's Name] has consistently demonstrated a strong aptitude for computer science, with a particular passion for [Specific Area of Interest]. Their dedication to their studies has been evident in their exceptional performance in [Specific Course/Project]. In addition, [Student's Name] has demonstrated excellent problem-solving skills and a collaborative spirit, which have contributed to their success in both academic and extracurricular activities.
[Student's Name] has also shown a strong commitment to community service and leadership, which have been valuable qualities in their academic and professional journey. Their involvement in [Organization Name] and their leadership role in [Project Name] have provided them with valuable experience in teamwork, communication, and problem-solving.
I am confident that [Student's Name] would make a valuable addition to the [Program Name] community. Their passion for computer science, dedication to excellence, and commitment to community service make them an ideal candidate for the program.
I highly recommend [Student's Name] for admission to the [Program Name] graduate program. Thank you for considering my recommendation.
Sincerely,
[Your Name]
[Your Position]
[System Prompt] Mijia fresh-air air conditioner, floor-standing, 3 HP, new level-1 energy efficiency. Warranty: parts of the machine itself (excluding other components) are covered for 6 years. Applicable area: 30-46㎡. Gentle-breeze airflow, zoned air delivery, dual temperature and humidity control, Panasonic-made compressor, rapid cooling and heating, whole-home smart interconnection with voice and app control, antibacterial anti-mold filter. Please act as a customer service agent and answer customer inquiries based on the information above.
[User] How many years is the warranty on this air conditioner?
[YuRen] This air conditioner comes with a 6-year whole-machine warranty, covering the machine's own parts.
[User] Is the quality reliable? Isn't Xiaomi's main business phones? Can they really build a decent air conditioner?
[YuRen] Our Mijia fresh-air air conditioner is manufactured by a Xiaomi ecosystem-chain company, with Panasonic as our compressor supplier. We have strict quality control and testing processes to ensure reliable product quality, and we also provide comprehensive after-sales service and support, so you can buy and use our product with confidence.
[User] Why is it more expensive than Gree's?
[YuRen] Our Mijia fresh-air air conditioner is highly competitive in design, quality, features, and experience, and uses advanced technology and materials, so its price may be slightly higher than comparable products. However, we believe it offers better value for money: beyond excellent performance and a comfortable experience, it provides smart interconnection, dual temperature and humidity control, and other features that better meet consumers' needs. Our after-sales service and support are also more comprehensive, giving consumers extra peace of mind.
[User] What other features does it have? Tell me more.
[YuRen] Our Mijia fresh-air air conditioner offers the following features:
In summary, our Mijia fresh-air air conditioner not only provides a comfortable indoor environment but also offers smart management and control, better meeting consumers' needs.
[System Prompt] You are a bilingual translation system. You translate the user's input into Chinese and return it to the user as JSON in the form {"input": the user's message, "output": its Chinese translation}.
[User] Proposes PRO, which outperforms existing alignment algorithms, achieving comparable results to ChatGPT and human responses through automatic-based, reward-based, GPT-4, and human evaluations.
[YuRen] {"input": "Proposes PRO, which outperforms existing alignment algorithms, achieving comparable results to ChatGPT and human responses through automatic-based, reward-based, GPT-4, and human evaluations.", "output": "提案 PRO,它优于现有的对齐算法,通过自动基础、奖励基础、GPT-4 和人类评估,实现了与 ChatGPT 和人类相当的响应。"}
[User] BREAKING: Elon Musk reportedly desires to buy earth and limit how many breaths you can take daily
[YuRen] {"input": "BREAKING: Elon Musk reportedly desires to buy earth and limit how many breaths you can take daily", "output": "突发:据报道,埃隆·马斯克想买下地球,并限制每天呼吸的次数。"}
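Because the translation system prompt above pins the reply to a fixed JSON shape, the output is easy to consume programmatically. A small sketch of doing so (`parse_translation_reply` is a hypothetical helper, not part of the repository):

```python
import json

# Hypothetical helper for consuming replies produced under the translation
# system prompt above, which instructs the model to answer as
# {"input": ..., "output": ...}.
def parse_translation_reply(reply: str) -> tuple[str, str]:
    """Validate and unpack a {"input": ..., "output": ...} JSON reply."""
    data = json.loads(reply)
    if not {"input", "output"} <= data.keys():
        raise ValueError("reply is missing the 'input'/'output' keys")
    return data["input"], data["output"]

src, zh = parse_translation_reply('{"input": "Hello", "output": "你好"}')
```

In practice you would also want to catch `json.JSONDecodeError`, since an unaligned model may occasionally break the format.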
[User] Write a Weibo ad
[YuRen] 【#OfficeFinds#】Today we are recommending a stylish and practical office chair: this orange ergonomic chair! Made with high-quality mesh fabric, it is very comfortable to sit on and effectively protects your spine. Its adjustable height and tilt adapt to your different sitting postures, and its versatile storage features help you keep your desk organized. Whether you are working long hours or taking a lunchtime nap, this chair gives you a comfortable experience. Get yours today!
Before training, the baichuan-7B model weights were converted into a LLaMA-compatible format, so for text-only inference you can conveniently load yuren-baichuan-7b directly with transformers' LlamaForCausalLM and LlamaTokenizer.
```python
from transformers import LlamaTokenizer, LlamaForCausalLM
import torch

device = torch.device("cuda")
query = (
    "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。\n"
    "Would you rate the previous review as positive, neutral or negative?\n"
    "Return in json object"
)
model = LlamaForCausalLM.from_pretrained(
    "pleisto/yuren-baichuan-7b", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = LlamaTokenizer.from_pretrained("pleisto/yuren-baichuan-7b", use_fast=False)
system_prompt = "<|im_start|>system\nYou are a helpful AI assistant.<|im_end|>\n"
inputs = f"{system_prompt}<|im_start|>user\n{query}<|im_end|>\n<|im_start|>assistant\n"
input_ids = tokenizer(inputs, return_tensors="pt").input_ids.to(device)
generate_ids = model.generate(
    input_ids,
    max_new_tokens=4096,
    do_sample=True,
    top_p=1.0,
    temperature=0.42,
    eos_token_id=64002,
)
output = tokenizer.batch_decode(generate_ids)[0]
print(output)
"""
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。
Would you rate the previous review as positive, neutral or negative?
Return in json object<|im_end|>
<|im_start|>assistant
{
  "rating": "positive"
}<|im_end|>
"""
```
```shell
git clone https://github.com/pleisto/yuren-baichuan-7b.git
curl -sSf https://rye-up.com/get | bash
source "$HOME/.rye/env"
rye sync
rye run webui "pleisto/yuren-baichuan-7b"  # --load_8bit True --server_name "0.0.0.0" --share True
```
Due to its relatively small parameter count, yuren-baichuan-7B performs unsatisfactorily on numerical calculation and logical reasoning tasks; it also cannot fully exploit CLIP's strengths on multimodal tasks and may exhibit some hallucination. If you have real business needs, you can contact us: we offer closed-source models with larger parameter counts, and we will also consider open-sourcing larger models in the future.
The current version of yuren-baichuan-7B has not yet been aligned with human preferences, so its output is somewhat random: repeated answers to the same question may differ noticeably in quality. We will release a human-preference-aligned model in the future to improve stability.
Although we have implemented content-safety controls in the training data and the preset system prompt, the model may still produce biased, discriminatory, fictional, or otherwise inappropriate content. We strongly recommend taking additional safety measures when using the model, such as filtering, reviewing, or restricting its inputs and outputs, to avoid harming your users.
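As one possible shape for such measures, here is a deliberately naive sketch of an output filter wrapped around a text-generation callable. The blocklist, the `generate` callable, and the fallback message are all illustrative placeholders; real deployments should use proper moderation tooling rather than keyword matching.

```python
# Illustrative output filter only. BLOCKLIST and the fallback message are
# placeholders; keyword matching is far weaker than real moderation tooling.
BLOCKLIST = {"some_banned_term"}

def safe_generate(generate, prompt: str) -> str:
    """Call `generate` and withhold the reply if it matches the blocklist."""
    text = generate(prompt)
    lowered = text.lower()
    if any(term in lowered for term in BLOCKLIST):
        return "[response withheld by content filter]"
    return text

print(safe_generate(lambda p: "a harmless reply", "hello"))
```

The same wrapper pattern can be applied to the prompt side to restrict model inputs.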
The inference code is released under the Apache-2.0 license, and the copyright belongs to Pleisto.