Dataset:
Anthropic/hh-rlhf
This repository provides access to two different kinds of data:

1. Human preference data about helpfulness and harmlessness, meant for training preference (reward) models for subsequent RLHF. These data are not intended for supervised training of dialogue agents.
2. Human-generated red teaming data, meant for studying how crowdworkers red team models and what successful red team attempts look like.
Disclaimer: The data (especially the harmlessness preference data and the red team data) contain content that may be offensive or upsetting. Topics include, but are not limited to, discriminatory language and discussions of abuse, violence, self-harm, exploitation, and other potentially upsetting subject matter. Please only engage with the data in accordance with your own personal risk tolerance. The data are intended for research purposes, especially research that can make models less harmful. The views expressed in the data do not reflect the views of Anthropic or any of its employees. As mentioned above, these data are not intended for training dialogue agents as this will likely lead to harmful model behavior.
Each of these datasets is described further below.
The data are described in the paper: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback. If you find the data useful, please cite the paper. The data format is very simple -- each line of the jsonl files contains a pair of texts, one "chosen" and one "rejected".
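For illustration, a single record has roughly the following shape. The dialogue text below is invented; only the "chosen"/"rejected" keys come from the description above, and the Human/Assistant turn markers reflect how the transcripts are typically formatted in this dataset.

```python
# Illustrative only: invented dialogue text, real record structure.
example = {
    "chosen": "\n\nHuman: Can you recommend a good book on gardening?"
              "\n\nAssistant: Sure. A widely recommended starting point is ...",
    "rejected": "\n\nHuman: Can you recommend a good book on gardening?"
                "\n\nAssistant: I can't help with that.",
}
```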
For helpfulness, the data are grouped into train/test splits in three tranches: from our base models (context-distilled 52B language models), via rejection sampling (mostly with best-of-16 sampling) against an early preference model, and a dataset sampled during our iterated "online" process.
For harmlessness, the data are only collected for our base models, but otherwise formatted in the same way.
Details about the data collection process and crowdworker population can be found in the paper, specifically in section 2 and appendix D.
The data are described in the paper: Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned. If you find the data useful, please cite the paper. At a high level, these data can be examined to understand what successful red team attempts look like.
Details about the data and data collection procedures can be found in the Datasheet in the appendix of the paper.
Each line of the jsonl file contains a dictionary; the individual fields and their meanings are documented in the Datasheet referenced above.
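If you just want to see the field names without consulting the Datasheet, a minimal sketch (using the same load_dataset pattern shown in the usage snippet below) is:

```python
from datasets import load_dataset

# Load only the red teaming subset and print the fields of its records.
red_team = load_dataset("Anthropic/hh-rlhf", data_dir="red-team-attempts")
print(red_team["train"].features)   # column names and types
print(red_team["train"][0].keys())  # keys of a single record
```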
Each of the above datasets is located in a separate sub-directory. To load an individual subset, use the data_dir argument of the load_dataset() function as follows:
```python
from datasets import load_dataset

# Load all helpfulness/harmlessness subsets (share the same schema)
dataset = load_dataset("Anthropic/hh-rlhf")

# Load one of the harmless subsets
dataset = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")

# Load the red teaming subset
dataset = load_dataset("Anthropic/hh-rlhf", data_dir="red-team-attempts")
```
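Each call returns a DatasetDict keyed by split (the preference subsets include the train/test splits described above). Continuing from the snippet, a quick way to inspect individual records:

```python
# `dataset` now holds the red teaming subset loaded last in the snippet above.
print(dataset["train"][0])  # one full red team transcript with its annotations

# The preference subsets expose the "chosen"/"rejected" texts directly:
harmless_base = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")
print(harmless_base["train"][0]["chosen"])
print(harmless_base["train"][0]["rejected"])
```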
The original authors host this dataset on GitHub here: https://github.com/anthropics/hh-rlhf
You can submit inquiries to: redteam@anthropic.com