## Summary

SHP is a dataset of **385K collective human preferences** over responses to questions/instructions in 18 different subject areas, from cooking to legal advice.
The preferences are meant to reflect the helpfulness of one response over another, and are intended to be used for training RLHF reward models and NLG evaluation models (e.g., [SteamSHP](https://huggingface.co/stanfordnlp/SteamSHP-flan-t5-xl)).

Each example is a Reddit post with a question/instruction and a pair of top-level comments for that post, where one comment is more preferred by Reddit users (collectively).
SHP exploits the fact that if comment A was written *after* comment B but has a higher score nonetheless, then A is ostensibly more preferred to B.
If A had been written before B, then we could not conclude this, since its higher score could have been the result of more visibility from being written first.
We chose data where the preference label is intended to reflect which response is more *helpful*, rather than which is less *harmful*, the latter being the focus of much past work.

How is SHP different from [Anthropic's HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf)?
Most notably, all the data in SHP is naturally occurring and human-written, whereas the responses in HH-RLHF are machine-written, giving us two very different distributions that can complement each other.
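The timestamp-based reasoning above can be sketched in code. This is only an illustration of the logic, not the actual dataset-construction script; the comment dicts and their `created`/`score` keys are made up for the example:

```python
from datetime import datetime

def infer_preference(a, b):
    """Return the ostensibly more preferred of two top-level comments on the
    same post, or None if no conclusion can be drawn.

    A comment written after another yet holding a higher score is more
    preferred; if the earlier comment has the higher score, nothing can be
    concluded, since its advantage may just be greater visibility.
    Comments are dicts with illustrative keys 'created' and 'score'.
    """
    later, earlier = (a, b) if a["created"] > b["created"] else (b, a)
    return later if later["score"] > earlier["score"] else None

a = {"created": datetime(2022, 1, 2), "score": 140}
b = {"created": datetime(2022, 1, 1), "score": 95}
assert infer_preference(a, b) is a  # later yet higher-scoring: preferred
assert infer_preference(b, a) is a  # argument order does not matter
```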

| HH-RLHF | 91K | Dialogue with LLM | Individual Human Preference | not labelled | Live Chat (Multi-turn) | up to 1.5K T5 tokens |

How is SHP different from other datasets that have scraped Reddit, like [ELI5](https://huggingface.co/datasets/eli5#source-data)?
SHP uses the timestamp information to infer preferences, while ELI5 only provides comments and scores -- the latter are not enough to infer preferences, since comments made earlier tend to get higher scores simply from having more visibility.
It also contains data from more domains:

| Dataset | Size | Comments + Scores | Preferences | Number of Domains |

## Dataset Design

### Domain Selection

The data is sourced from Reddit, which is a public forum organized into topic-specific fora called *subreddits*.
For example, the `askculinary` subreddit is where users ask cooking-related questions and are answered by other users.

SHP contains a train, validation, and test split for comments scraped from 18 different subreddits. We chose subreddits based on:

1. whether they were well-known (subscriber count >= 50K)
2. whether posts were expected to pose a question or instruction
3. whether responses were valued based on how *helpful* they were
4. whether comments had to be rooted in some objectivity, instead of being entirely about personal experiences (e.g., `askscience` vs. `AskAmericans`)

The train/validation/test splits were created by splitting the post IDs of a subreddit in 90%/5%/5% proportions respectively, so that no post would appear in multiple splits.
Since different posts have different numbers of comments, the number of preferences in each split is not exactly 90%/5%/5%:

| legaladvice | 21170 | 1106 | 1011 | 23287 |
| ALL | 348718 | 18436 | 18409 | 385563 |
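A post-level split like the one described above can be sketched as follows. The hash-based bucketing is an assumption for illustration (the actual assignment mechanism is not specified here), but it shows the key property: the split depends only on the post ID, never on individual preference pairs.

```python
import hashlib

def assign_split(post_id: str) -> str:
    """Deterministically bucket a post ID into train/validation/test in
    roughly 90%/5%/5% proportions. Because the assignment depends only on
    the post ID, every preference pair from a post lands in one split."""
    bucket = int(hashlib.sha256(post_id.encode()).hexdigest(), 16) % 100
    if bucket < 90:
        return "train"
    if bucket < 95:
        return "validation"
    return "test"

# The same post always maps to the same split:
assert assign_split("t3_abc123") == assign_split("t3_abc123")
```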

### Data Selection

The score of a post/comment is 1 plus the number of upvotes (approvals) it gets from users, minus the number of downvotes (disapprovals) it gets.
The value of a score is relative; in subreddits (posts) with more traffic, there will be more high-scoring posts (comments).
Within a post, comments posted earlier will tend to have a higher score simply due to having more exposure, which is why using timestamp information is essential when inferring preferences.
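The score arithmetic above is straightforward; as a one-function sketch:

```python
def score(upvotes: int, downvotes: int) -> int:
    """Score as defined above: 1, plus upvotes received,
    minus downvotes received."""
    return 1 + upvotes - downvotes

assert score(10, 3) == 8  # 1 + 10 - 3
assert score(0, 0) == 1   # a fresh post/comment starts at a score of 1
```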

Given a post P and two comments (A,B) we only included the preference A > B in the dataset if

1. A was written *no later than* B and A has a higher score than B.

## Biases and Limitations

### Biases

Although we filtered out posts with NSFW (over 18) content and chose subreddits that were well-moderated and had policies against harassment and bigotry, some of the data may still contain discriminatory or harmful language.
The data does not reflect the views of the dataset creators.
Reddit users on these subreddits are also not representative of the broader population.
Although subreddit-specific demographic information is not available, Reddit users overall are disproportionately male and from developed, Western, and English-speaking countries ([Pew Research](https://www.pewresearch.org/internet/2013/07/03/6-of-online-adults-are-reddit-users/)).
One should keep this in mind before using any models trained on this data.

### Limitations

The preference label in SHP is intended to reflect how *helpful* one response is relative to another, given an instruction/question.
However, the more preferred response is not necessarily the more factual one.
Though some comments do provide citations to justify their response, most do not.
There are exceptions, such as the `askhistorians` subreddit, which is heavily moderated and where answers are expected to provide citations.

## Contact