Daniel Paleka committed · Commit e333211 · Parent: 6b5ad6b · "Fix wording and grammar issues"

README.md CHANGED
## Dataset Description

**WildChat-2k-TypeTopic** is a manually curated subset of 1,880 real-world user prompts from the [WildChat dataset](https://huggingface.co/datasets/allenai/WildChat), featuring annotations for both **task type** (e.g. knowledge recall, problem solving, creative, lists) and **topic category** (e.g. personal assistance, math, ai, household).
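With two annotation axes, a natural first step is to cross-tabulate task type against topic category. A minimal sketch, assuming column names `task_type` and `topic_category` (the rows below are invented examples; check the actual dataset schema before relying on these names):

```python
from collections import Counter

# Hypothetical rows illustrating the two annotation axes. The real dataset
# would be loaded via `datasets.load_dataset`; the column names
# `task_type` and `topic_category` are assumptions for this sketch.
rows = [
    {"prompt": "When did the Berlin Wall fall?", "task_type": "knowledge recall", "topic_category": "history"},
    {"prompt": "Solve 2x + 3 = 9 for x.", "task_type": "problem solving", "topic_category": "math"},
    {"prompt": "Write a haiku about rain.", "task_type": "creative", "topic_category": "writing"},
    {"prompt": "List five budgeting apps.", "task_type": "lists", "topic_category": "personal assistance"},
]

# Cross-tabulate (task_type, topic_category) pairs.
pair_counts = Counter((r["task_type"], r["topic_category"]) for r in rows)
print(pair_counts.most_common(3))
```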
## Why this dataset?

Suppose you want to answer a research question such as "What kind of user prompt does the LLM like doing most?", "[What is the implicit utility function of the LLM](https://arxiv.org/abs/2502.08640) for answering different user prompts?", or "[What kind of user prompts do models bail on?](https://arxiv.org/abs/2509.04781)" The first step is to find a dataset of user prompts.
[WildChat-1M](https://arxiv.org/abs/2405.01470) is the most frequently used dataset of user prompts to LLMs. Unfortunately, anyone who has looked into it knows that it is full of nonsensical prompts, typos, non-English text, NSFW content, and other noise, and that the distribution of prompts users ask is very dense in some domains (e.g. creative writing) and very sparse in others.

WildChat-2k-TypeTopic is a curated subset of single-message user prompts, constructed as follows:
1. Filter out (using an LLM filter) prompts that:
   * are not in English
   * are more than 800 characters long
2. Deduplicate using `text-embedding-3-large` embeddings.
3. Classify into 16 task types and 25 topic categories, then subsample ~2000 tasks to preserve representation of all types and categories.
4. Manually review to remove anything problematic according to the described criteria.
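The embedding deduplication in step 2 can be sketched as a greedy cosine-similarity filter. This is a sketch under assumptions: the toy vectors stand in for `text-embedding-3-large` outputs, and the 0.9 threshold is illustrative, not the value used to build this dataset:

```python
import numpy as np

def dedup_by_embedding(prompts, embeddings, threshold=0.9):
    """Greedy dedup: keep a prompt only if its cosine similarity to every
    already-kept prompt is below `threshold`. The threshold value is an
    assumption; the dataset card does not state the one actually used."""
    emb = np.asarray(embeddings, dtype=float)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # unit vectors
    kept_idx = []
    for i in range(len(prompts)):
        # Cosine similarity of unit vectors is just the dot product.
        if all(emb[i] @ emb[j] < threshold for j in kept_idx):
            kept_idx.append(i)
    return [prompts[i] for i in kept_idx]

# Toy 2-D embeddings standing in for text-embedding-3-large outputs.
prompts = ["tell me a joke", "tell me a funny joke", "explain quicksort"]
vecs = [[1.0, 0.0], [0.99, 0.14], [0.0, 1.0]]
print(dedup_by_embedding(prompts, vecs))  # the near-duplicate joke is dropped
```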
WildChat-2k-TypeTopic may be useful for figuring out **what kinds of user tasks LLMs prefer doing**.
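The category-preserving subsampling in step 3 can be sketched as stratified sampling with a per-category cap; the cap value and grouping key below are illustrative assumptions, not the procedure actually used:

```python
import random
from collections import defaultdict

def stratified_subsample(rows, key, per_group, seed=0):
    """Subsample so every category keeps some representation.
    `per_group` caps each category; the actual per-category quota used
    for this dataset is not stated, so this is an illustrative choice."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for r in rows:
        groups[r[key]].append(r)
    sample = []
    for members in groups.values():
        k = min(per_group, len(members))
        sample.extend(rng.sample(members, k))
    return sample

# One over-represented topic and one rare topic.
rows = [{"id": i, "topic": t} for i, t in enumerate(["math"] * 50 + ["ai"] * 3)]
subset = stratified_subsample(rows, key="topic", per_group=5)
print({r["topic"] for r in subset})  # both topics survive the subsample
```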
### Key Features