Daniel Paleka committed
Commit c9cda00 · 1 Parent(s): 9a73e1f

Add WildChat-2k-TypeTopic dataset


Initial upload of curated dataset with 1,880 annotated prompts from WildChat, featuring task type and topic classifications.

Files changed (3)
  1. .gitignore +25 -0
  2. README.md +160 -0
  3. wildchat1880.jsonl +0 -0
.gitignore ADDED
@@ -0,0 +1,25 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python

# Virtual environments
venv/
env/
ENV/

# IDEs
.vscode/
.idea/
*.swp
*.swo

# OS
.DS_Store
Thumbs.db

# Project specific - keep only dataset and README
/*.py
/*.html
README.md ADDED
@@ -0,0 +1,160 @@
---
language:
- en
license: cc-by-4.0
size_categories:
- 1K<n<10K
pretty_name: WildChat-2k-TypeTopic
---

# WildChat-2k-TypeTopic

## Dataset Description

**WildChat-2k-TypeTopic** is a manually curated subset of 1,880 real-world user prompts from the [WildChat dataset](https://huggingface.co/datasets/allenai/WildChat), featuring dual-layer annotations for both **task type** (e.g., knowledge recall, problem solving, creative, lists) and **topic category** (e.g., personal assistance, math, ai, household).

[WildChat-1M](https://arxiv.org/abs/2405.01470) is the most frequently used dataset of user prompts to LLMs. Unfortunately, anyone who has looked into it knows it is full of nonsensical prompts, typos, non-English text, NSFW content, and other noise, and that the distribution of user prompts is very dense in some domains (e.g., creative writing) and very sparse in others.

WildChat-2k-TypeTopic is a curated subset of single-message user prompts, constructed as follows:

1. Filter out (using an LLM filter) prompts that:
   – are not in English
   – are not meaningful tasks (e.g., random character strings, “hello”)
   – are incomplete (e.g., “Fix this code” with no code provided)
   – are infeasible for text-only LLMs (e.g., “Describe a time when you worked in a team”, “an image of a cat”)
   – are clearly part of multi-turn conversations (e.g., text-based game setups)
   – are more than 800 characters long

2. Deduplicate using `text-embedding-3-large` embeddings.
3. Classify the prompts into 16 task types and 25 topic categories, and subsample 2,000 tasks to preserve representation of all types and categories.
4. Manually filter and reclassify to remove everything problematic according to the criteria above.
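The deduplication step can be sketched as a greedy nearest-neighbor filter over normalized embedding vectors. The function below is an illustrative assumption, not the actual pipeline code; the toy 2-D vectors and the 0.95 cosine threshold are made up (real embeddings would come from `text-embedding-3-large`):

```python
import numpy as np

def dedup_by_embedding(embeddings: np.ndarray, threshold: float = 0.95) -> list[int]:
    """Greedily keep each prompt whose embedding is not too close
    (cosine similarity >= threshold) to any already-kept prompt."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kept: list[int] = []
    for i, vec in enumerate(normed):
        if all(vec @ normed[j] < threshold for j in kept):
            kept.append(i)
    return kept

# toy example: the second vector is a near-duplicate of the first
emb = np.array([[1.0, 0.0], [0.99, 0.14], [0.0, 1.0]])
print(dedup_by_embedding(emb))  # [0, 2]
```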

The WildChat-2k-TypeTopic dataset may be useful for figuring out **what kind of user task LLMs prefer doing**.

### Key Features

- **1,880 annotated prompts** from real user interactions
- **15 task type categories** (e.g., creative, coding, explanation, problem_solving)
- **24 topic categories** (e.g., programming_other, creative_writing, personal_assistance)
- **Short prompts**: 12-800 characters (median: 116)
- **Quality filtered**: unlike raw WildChat, all entries are coherent English prompts

## Dataset Structure

### Data Format

The dataset is provided in JSONL format (newline-delimited JSON), with each entry containing:

```json
{
  "id": "wildchat2k_0003",
  "text": "I want to learn how to understand and speak spanish, can you use the pareto principle, which identifies 20% of the topic that will yield 80% of the desired results, to create a learning plan for me?",
  "type": "planning_design",
  "topic": "languages",
  "q_metadata": {},
  "makes_sense": true,
  "is_english": true
}
```

### Fields

- **id** (string): Unique identifier
- **text** (string): The user prompt/query
- **type** (string): Task classification (15 categories)
- **topic** (string): Subject matter classification (24 categories)
- **q_metadata** (object): Additional metadata (reserved for future use)
- **makes_sense** (boolean): Quality indicator
- **is_english** (boolean): Language indicator

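Because the file is plain JSONL, it can also be read without the `datasets` library, e.g. with pandas. The two inline records below are illustrative stand-ins for real rows so the snippet is self-contained:

```python
import io
import pandas as pd

# two records mirroring the schema above (values are made up for illustration)
jsonl = """\
{"id": "x_0001", "text": "plan a trip", "type": "planning_design", "topic": "personal_assistance", "q_metadata": {}, "makes_sense": true, "is_english": true}
{"id": "x_0002", "text": "write a haiku", "type": "creative", "topic": "creative_writing", "q_metadata": {}, "makes_sense": true, "is_english": true}
"""

# for the real file: df = pd.read_json("wildchat1880.jsonl", lines=True)
df = pd.read_json(io.StringIO(jsonl), lines=True)
print(df[["id", "type", "topic"]])
```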
### Task Types (15 categories)

| Task Type | Count | % | Description |
|-----------|-------|---|-------------|
| knowledge_recall | 351 | 18.67% | Factual questions and information retrieval |
| creative | 320 | 17.02% | Creative writing and content generation |
| explanation | 305 | 16.22% | Requests for explanations and teaching |
| problem_solving | 123 | 6.54% | Mathematical and logical problems |
| lists | 123 | 6.54% | List generation tasks |
| rewriting | 115 | 6.12% | Text rewriting and paraphrasing |
| coding | 93 | 4.95% | Programming and code generation |
| analysis | 86 | 4.57% | Analytical tasks |
| messaging | 84 | 4.47% | Email and message writing |
| planning_design | 68 | 3.62% | Planning and design tasks |
| translation | 52 | 2.77% | Translation requests |
| summarization | 51 | 2.71% | Summary generation |
| roleplay | 44 | 2.34% | Roleplay and character simulation |
| decision_making | 37 | 1.97% | Decision support tasks |
| evaluation | 28 | 1.49% | Evaluation and assessment |

### Topic Categories (24 categories)

| Topic | Count | % | Description |
|-------|-------|---|-------------|
| personal_assistance | 227 | 12.07% | Personal productivity and communication |
| creative_writing | 196 | 10.43% | Fiction, stories, creative content |
| programming_other | 155 | 8.24% | Programming and software development |
| popular_culture | 137 | 7.29% | Entertainment, media, celebrities |
| technology_other | 106 | 5.64% | General technology topics |
| languages | 105 | 5.59% | Language learning and linguistics |
| gaming | 101 | 5.37% | Video games and gaming culture |
| math | 84 | 4.47% | Mathematics |
| medicine_fitness | 80 | 4.26% | Health and fitness |
| philosophy_religion | 72 | 3.83% | Philosophy and religious topics |
| household | 57 | 3.03% | Household and domestic topics |
| science_other | 55 | 2.93% | General science topics |
| politics_events | 54 | 2.87% | Politics and current events |
| literature | 53 | 2.82% | Literary works and analysis |
| history | 53 | 2.82% | Historical topics |
| ai | 48 | 2.55% | Artificial intelligence topics |
| geography | 43 | 2.29% | Geography and locations |
| humanities_other | 41 | 2.18% | Other humanities topics |
| biology | 39 | 2.07% | Biological sciences |
| hardware | 38 | 2.02% | Computer hardware and electronics |
| sports | 37 | 1.97% | Sports and athletics |
| cybersecurity | 36 | 1.91% | Cybersecurity and information security |
| physics | 34 | 1.81% | Physics |
| chemistry | 29 | 1.54% | Chemistry |

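The percentage columns in the tables above are simple normalized counts. With the dataset loaded into a DataFrame, they can be reproduced along these lines; the three toy rows below are illustrative stand-ins for the full 1,880 records:

```python
import pandas as pd

# toy rows standing in for the dataset (types/topics taken from the tables above)
rows = [
    {"type": "knowledge_recall", "topic": "history"},
    {"type": "creative", "topic": "creative_writing"},
    {"type": "creative", "topic": "popular_culture"},
]
df = pd.DataFrame(rows)

# counts and percentages per task type, as in the Task Types table
counts = df["type"].value_counts()
pct = (100 * counts / len(df)).round(2)
print(counts.to_dict(), pct.to_dict())
```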
## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("dpaleka/wildchat-2k-typetopic")

# Access the data
for item in dataset['train']:
    print(f"Type: {item['type']}, Topic: {item['topic']}")
    print(f"Text: {item['text']}\n")
```


## Citation

If you use this dataset, please cite the original WildChat paper:

```bibtex
@inproceedings{zhao2024wildchat,
  title={WildChat: 1M ChatGPT Interaction Logs in the Wild},
  author={Zhao, Wenting and Ren, Xiang and Hessel, Jack and Cardie, Claire and Choi, Yejin and Deng, Yuntian},
  booktitle={ICLR},
  year={2024}
}
```

For this specific annotated subset:

```bibtex
@dataset{wildchat2k_typetopic,
  title={WildChat-2k-TypeTopic: Curated Subset of Single-Message User Prompts},
  author={Paleka, Daniel},
  year={2025},
  publisher={HuggingFace},
  url={https://huggingface.co/datasets/dpaleka/wildchat-2k-typetopic}
}
```
wildchat1880.jsonl ADDED
The diff for this file is too large to render. See raw diff