---
language:
- en
license: cc-by-4.0
size_categories:
- 1K<n<10K
pretty_name: WildChat-2k-TypeTopic
---
# WildChat-2k-TypeTopic: The Manually Curated Edition
## Dataset Description
**WildChat-2k-TypeTopic** is a manually curated subset of 1,880 real-world user prompts from the [WildChat dataset](https://huggingface.co/datasets/allenai/WildChat), featuring annotations for both **task type** (e.g., knowledge recall, problem solving, creative, lists) and **topic category** (e.g., personal assistance, math, ai, household).
## Why this dataset?
Suppose you want to answer a research question such as "What kind of user prompt does the LLM like doing most?" or "[What is the implicit utility function of the LLM](https://arxiv.org/abs/2502.08640) for answering different user prompts?" or "[What kind of user prompts do models bail on](https://arxiv.org/abs/2509.04781)"? The first step is to find a dataset of user prompts.
[WildChat-1M](https://arxiv.org/abs/2405.01470) is the most frequently used dataset of user prompts to LLMs. Unfortunately, anyone who has looked into it knows it is full of nonsensical prompts, typos, non-English text, NSFW content, and other noise, and that the distribution of user prompts is very dense in some domains (e.g., creative writing) and very sparse in others.
WildChat-2k-TypeTopic is a curated subset of single-message user prompts, constructed as follows:
1. Filter out (using an LLM filter) prompts that:
* are not in English
* are not meaningful tasks (e.g., random character strings, “hello”)
* are incomplete (e.g., “Fix this code” with no code provided)
* are infeasible for text-only LLMs (e.g., “Describe a time when you worked in a team”, “an image of a cat”)
* are clearly part of multi-turn conversations (e.g., text-based game setups)
* are more than 800 characters long
2. Deduplicate using `text-embedding-3-large` embeddings.
3. Classify into 16 task types and 25 topic categories, then subsample ~2000 tasks to preserve representation of all types and categories.
4. Manually review and remove anything problematic according to the criteria above.
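Step 2 above can be sketched as greedy near-duplicate removal over prompt embeddings. The similarity threshold (0.9 here) is a hypothetical choice, and the toy 3-dimensional vectors stand in for real `text-embedding-3-large` embeddings (3072 dimensions):

```python
import numpy as np

def dedup_by_embedding(embs: np.ndarray, threshold: float = 0.9) -> list[int]:
    """Greedy near-duplicate removal: keep a prompt only if its cosine
    similarity to every already-kept prompt is below `threshold`."""
    # Normalize rows so dot products equal cosine similarities.
    normed = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    kept: list[int] = []
    for i, v in enumerate(normed):
        if all(v @ normed[j] < threshold for j in kept):
            kept.append(i)
    return kept

# Toy vectors standing in for real embeddings.
embs = np.array([
    [1.0, 0.0, 0.0],
    [0.99, 0.1, 0.0],   # near-duplicate of the first
    [0.0, 1.0, 0.0],
])
print(dedup_by_embedding(embs))  # -> [0, 2]
```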
### Key Features
- **1,880 annotated prompts** from real user interactions
- **15 task type categories** (e.g., creative, coding, explanation, problem_solving)
- **24 topic categories** (e.g., programming_other, creative_writing, personal_assistance)
- **Short prompts**: 12-800 characters (median: 116)
- **Quality filtered**: All entries are coherent English prompts, unlike the raw WildChat data
## Dataset Structure
### Data Format
The dataset is provided in JSONL format (newline-delimited JSON), with each entry containing:
```json
{
  "id": "wildchat2k_0003",
  "text": "I want to learn how to understand and speak spanish, can you use the pareto principle, which identifies 20% of the topic that will yield 80% of the desired results, to create a learning plan for me?",
  "type": "planning_design",
  "topic": "languages"
}
```
### Fields
- **id** (string): Unique identifier
- **text** (string): The user prompt/query
- **type** (string): Task classification (15 categories)
- **topic** (string): Subject matter classification (24 categories)
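Since each record is one flat JSON object per line, the file also loads directly with pandas. The rows below are illustrative stand-ins (only the first `id` appears in the example above; the second is made up):

```python
import io

import pandas as pd

# Two rows in the dataset's JSONL layout (texts shortened for illustration).
jsonl = io.StringIO(
    '{"id": "wildchat2k_0003", "text": "Pareto-principle Spanish learning plan", '
    '"type": "planning_design", "topic": "languages"}\n'
    '{"id": "wildchat2k_0042", "text": "fix my sorting function", '
    '"type": "coding", "topic": "programming_other"}\n'
)
df = pd.read_json(jsonl, lines=True)
print(list(df.columns))      # ['id', 'text', 'type', 'topic']
print(df["topic"].tolist())  # ['languages', 'programming_other']
```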
### Task Types (15 categories)
| Task Type | Count | % | Description |
|-----------|-------|---|-------------|
| knowledge_recall | 351 | 18.67% | Factual questions and information retrieval |
| creative | 320 | 17.02% | Creative writing and content generation |
| explanation | 305 | 16.22% | Requests for explanations and teaching |
| problem_solving | 123 | 6.54% | Mathematical and logical problems |
| lists | 123 | 6.54% | List generation tasks |
| rewriting | 115 | 6.12% | Text rewriting and paraphrasing |
| coding | 93 | 4.95% | Programming and code generation |
| analysis | 86 | 4.57% | Analytical tasks |
| messaging | 84 | 4.47% | Email and message writing |
| planning_design | 68 | 3.62% | Planning and design tasks |
| translation | 52 | 2.77% | Translation requests |
| summarization | 51 | 2.71% | Summary generation |
| roleplay | 44 | 2.34% | Roleplay and character simulation |
| decision_making | 37 | 1.97% | Decision support tasks |
| evaluation | 28 | 1.49% | Evaluation and assessment |
### Topic Categories (24 categories)
| Topic | Count | % | Description |
|-------|-------|---|-------------|
| personal_assistance | 227 | 12.07% | Personal productivity and communication |
| creative_writing | 196 | 10.43% | Fiction, stories, creative content |
| programming_other | 155 | 8.24% | Programming and software development |
| popular_culture | 137 | 7.29% | Entertainment, media, celebrities |
| technology_other | 106 | 5.64% | General technology topics |
| languages | 105 | 5.59% | Language learning and linguistics |
| gaming | 101 | 5.37% | Video games and gaming culture |
| math | 84 | 4.47% | Mathematics |
| medicine_fitness | 80 | 4.26% | Health and fitness |
| philosophy_religion | 72 | 3.83% | Philosophy and religious topics |
| household | 57 | 3.03% | Household and domestic topics |
| science_other | 55 | 2.93% | General science topics |
| politics_events | 54 | 2.87% | Politics and current events |
| literature | 53 | 2.82% | Literary works and analysis |
| history | 53 | 2.82% | Historical topics |
| ai | 48 | 2.55% | Artificial intelligence topics |
| geography | 43 | 2.29% | Geography and locations |
| humanities_other | 41 | 2.18% | Other humanities topics |
| biology | 39 | 2.07% | Biological sciences |
| hardware | 38 | 2.02% | Computer hardware and electronics |
| sports | 37 | 1.97% | Sports and athletics |
| cybersecurity | 36 | 1.91% | Cybersecurity and information security |
| physics | 34 | 1.81% | Physics |
| chemistry | 29 | 1.54% | Chemistry |
## Usage
### Loading the Dataset
```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("dpaleka/wildchat-2k-typetopic")

# Iterate over the annotated prompts
for item in dataset["train"]:
    print(f"Type: {item['type']}, Topic: {item['topic']}")
    print(f"Text: {item['text']}\n")
```
## Citation
If you use this dataset, please cite the original WildChat paper:
```bibtex
@inproceedings{zhao2024wildchat,
  title={WildChat: 1M Chat{GPT} Interaction Logs in the Wild},
  author={Zhao, Wenting and Ren, Xiang and Hessel, Jack and Cardie, Claire and Choi, Yejin and Deng, Yuntian},
  booktitle={The Twelfth International Conference on Learning Representations (ICLR)},
  year={2024}
}
```
For this specific annotated subset:
```bibtex
@dataset{wildchat2k_typetopic,
  title={WildChat-2k-TypeTopic: Curated Subset of Single-Message User Prompts},
  author={Paleka, Daniel},
  year={2025},
  publisher={HuggingFace},
  url={https://huggingface.co/datasets/dpaleka/wildchat-2k-typetopic}
}
```