---
viewer: false
tags: [uv-script, classification, vllm, structured-outputs, gpu-required]
---

# Dataset Classification with vLLM

Efficient text classification for Hugging Face datasets using vLLM with structured outputs. This script provides GPU-accelerated classification with guaranteed valid outputs through guided decoding.

## πŸš€ Quick Start

```bash
# Classify IMDB reviews
uv run classify-dataset.py \
  --input-dataset stanfordnlp/imdb \
  --column text \
  --labels "positive,negative" \
  --output-dataset user/imdb-classified
```

That's it! No installation, no setup - just `uv run`.

## πŸ“‹ Requirements

- **GPU Required**: This script uses vLLM for efficient inference
- Python 3.10+
- UV (will handle all dependencies automatically)
- vLLM >= 0.6.6 (for guided decoding support)
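UV resolves dependencies from inline script metadata (PEP 723) embedded at the top of the script. An illustrative header (the exact pins live in the script itself) looks like:

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "vllm>=0.6.6",       # guided decoding support
#     "datasets",          # Hub dataset loading/saving
#     "huggingface-hub",   # authentication and uploads
# ]
# ///
```

This is why no manual `pip install` is needed: `uv run` reads the block and builds an isolated environment on the fly.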

## 🎯 Features

- **Guaranteed valid outputs** using vLLM's guided decoding with outlines
- **Zero-shot classification** with structured generation
- **GPU-optimized** with vLLM's automatic batching for maximum efficiency
- **Default model**: HuggingFaceTB/SmolLM3-3B (fast 3B model, easily changeable)
- **Robust text handling** with preprocessing and validation
- **Three prompt styles** for different use cases
- **Automatic progress tracking** and detailed statistics
- **Direct Hub integration** - read and write datasets seamlessly

## πŸ’» Usage

### Basic Classification

```bash
uv run classify-dataset.py \
  --input-dataset <dataset-id> \
  --column <text-column> \
  --labels <comma-separated-labels> \
  --output-dataset <output-id>
```

### Arguments

**Required:**

- `--input-dataset`: Hugging Face dataset ID (e.g., `stanfordnlp/imdb`, `user/my-dataset`)
- `--column`: Name of the text column to classify
- `--labels`: Comma-separated classification labels (e.g., `"spam,ham"`)
- `--output-dataset`: Where to save the classified dataset

**Optional:**

- `--model`: Model to use (default: **`HuggingFaceTB/SmolLM3-3B`** - a fast 3B parameter model)
- `--prompt-style`: Choose from `simple`, `detailed`, or `reasoning` (default: `simple`)
- `--split`: Dataset split to process (default: `train`)
- `--max-samples`: Limit samples for testing
- `--temperature`: Generation temperature (default: 0.1)
- `--guided-backend`: Backend for guided decoding (default: `outlines`)
- `--hf-token`: Hugging Face token (or use `HF_TOKEN` env var)

### Prompt Styles

- **simple**: Direct classification prompt
- **detailed**: Emphasizes exact category matching
- **reasoning**: Includes brief analysis before classification

All styles benefit from structured output guarantees - the model can only output valid labels!
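As a rough illustration, the three styles correspond to templates along these lines. The wording below is hypothetical (not the script's exact text), and `build_prompt` is an illustrative helper:

```python
# Hypothetical prompt templates for the three styles; the script's
# actual wording may differ.
PROMPTS = {
    "simple": "Classify the following text as one of: {labels}.\n\nText: {text}\n\nLabel:",
    "detailed": (
        "Classify the text into exactly one of these categories: {labels}. "
        "Respond with the category name only, matched exactly.\n\n"
        "Text: {text}\n\nLabel:"
    ),
    "reasoning": (
        "Read the text, briefly note the key signals, then choose one of: "
        "{labels}.\n\nText: {text}\n\nAnalysis and label:"
    ),
}

def build_prompt(style: str, text: str, labels: list[str]) -> str:
    # Fill the chosen template with the comma-separated label set and the text.
    return PROMPTS[style].format(labels=", ".join(labels), text=text)
```

Because decoding is constrained, even the loosest template still yields a valid label; the style mainly affects accuracy, not validity.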

## πŸ“Š Examples

### Sentiment Analysis

```bash
uv run classify-dataset.py \
  --input-dataset stanfordnlp/imdb \
  --column text \
  --labels "positive,negative" \
  --output-dataset user/imdb-sentiment
```

### Support Ticket Classification

```bash
uv run classify-dataset.py \
  --input-dataset user/support-tickets \
  --column content \
  --labels "bug,feature_request,question,other" \
  --output-dataset user/tickets-classified \
  --prompt-style reasoning
```

### News Categorization

```bash
uv run classify-dataset.py \
  --input-dataset ag_news \
  --column text \
  --labels "world,sports,business,tech" \
  --output-dataset user/ag-news-categorized \
  --model meta-llama/Llama-3.2-3B-Instruct
```

## πŸš€ Running on HF Jobs

This script is optimized for [Hugging Face Jobs](https://huggingface.co/docs/hub/spaces-gpu-jobs) (requires Pro subscription or Team/Enterprise organization):

```bash
# Run on L4 GPU with vLLM image
hf jobs uv run \
  --flavor l4x1 \
  --image vllm/vllm-openai:latest \
  https://huggingface.co/datasets/uv-scripts/classification/raw/main/classify-dataset.py \
  --input-dataset stanfordnlp/imdb \
  --column text \
  --labels "positive,negative" \
  --output-dataset user/imdb-classified
```

### GPU Flavors

- `t4-small`: Budget option for smaller models
- `l4x1`: Good balance for 7B models
- `a10g-small`: Fast inference for 3B models
- `a10g-large`: More memory for larger models
- `a100-large`: Maximum performance

## πŸ”§ Advanced Usage

### Using Different Models

By default, this script uses **HuggingFaceTB/SmolLM3-3B** - a fast, efficient 3B parameter model that's perfect for most classification tasks. You can easily use any other instruction-tuned model:

```bash
# Larger model for complex classification
uv run classify-dataset.py \
  --input-dataset user/legal-docs \
  --column text \
  --labels "contract,patent,brief,memo,other" \
  --output-dataset user/legal-classified \
  --model Qwen/Qwen2.5-7B-Instruct
```

### Large Datasets

vLLM automatically handles batching for optimal performance. For very large datasets, it will process efficiently without manual intervention:

```bash
uv run classify-dataset.py \
  --input-dataset user/huge-dataset \
  --column text \
  --labels "A,B,C" \
  --output-dataset user/huge-classified
```

## πŸ“ˆ Performance

- **SmolLM3-3B (default)**: ~50-100 texts/second on A10
- **7B models**: ~20-50 texts/second on A10
- vLLM automatically optimizes batching for best throughput

## 🀝 How It Works

1. **vLLM**: Provides efficient GPU batch inference
2. **Guided Decoding**: Uses outlines to guarantee valid label outputs
3. **Structured Generation**: Constrains model outputs to exact label choices
4. **UV**: Handles all dependencies automatically

The script loads your dataset, preprocesses texts, classifies each one using guided decoding to ensure only valid labels are generated, then saves the results as a new column in the output dataset.
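The validity guarantee comes from masking the decoder so that, at every step, only continuations that lead to one of your labels are allowed. Here is a character-level toy version of that idea (not vLLM's actual implementation, which applies the mask at the token level via outlines-compiled automata; it also assumes no label is a prefix of another):

```python
def allowed_next_chars(prefix: str, labels: list[str]) -> set[str]:
    # Characters that extend `prefix` toward at least one valid label.
    return {lab[len(prefix)] for lab in labels
            if lab.startswith(prefix) and len(lab) > len(prefix)}

def constrained_decode(score, labels: list[str]) -> str:
    # Greedy decoding masked to label prefixes only.
    # `score(prefix, ch)` stands in for the model's next-token logits.
    out = ""
    while out not in labels:
        out += max(allowed_next_chars(out, labels), key=lambda c: score(out, c))
    return out
```

Whatever scores the model produces, the result is always one of the labels - an ill-behaved model can pick the *wrong* label, but never an *invalid* one.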

## πŸ› Troubleshooting

### CUDA Not Available

This script requires a GPU. Run it on:

- A machine with NVIDIA GPU
- HF Jobs (recommended)
- Cloud GPU instances

### Out of Memory

- Use a smaller model
- Use a larger GPU (e.g., a100-large)

### Invalid/Skipped Texts

- Texts shorter than 3 characters are skipped
- Empty or None values are marked as invalid
- Very long texts are truncated to 4000 characters
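These rules amount to a small validation helper, roughly like the sketch below (consistent with the behaviour described above; the script's actual function and constant names may differ):

```python
MAX_CHARS = 4000  # truncation limit described above
MIN_CHARS = 3     # anything shorter is skipped

def preprocess(text):
    # Returns a cleaned string, or None for invalid/skipped texts.
    if text is None:
        return None
    text = text.strip()
    if len(text) < MIN_CHARS:
        return None
    return text[:MAX_CHARS]
```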

### Classification Quality

- With guided decoding, outputs are guaranteed to be valid labels
- For better results, use clear and distinct label names
- Try the `reasoning` prompt style for complex classifications
- Use a larger model for nuanced tasks

### vLLM Version Issues

If you see `ImportError: cannot import name 'GuidedDecodingParams'`:

- Your vLLM version is too old (requires >= 0.6.6)
- The script specifies the correct version in its dependencies
- UV should automatically install the correct version

## πŸ”¬ Advanced Example: ArXiv ML Trends Analysis

For a more complex real-world example, we provide scripts to analyze ML research trends from ArXiv papers:

### Step 1: Prepare the Dataset

```bash
# Filter and prepare ArXiv CS papers from 2024
uv run prepare_arxiv_2024.py
```

This creates a filtered dataset of CS papers with combined title+abstract text.

### Step 2: Run Classification with Python API

```bash
# Use HF Jobs Python API to classify papers
uv run run_arxiv_classification.py
```

This script demonstrates:

- Using `run_uv_job()` from the Python API
- Classifying into modern ML trends (reasoning, agents, multimodal, robotics, etc.)
- Handling authentication and job monitoring

The classification categories include:

- `reasoning_systems`: Chain-of-thought, reasoning, problem solving
- `agents_autonomous`: Agents, tool use, autonomous systems
- `multimodal_models`: Vision-language, audio, multi-modal
- `robotics_embodied`: Robotics, embodied AI, manipulation
- `efficient_inference`: Quantization, distillation, edge deployment
- `alignment_safety`: RLHF, alignment, safety, interpretability
- `generative_models`: Diffusion, generation, synthesis
- `foundational_other`: Other foundational ML/AI research

## πŸ“ License

This script is provided as-is for use with the UV Scripts organization.