---
license: mit
task_categories:
- zero-shot-classification
- text-classification
tags:
- uv-script
- classification
- zero-shot
- structured-outputs
---

# Hugging Face Dataset Classification With Sieves

GPU-accelerated text classification for Hugging Face datasets, with guaranteed valid outputs via structured generation using [Sieves](https://github.com/MantisAI/sieves/), [Outlines](https://github.com/dottxt-ai/outlines), and Hugging Face zero-shot pipelines.

This is a modified version of https://huggingface.co/datasets/uv-scripts/classification.

## 🚀 Quick Start

```bash
# Classify IMDB reviews
uv run examples/create_classification_dataset.py \
  --input-dataset stanfordnlp/imdb \
  --column text \
  --labels "positive,negative" \
  --model HuggingFaceTB/SmolLM-360M-Instruct \
  --output-dataset user/imdb-classified
```

That's it! No installation, no setup - just `uv run`.

## 📋 Requirements

- **GPU recommended**: uses GPU-accelerated inference (a CPU fallback is available but slow)
- Python 3.12+
- `uv` (handles all dependencies automatically)

**Python package dependencies** (installed automatically by `uv`):

- `sieves` with engines support (>= 0.17.4)
- `typer` (>= 0.12)
- `datasets`
- `huggingface-hub`

## 🎯 Features

- **Guaranteed valid outputs** via structured generation with Outlines guided decoding
- **Zero-shot classification** with no training data required
- **GPU-optimized** for maximum throughput
- **Multi-label support** for documents with multiple applicable labels
- **Flexible model selection**: works with any instruction-tuned transformer model
- **Robust text handling** with preprocessing and validation
- **Automatic progress tracking** and detailed statistics
- **Direct Hub integration**: read and write datasets seamlessly
- **Label descriptions** for providing context that improves accuracy
- **Optimized batching** via Sieves' automatic batch processing
- **Multiple backends**: `outlines` guided decoding for any general-purpose language model on the Hub, plus fast Hugging Face zero-shot classification pipelines

## 💻 Usage

### Basic Classification

```bash
uv run examples/create_classification_dataset.py \
  --input-dataset <dataset-id> \
  --column <text-column> \
  --labels <comma-separated-labels> \
  --model <model-id> \
  --output-dataset <output-id>
```

### Arguments

**Required:**

- `--input-dataset`: Hugging Face dataset ID (e.g., `stanfordnlp/imdb`, `user/my-dataset`)
- `--column`: Name of the text column to classify
- `--labels`: Comma-separated classification labels (e.g., `"spam,ham"`)
- `--model`: Model to use (e.g., `HuggingFaceTB/SmolLM-360M-Instruct`)
- `--output-dataset`: Where to save the classified dataset

**Optional:**

- `--label-descriptions`: Descriptions for each label to improve classification accuracy
- `--multi-label`: Enable multi-label classification mode (creates multi-hot encoded labels)
- `--split`: Dataset split to process (default: `train`)
- `--max-samples`: Limit the number of samples (useful for testing)
- `--shuffle`: Shuffle the dataset before selecting samples (useful for random sampling)
- `--shuffle-seed`: Random seed for shuffling
- `--batch-size`: Batch size for inference (default: 64)
- `--max-tokens`: Maximum tokens to generate per sample (default: 200)
- `--hf-token`: Hugging Face token (or use the `HF_TOKEN` env var)

### Label Descriptions

Provide context for your labels to improve classification accuracy:

```bash
uv run examples/create_classification_dataset.py \
  --input-dataset user/support-tickets \
  --column content \
  --labels "bug,feature,question,other" \
  --label-descriptions "bug:something is broken,feature:request for new functionality,question:asking for help,other:anything else" \
  --model HuggingFaceTB/SmolLM-360M-Instruct \
  --output-dataset user/tickets-classified
```

The model uses these descriptions to better understand what each label represents, leading to more accurate classifications.

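The format is a comma-separated list of `label:description` pairs. As an illustration only (the `parse_label_descriptions` helper below is hypothetical, not the script's actual code), such a string can be parsed like this; note the sketch assumes descriptions contain no literal commas, as in the examples here:

```python
def parse_label_descriptions(spec: str) -> dict[str, str]:
    """Split a "label:description,label:description" string into a dict.

    Illustrative sketch; assumes descriptions contain no literal commas.
    """
    result = {}
    for pair in spec.split(","):
        # partition() splits on the first ":" only, so descriptions may contain colons
        label, _, description = pair.partition(":")
        result[label.strip()] = description.strip()
    return result

spec = "bug:something is broken,feature:request for new functionality"
print(parse_label_descriptions(spec))
```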
### Multi-Label Classification

Enable multi-label mode for documents that can have multiple applicable labels:

```bash
uv run examples/create_classification_dataset.py \
  --input-dataset ag_news \
  --column text \
  --labels "world,sports,business,science" \
  --multi-label \
  --model HuggingFaceTB/SmolLM-360M-Instruct \
  --output-dataset user/ag-news-multilabel
```

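Multi-hot encoding represents each document's label set as a 0/1 vector in the same order as `--labels`. A minimal sketch of the idea (helper names are illustrative, not the script's API):

```python
def to_multi_hot(predicted: set[str], labels: list[str]) -> list[int]:
    """Encode a set of predicted labels as a 0/1 vector ordered like labels."""
    return [1 if label in predicted else 0 for label in labels]

def from_multi_hot(vector: list[int], labels: list[str]) -> list[str]:
    """Decode a multi-hot vector back into label names."""
    return [label for label, bit in zip(labels, vector) if bit]

labels = ["world", "sports", "business", "science"]
vec = to_multi_hot({"sports", "business"}, labels)
print(vec)                          # [0, 1, 1, 0]
print(from_multi_hot(vec, labels))  # ['sports', 'business']
```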
## 📊 Examples

### Sentiment Analysis

```bash
uv run examples/create_classification_dataset.py \
  --input-dataset stanfordnlp/imdb \
  --column text \
  --labels "positive,ambivalent,negative" \
  --model HuggingFaceTB/SmolLM-360M-Instruct \
  --output-dataset user/imdb-sentiment
```

### Support Ticket Classification

```bash
uv run examples/create_classification_dataset.py \
  --input-dataset user/support-tickets \
  --column content \
  --labels "bug,feature_request,question,other" \
  --label-descriptions "bug:code or product not working as expected,feature_request:asking for new functionality,question:seeking help or clarification,other:general comments or feedback" \
  --model HuggingFaceTB/SmolLM-360M-Instruct \
  --output-dataset user/tickets-classified
```

### News Categorization

```bash
uv run examples/create_classification_dataset.py \
  --input-dataset ag_news \
  --column text \
  --labels "world,sports,business,tech" \
  --model HuggingFaceTB/SmolLM-1.7B-Instruct \
  --output-dataset user/ag-news-categorized
```

### Multi-Label News Classification

```bash
uv run examples/create_classification_dataset.py \
  --input-dataset ag_news \
  --column text \
  --labels "world,sports,business,tech" \
  --multi-label \
  --label-descriptions "world:global and international events,sports:sports and athletics,business:business and finance,tech:technology and innovation" \
  --model HuggingFaceTB/SmolLM-1.7B-Instruct \
  --output-dataset user/ag-news-multilabel
```

This combines label descriptions with multi-label mode for comprehensive categorization of news articles.

### ArXiv ML Research Classification

Classify academic papers into machine learning research areas:

```bash
# Fast classification with random sampling
uv run examples/create_classification_dataset.py \
  --input-dataset librarian-bots/arxiv-metadata-snapshot \
  --column abstract \
  --labels "llm,computer_vision,reinforcement_learning,optimization,theory,other" \
  --label-descriptions "llm:language models and NLP,computer_vision:image and video processing,reinforcement_learning:RL and decision making,optimization:training and efficiency,theory:theoretical ML foundations,other:other ML topics" \
  --model HuggingFaceTB/SmolLM-360M-Instruct \
  --output-dataset user/arxiv-ml-classified \
  --split "train" \
  --max-samples 100 \
  --shuffle

# Multi-label for nuanced classification
uv run examples/create_classification_dataset.py \
  --input-dataset librarian-bots/arxiv-metadata-snapshot \
  --column abstract \
  --labels "multimodal,agents,reasoning,safety,efficiency" \
  --label-descriptions "multimodal:vision-language and cross-modal models,agents:autonomous agents and tool use,reasoning:reasoning and planning systems,safety:alignment and safety research,efficiency:model optimization and deployment" \
  --multi-label \
  --model HuggingFaceTB/SmolLM-360M-Instruct \
  --output-dataset user/arxiv-frontier-research \
  --split "train[:1000]" \
  --max-samples 50
```

Multi-label mode is particularly valuable for academic abstracts, where papers often span multiple topics and require careful analysis to determine all relevant research areas.

## 🚀 Running Locally vs Cloud

This script is optimized to run locally on GPU-equipped machines:

```bash
# Local execution with your GPU
uv run examples/create_classification_dataset.py \
  --input-dataset stanfordnlp/imdb \
  --column text \
  --labels "positive,negative" \
  --model HuggingFaceTB/SmolLM-360M-Instruct \
  --output-dataset user/imdb-classified
```

For cloud deployment, you can use Hugging Face Spaces or other GPU services by adapting the command to your environment.

## 🔧 Advanced Usage

### Random Sampling

When working with ordered datasets, use `--shuffle` with `--max-samples` to get a representative sample:

```bash
# Get 50 random reviews instead of the first 50
uv run examples/create_classification_dataset.py \
  --input-dataset stanfordnlp/imdb \
  --column text \
  --labels "positive,negative" \
  --model HuggingFaceTB/SmolLM-360M-Instruct \
  --output-dataset user/imdb-sample \
  --max-samples 50 \
  --shuffle \
  --shuffle-seed 123  # For reproducibility
```

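Seeded shuffling makes the sample reproducible: the same seed always yields the same rows. Conceptually it amounts to the following sketch (not the script's internals):

```python
import random

def sample_rows(rows: list, k: int, seed: int) -> list:
    """Shuffle a copy of rows with a fixed seed, then take the first k."""
    shuffled = rows[:]  # copy, so the original order is untouched
    random.Random(seed).shuffle(shuffled)
    return shuffled[:k]

rows = list(range(100))
# Same seed -> identical sample on every run
assert sample_rows(rows, 5, seed=123) == sample_rows(rows, 5, seed=123)
```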
### Using Different Models

The script works with any instruction-tuned model. Here are some recommended options:

```bash
# Lightweight model for fast classification
uv run examples/create_classification_dataset.py \
  --input-dataset user/my-dataset \
  --column text \
  --labels "A,B,C" \
  --model HuggingFaceTB/SmolLM-360M-Instruct \
  --output-dataset user/classified

# Larger model for complex classification
uv run examples/create_classification_dataset.py \
  --input-dataset user/legal-docs \
  --column text \
  --labels "contract,patent,brief,memo,other" \
  --model HuggingFaceTB/SmolLM3-3B-Instruct \
  --output-dataset user/legal-classified

# Specialized zero-shot classifier
uv run examples/create_classification_dataset.py \
  --input-dataset user/my-dataset \
  --column text \
  --labels "A,B,C" \
  --model MoritzLaurer/deberta-v3-large-zeroshot-v2.0 \
  --output-dataset user/classified
```

### Large Datasets

Increase `--batch-size` for higher throughput on large datasets (at the cost of more GPU memory):

```bash
uv run examples/create_classification_dataset.py \
  --input-dataset user/huge-dataset \
  --column text \
  --labels "A,B,C" \
  --model HuggingFaceTB/SmolLM-360M-Instruct \
  --output-dataset user/huge-classified \
  --batch-size 128
```

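Larger batches amortize per-call overhead; the chunking itself is simple. A sketch of how a dataset can be split into batches (illustrative, not the script's internals):

```python
from itertools import islice

def batched(items, batch_size):
    """Yield successive lists of up to batch_size items each."""
    it = iter(items)
    # islice takes the next batch_size items; an empty list ends the loop
    while batch := list(islice(it, batch_size)):
        yield batch

sizes = [len(b) for b in batched(range(300), 128)]
print(sizes)  # [128, 128, 44]
```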
## 🤝 How It Works

1. **Sieves**: provides a zero-shot task pipeline system for structured NLP workflows
2. **Outlines**: provides guided decoding to guarantee valid label outputs
3. **uv**: handles all dependencies automatically

The script loads your dataset, preprocesses the texts, classifies each one with guaranteed valid outputs using Sieves' `Classification` task, then saves the results as a new column in the output dataset.

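That flow can be sketched in plain Python. Here `classify_fn` is a stand-in for the structured-generation call, and the `label` column name is illustrative, not the script's actual schema:

```python
def run_pipeline(rows, column, classify_fn):
    """Read the text column of each row, classify it, and attach the result
    as a new column. classify_fn stands in for Sieves' Classification task."""
    out = []
    for row in rows:
        row = dict(row)  # copy so the input rows are left untouched
        row["label"] = classify_fn(row[column])
        out.append(row)
    return out

rows = [{"text": "great movie"}, {"text": "awful plot"}]
toy_classifier = lambda text: "positive" if "great" in text else "negative"
print(run_pipeline(rows, "text", toy_classifier))
```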
## 🐛 Troubleshooting

### GPU Not Available

This script works best with a GPU but can run on CPU (much slower). To use a GPU:

- Run on a machine with an NVIDIA GPU
- Use cloud GPU instances (AWS, GCP, Azure, etc.)
- Use Hugging Face Spaces with GPU

### Out of Memory

- Use a smaller model (e.g., SmolLM-360M instead of 3B)
- Reduce `--batch-size` (try 32, 16, or 8)
- Reduce `--max-tokens` for shorter generations

### Invalid/Skipped Texts

- Texts shorter than 3 characters are skipped
- Empty or None values are marked as invalid
- Very long texts are truncated to 4000 characters

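These rules amount to a small preprocessing step; a sketch of the logic (constants mirror the limits above, the function name is illustrative):

```python
MIN_CHARS, MAX_CHARS = 3, 4000

def preprocess(text):
    """Return cleaned text, or None when the input is invalid or skipped."""
    if text is None or not text.strip():
        return None  # empty or None -> invalid
    text = text.strip()
    if len(text) < MIN_CHARS:
        return None  # shorter than 3 characters -> skipped
    return text[:MAX_CHARS]  # very long texts -> truncated to 4000 characters

print(preprocess("hi"))             # None (too short)
print(len(preprocess("x" * 5000)))  # 4000
```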
### Classification Quality

- With Outlines guided decoding, outputs are guaranteed to be valid labels
- For better results, use clear and distinct label names
- Try `--label-descriptions` to provide context
- Use a larger model for nuanced tasks
- In multi-label mode, adjust the confidence threshold (defaults to 0.5)

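In multi-label mode each label gets its own confidence score, and a label is kept when its score clears the threshold. A sketch of that selection step (hypothetical helper, with the default mirroring the 0.5 above):

```python
def select_labels(scores: dict[str, float], threshold: float = 0.5) -> list[str]:
    """Keep every label whose confidence reaches the threshold."""
    return [label for label, score in scores.items() if score >= threshold]

scores = {"world": 0.8, "sports": 0.3, "business": 0.55}
print(select_labels(scores))                 # ['world', 'business']
print(select_labels(scores, threshold=0.7))  # ['world']
```

Raising the threshold trades recall for precision: fewer labels per document, but higher confidence in each.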
### Authentication Issues

If you see authentication errors:

- Run `huggingface-cli login` to cache your token
- Or set `export HF_TOKEN=your_token_here`
- Verify your token has read/write permissions on the Hub

## 🔬 Advanced Workflows

### Full Pipeline Workflow

Start with small tests, then run on the full dataset:

```bash
# Step 1: Test with a small sample
uv run examples/create_classification_dataset.py \
  --input-dataset your-dataset \
  --column text \
  --labels "label1,label2,label3" \
  --model HuggingFaceTB/SmolLM-360M-Instruct \
  --output-dataset user/test-classification \
  --max-samples 100

# Step 2: If results look good, run on the full dataset
uv run examples/create_classification_dataset.py \
  --input-dataset your-dataset \
  --column text \
  --labels "label1,label2,label3" \
  --label-descriptions "label1:description,label2:description,label3:description" \
  --model HuggingFaceTB/SmolLM-360M-Instruct \
  --output-dataset user/final-classification \
  --batch-size 64
```

## 📝 License

This example is provided as part of the [Sieves](https://github.com/MantisAI/sieves/) project.