---
viewer: false
tags: [uv-script, classification, vllm, structured-outputs, gpu-required]
---

# Dataset Classification with vLLM

Efficient text classification for Hugging Face datasets using vLLM with structured outputs. This script provides GPU-accelerated classification with guaranteed valid outputs through guided decoding.

## Quick Start

```bash
# Classify IMDB reviews
uv run classify-dataset.py \
  --input-dataset stanfordnlp/imdb \
  --column text \
  --labels "positive,negative" \
  --output-dataset user/imdb-classified
```

That's it! No installation, no setup - just `uv run`.

## Requirements

- **GPU Required**: This script uses vLLM for efficient inference
- Python 3.10+
- uv (handles all dependencies automatically)
- vLLM >= 0.6.6 (for guided decoding support)

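uv resolves these dependencies from the script's inline metadata (PEP 723), which is why no manual installation is needed. A header along these lines at the top of `classify-dataset.py` is what makes that work (the exact dependency list shown here is an assumption, not the script's actual header):

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "vllm>=0.6.6",
#     "datasets",
#     "huggingface-hub",
# ]
# ///
```

When you run `uv run classify-dataset.py`, uv reads this block, creates an isolated environment with those packages, and then executes the script.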
## Features

- **Guaranteed valid outputs** using vLLM's guided decoding with outlines
- **Zero-shot classification** with structured generation
- **GPU-optimized** with vLLM's automatic batching for maximum efficiency
- **Robust text handling** with preprocessing and validation
- **Three prompt styles** for different use cases
- **Automatic progress tracking** and detailed statistics
- **Direct Hub integration** - read and write datasets seamlessly

## Usage

### Basic Classification

```bash
uv run classify-dataset.py \
  --input-dataset <dataset-id> \
  --column <text-column> \
  --labels <comma-separated-labels> \
  --output-dataset <output-id>
```

### Arguments

**Required:**
- `--input-dataset`: Hugging Face dataset ID (e.g., `stanfordnlp/imdb`, `user/my-dataset`)
- `--column`: Name of the text column to classify
- `--labels`: Comma-separated classification labels (e.g., `"spam,ham"`)
- `--output-dataset`: Where to save the classified dataset

**Optional:**
- `--model`: Model to use (default: `HuggingFaceTB/SmolLM3-3B`)
- `--prompt-style`: Choose from `simple`, `detailed`, or `reasoning` (default: `simple`)
- `--split`: Dataset split to process (default: `train`)
- `--max-samples`: Limit the number of samples (useful for testing)
- `--temperature`: Generation temperature (default: 0.1)
- `--guided-backend`: Backend for guided decoding (default: `outlines`)
- `--hf-token`: Hugging Face token (or set the `HF_TOKEN` environment variable)

### Prompt Styles

- **simple**: Direct classification prompt
- **detailed**: Emphasizes exact category matching
- **reasoning**: Includes a brief analysis before classification

All styles benefit from structured output guarantees - the model can only output valid labels!

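As an illustration, the three styles could be implemented as simple templates along these lines (a sketch; the actual wording in `classify-dataset.py` may differ):

```python
# Hypothetical prompt templates for the three --prompt-style options.
# {labels} and {text} are filled in per example.
PROMPT_TEMPLATES = {
    "simple": (
        "Classify the following text into one of these categories: {labels}.\n"
        "Text: {text}\n"
        "Category:"
    ),
    "detailed": (
        "You must answer with exactly one of these categories: {labels}.\n"
        "Do not output anything else.\n"
        "Text: {text}\n"
        "Category:"
    ),
    "reasoning": (
        "Read the text, briefly note the key evidence, then choose one of: {labels}.\n"
        "Text: {text}\n"
        "Analysis and final category:"
    ),
}

def build_prompt(style: str, text: str, labels: list[str]) -> str:
    """Render a classification prompt for the chosen style."""
    return PROMPT_TEMPLATES[style].format(labels=", ".join(labels), text=text)
```

Because guided decoding constrains the output either way, the style mainly affects how the model reasons before its (always valid) final label.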
## Examples

### Sentiment Analysis

```bash
uv run classify-dataset.py \
  --input-dataset stanfordnlp/imdb \
  --column text \
  --labels "positive,negative" \
  --output-dataset user/imdb-sentiment
```

### Support Ticket Classification

```bash
uv run classify-dataset.py \
  --input-dataset user/support-tickets \
  --column content \
  --labels "bug,feature_request,question,other" \
  --output-dataset user/tickets-classified \
  --prompt-style reasoning
```

### News Categorization

```bash
uv run classify-dataset.py \
  --input-dataset ag_news \
  --column text \
  --labels "world,sports,business,tech" \
  --output-dataset user/ag-news-categorized \
  --model meta-llama/Llama-3.2-3B-Instruct
```

## Running on HF Jobs

This script is optimized for [Hugging Face Jobs](https://huggingface.co/docs/hub/spaces-gpu-jobs) (requires a Pro subscription or a Team/Enterprise organization):

```bash
# Run on an L4 GPU with the vLLM image
hf jobs uv run \
  --flavor l4x1 \
  --image vllm/vllm-openai:latest \
  classify-dataset.py \
  --input-dataset stanfordnlp/imdb \
  --column text \
  --labels "positive,negative" \
  --output-dataset user/imdb-classified

# Run on an A10G GPU with a custom model
hf jobs uv run \
  --flavor a10g-large \
  --image vllm/vllm-openai:latest \
  classify-dataset.py \
  --input-dataset user/reviews \
  --column review_text \
  --labels "1,2,3,4,5" \
  --output-dataset user/reviews-rated \
  --model mistralai/Mistral-7B-Instruct-v0.3 \
  --prompt-style detailed
```

|
| | ### GPU Flavors |
| | - `t4-small`: Budget option for smaller models |
| | - `l4x1`: Good balance for 7B models |
| | - `a10g-small`: Fast inference for 3B models |
| | - `a10g-large`: More memory for larger models |
| | - `a100-large`: Maximum performance |
| |
|
## Advanced Usage

### Using Different Models

The default model is SmolLM3-3B, but you can use any instruction-tuned model:

```bash
# Larger model for complex classification
uv run classify-dataset.py \
  --input-dataset user/legal-docs \
  --column text \
  --labels "contract,patent,brief,memo,other" \
  --output-dataset user/legal-classified \
  --model Qwen/Qwen2.5-7B-Instruct
```

### Large Datasets

vLLM batches requests automatically for optimal throughput, so even very large datasets are processed efficiently without manual tuning:

```bash
uv run classify-dataset.py \
  --input-dataset user/huge-dataset \
  --column text \
  --labels "A,B,C" \
  --output-dataset user/huge-classified
```

## Performance

- **SmolLM3-3B**: ~50-100 texts/second on an A10
- **7B models**: ~20-50 texts/second on an A10
- vLLM automatically optimizes batching for best throughput

## How It Works

1. **vLLM**: Provides efficient GPU batch inference
2. **Guided decoding**: Uses outlines to guarantee valid label outputs
3. **Structured generation**: Constrains model outputs to the exact label choices
4. **uv**: Handles all dependencies automatically

The script loads your dataset, preprocesses the texts, classifies each one using guided decoding so that only valid labels can be generated, then saves the results as a new column in the output dataset.

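The core of this pipeline can be sketched as follows. This is a minimal illustration of vLLM's guided decoding API (>= 0.6.6), not the script's exact code, and it requires a GPU to run:

```python
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

labels = ["positive", "negative"]
texts = ["I loved every minute of it.", "A dull, forgettable film."]

# Constrain generation so the model can only emit one of the labels.
guided = GuidedDecodingParams(choice=labels)
params = SamplingParams(temperature=0.1, guided_decoding=guided)

llm = LLM(model="HuggingFaceTB/SmolLM3-3B")

prompts = [
    f"Classify this review as one of: {', '.join(labels)}.\nText: {t}\nLabel:"
    for t in texts
]

# vLLM batches all prompts internally for throughput.
outputs = llm.generate(prompts, params)
predictions = [o.outputs[0].text.strip() for o in outputs]
```

Because `GuidedDecodingParams(choice=...)` restricts the token-level search to the given strings, every entry in `predictions` is guaranteed to be one of the labels.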
## Troubleshooting

### CUDA Not Available

This script requires a GPU. Run it on:
- A machine with an NVIDIA GPU
- HF Jobs (recommended)
- Cloud GPU instances

### Out of Memory

- Use a smaller model
- Use a larger GPU (e.g., `a100-large`)

### Invalid/Skipped Texts

- Texts shorter than 3 characters are skipped
- Empty or `None` values are marked as invalid
- Very long texts are truncated to 4000 characters

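In code, these rules amount to something like the following (a sketch of the behavior described above; the script's actual implementation may differ):

```python
# Thresholds taken from the rules above.
MIN_LENGTH = 3      # texts shorter than this are skipped
MAX_LENGTH = 4000   # longer texts are truncated

def preprocess_text(text):
    """Return a cleaned text, or None if the input is invalid/skipped."""
    if text is None:
        return None               # None values are marked invalid
    text = str(text).strip()
    if len(text) < MIN_LENGTH:
        return None               # too short to classify meaningfully
    return text[:MAX_LENGTH]      # truncate very long texts
```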
### Classification Quality

- With guided decoding, outputs are guaranteed to be valid labels
- For better results, use clear and distinct label names
- Try the `reasoning` prompt style for complex classifications
- Use a larger model for nuanced tasks

### vLLM Version Issues

If you see `ImportError: cannot import name 'GuidedDecodingParams'`:
- Your vLLM version is too old (the script requires >= 0.6.6)
- The script specifies the correct version in its dependencies
- uv should install the correct version automatically

## License

This script is provided as-is for use with the UV Scripts organization.