---
viewer: false
tags: [uv-script, classification, vllm, structured-outputs, gpu-required]
---

# Dataset Classification with vLLM

Efficient text classification for Hugging Face datasets using vLLM with structured outputs. This script provides GPU-accelerated classification with guaranteed valid outputs through guided decoding.

## 🚀 Quick Start

```bash
# Classify IMDB reviews
uv run classify-dataset.py \
  --input-dataset stanfordnlp/imdb \
  --column text \
  --labels "positive,negative" \
  --output-dataset user/imdb-classified
```

That's it! No installation, no setup - just `uv run`.

## 📋 Requirements

- **GPU required**: this script uses vLLM for efficient inference
- Python 3.10+
- UV (handles all dependencies automatically)
- vLLM >= 0.6.6 (for guided decoding support)

## 🎯 Features

- **Guaranteed valid outputs** using vLLM's guided decoding with Outlines
- **Zero-shot classification** with structured generation
- **GPU-optimized** with vLLM's automatic batching for maximum efficiency
- **Default model**: HuggingFaceTB/SmolLM3-3B (a fast 3B model, easily changeable)
- **Robust text handling** with preprocessing and validation
- **Three prompt styles** for different use cases
- **Automatic progress tracking** and detailed statistics
- **Direct Hub integration** - read and write datasets seamlessly

## 💻 Usage

### Basic Classification

```bash
uv run classify-dataset.py \
  --input-dataset <dataset-id> \
  --column <column-name> \
  --labels <label1,label2,...> \
  --output-dataset <output-id>
```

### Arguments

**Required:**

- `--input-dataset`: Hugging Face dataset ID (e.g., `stanfordnlp/imdb`, `user/my-dataset`)
- `--column`: Name of the text column to classify
- `--labels`: Comma-separated classification labels (e.g., `"spam,ham"`)
- `--output-dataset`: Where to save the classified dataset

**Optional:**

- `--model`: Model to use (default: **`HuggingFaceTB/SmolLM3-3B`** - a fast 3B-parameter model)
- `--prompt-style`: Choose from `simple`, `detailed`, or `reasoning` (default: `simple`)
- `--split`: Dataset split to process (default:
`train`)
- `--max-samples`: Limit samples for testing
- `--temperature`: Generation temperature (default: 0.1)
- `--guided-backend`: Backend for guided decoding (default: `outlines`)
- `--hf-token`: Hugging Face token (or use the `HF_TOKEN` env var)

### Prompt Styles

- **simple**: Direct classification prompt
- **detailed**: Emphasizes exact category matching
- **reasoning**: Includes a brief analysis before classification

All styles benefit from structured-output guarantees - the model can only output valid labels!

## 📊 Examples

### Sentiment Analysis

```bash
uv run classify-dataset.py \
  --input-dataset stanfordnlp/imdb \
  --column text \
  --labels "positive,negative" \
  --output-dataset user/imdb-sentiment
```

### Support Ticket Classification

```bash
uv run classify-dataset.py \
  --input-dataset user/support-tickets \
  --column content \
  --labels "bug,feature_request,question,other" \
  --output-dataset user/tickets-classified \
  --prompt-style reasoning
```

### News Categorization

```bash
uv run classify-dataset.py \
  --input-dataset ag_news \
  --column text \
  --labels "world,sports,business,tech" \
  --output-dataset user/ag-news-categorized \
  --model meta-llama/Llama-3.2-3B-Instruct
```

## 🚀 Running on HF Jobs

This script is optimized for [Hugging Face Jobs](https://huggingface.co/docs/hub/spaces-gpu-jobs) (requires a Pro subscription or a Team/Enterprise organization):

```bash
# Run on an L4 GPU with the vLLM image
hf jobs uv run \
  --flavor l4x1 \
  --image vllm/vllm-openai:latest \
  https://huggingface.co/datasets/uv-scripts/classification/raw/main/classify-dataset.py \
  --input-dataset stanfordnlp/imdb \
  --column text \
  --labels "positive,negative" \
  --output-dataset user/imdb-classified
```

### GPU Flavors

- `t4-small`: Budget option for smaller models
- `l4x1`: Good balance for 7B models
- `a10g-small`: Fast inference for 3B models
- `a10g-large`: More memory for larger models
- `a100-large`: Maximum performance

## 🔧 Advanced Usage

### Using Different Models

By default, this script uses
**HuggingFaceTB/SmolLM3-3B** - a fast, efficient 3B-parameter model that works well for most classification tasks. You can easily swap in any other instruction-tuned model:

```bash
# Larger model for complex classification
uv run classify-dataset.py \
  --input-dataset user/legal-docs \
  --column text \
  --labels "contract,patent,brief,memo,other" \
  --output-dataset user/legal-classified \
  --model Qwen/Qwen2.5-7B-Instruct
```

### Large Datasets

vLLM automatically handles batching for optimal performance. Very large datasets are processed efficiently without manual intervention:

```bash
uv run classify-dataset.py \
  --input-dataset user/huge-dataset \
  --column text \
  --labels "A,B,C" \
  --output-dataset user/huge-classified
```

## 📈 Performance

- **SmolLM3-3B (default)**: ~50-100 texts/second on an A10
- **7B models**: ~20-50 texts/second on an A10
- vLLM automatically tunes batching for best throughput

## 🤝 How It Works

1. **vLLM**: Provides efficient GPU batch inference
2. **Guided decoding**: Uses Outlines to guarantee valid label outputs
3. **Structured generation**: Constrains model outputs to the exact label choices
4. **UV**: Handles all dependencies automatically

The script loads your dataset, preprocesses the texts, classifies each one using guided decoding so that only valid labels can be generated, then saves the results as a new column in the output dataset.

## 🐛 Troubleshooting

### CUDA Not Available

This script requires a GPU.
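If you're not sure whether the current machine qualifies, a quick pre-flight check is to look for `nvidia-smi`. This helper is a sketch, not part of `classify-dataset.py`:

```python
import shutil
import subprocess


def has_nvidia_gpu() -> bool:
    """Return True if nvidia-smi is present and lists at least one GPU."""
    if shutil.which("nvidia-smi") is None:
        return False  # driver tooling not installed; no usable NVIDIA GPU
    # `nvidia-smi -L` prints one line per detected GPU
    result = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    return result.returncode == 0 and "GPU" in result.stdout


if __name__ == "__main__":
    if has_nvidia_gpu():
        print("NVIDIA GPU detected - vLLM should be able to run.")
    else:
        print("No NVIDIA GPU found - use HF Jobs or a cloud GPU instance.")
```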
Run it on:

- A machine with an NVIDIA GPU
- HF Jobs (recommended)
- Cloud GPU instances

### Out of Memory

- Use a smaller model
- Use a larger GPU (e.g., `a100-large`)

### Invalid/Skipped Texts

- Texts shorter than 3 characters are skipped
- Empty or `None` values are marked as invalid
- Very long texts are truncated to 4000 characters

### Classification Quality

- With guided decoding, outputs are guaranteed to be valid labels
- For better results, use clear and distinct label names
- Try the `reasoning` prompt style for complex classifications
- Use a larger model for nuanced tasks

### vLLM Version Issues

If you see `ImportError: cannot import name 'GuidedDecodingParams'`:

- Your vLLM version is too old (guided decoding requires >= 0.6.6)
- The script specifies the correct version in its dependencies
- UV should automatically install the correct version

## 🔬 Advanced Example: ArXiv ML Trends Analysis

For a more complex real-world example, we provide scripts to analyze ML research trends from ArXiv papers.

### Step 1: Prepare the Dataset

```bash
# Filter and prepare ArXiv CS papers from 2024
uv run prepare_arxiv_2024.py
```

This creates a filtered dataset of CS papers with combined title+abstract text.

### Step 2: Run Classification with the Python API

```bash
# Use the HF Jobs Python API to classify papers
uv run run_arxiv_classification.py
```

This script demonstrates:

- Using `run_uv_job()` from the Python API
- Classifying into modern ML trends (reasoning, agents, multimodal, robotics, etc.)
- Handling authentication and job monitoring

The classification categories include:

- `reasoning_systems`: Chain-of-thought, reasoning, problem solving
- `agents_autonomous`: Agents, tool use, autonomous systems
- `multimodal_models`: Vision-language, audio, multi-modal
- `robotics_embodied`: Robotics, embodied AI, manipulation
- `efficient_inference`: Quantization, distillation, edge deployment
- `alignment_safety`: RLHF, alignment, safety, interpretability
- `generative_models`: Diffusion, generation, synthesis
- `foundational_other`: Other foundational ML/AI research

## 📝 License

This script is provided as-is for use with the UV Scripts organization.
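For reference, the Step 2 job submission described above could be sketched with `run_uv_job()` along these lines. This is a hypothetical sketch, not the actual `run_arxiv_classification.py`: the `script_args`, `flavor`, and `image` parameter names follow the `huggingface_hub` Jobs API and may vary by version, and the dataset IDs are placeholders.

```python
ARXIV_LABELS = [
    "reasoning_systems", "agents_autonomous", "multimodal_models",
    "robotics_embodied", "efficient_inference", "alignment_safety",
    "generative_models", "foundational_other",
]

SCRIPT_URL = (
    "https://huggingface.co/datasets/uv-scripts/classification"
    "/raw/main/classify-dataset.py"
)


def build_script_args(input_dataset: str, column: str,
                      labels: list[str], output_dataset: str) -> list[str]:
    """Assemble the CLI arguments passed through to classify-dataset.py."""
    return [
        "--input-dataset", input_dataset,
        "--column", column,
        "--labels", ",".join(labels),
        "--output-dataset", output_dataset,
        "--prompt-style", "reasoning",
    ]


def submit_job() -> str:
    """Launch the classification job on HF Jobs (requires HF auth; network call)."""
    # Import deferred so build_script_args() is usable without huggingface_hub installed.
    from huggingface_hub import run_uv_job

    job = run_uv_job(
        SCRIPT_URL,
        script_args=build_script_args(
            "user/arxiv-cs-2024",    # placeholder: output of prepare_arxiv_2024.py
            "text",
            ARXIV_LABELS,
            "user/arxiv-ml-trends",  # placeholder output dataset
        ),
        flavor="l4x1",
        image="vllm/vllm-openai:latest",
    )
    return job.url  # open in a browser to monitor progress


# url = submit_job()  # uncomment to actually launch the job
```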