davanstrien (HF Staff) and Claude Opus 4.5 committed
Commit 403d2c6 · 1 parent: a7a851e

Add hf-jobs tag to README frontmatter

Standardizing metadata tags across uv-scripts organization
for better discoverability of HF Jobs-compatible scripts.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Files changed (1): README.md (+63 −2)

README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 viewer: false
-tags: [uv-script, vllm, gpu, inference]
+tags: [uv-script, vllm, gpu, inference, hf-jobs]
 ---
 
 # vLLM Inference Scripts
@@ -11,6 +11,67 @@ These scripts use [UV's inline script metadata](https://docs.astral.sh/uv/guides
 
 ## 📋 Available Scripts
 
+### vlm-classify.py
+
+Vision Language Model (VLM) image classification with structured output constraints.
+
+**Features:**
+
+- 🖼️ Process images through state-of-the-art VLMs (Qwen2-VL)
+- 🎯 Structured classification using vLLM's `GuidedDecodingParams`
+- 📐 Automatic image resizing to optimize token usage
+- 💾 Memory-efficient lazy batch processing
+- 🏷️ Simple CLI interface for defining classes
+- 🤗 Direct integration with Hugging Face datasets
+
+**Usage:**
+
+```bash
+# Basic classification
+uv run vlm-classify.py \
+  username/input-dataset \
+  username/output-dataset \
+  --classes "document,photo,diagram,other"
+
+# With custom prompt and image resizing
+uv run vlm-classify.py \
+  username/input-dataset \
+  username/output-dataset \
+  --classes "index-card,manuscript,title-page,other" \
+  --prompt "What type of historical document is this?" \
+  --max-size 768
+
+# Quick test with sample limit
+uv run vlm-classify.py \
+  davanstrien/sloane-index-cards \
+  username/test-output \
+  --classes "index,content,other" \
+  --max-samples 10
+```
+
+**HF Jobs execution:**
+
+```bash
+hf jobs uv run \
+  --flavor a10g \
+  --image vllm/vllm-openai \
+  -s HF_TOKEN \
+  https://huggingface.co/datasets/uv-scripts/vllm/raw/main/vlm-classify.py \
+  username/input-dataset \
+  username/output-dataset \
+  --classes "title-page,content,index,other" \
+  --max-size 768
+```
+
+**Key Parameters:**
+
+- `--classes`: Comma-separated list of classification categories (required)
+- `--prompt`: Custom classification prompt (optional, auto-generated if not provided)
+- `--max-size`: Maximum image dimension in pixels for resizing (reduces token count)
+- `--model`: VLM model to use (default: Qwen/Qwen2-VL-7B-Instruct)
+- `--batch-size`: Number of images to process at once (default: 8)
+- `--max-samples`: Limit number of samples for testing
+
 ### classify-dataset.py
 
 Batch text classification using BERT-style encoder models (e.g., BERT, RoBERTa, DeBERTa, ModernBERT) with vLLM's optimized inference engine.
@@ -101,7 +162,7 @@ hf jobs uv run \
   --flavor l4x4 \
   --image vllm/vllm-openai \
   -e UV_PRERELEASE=if-necessary \
-  -s HF_TOKEN=hf_*** \
+  -s HF_TOKEN \
   https://huggingface.co/datasets/uv-scripts/vllm/raw/main/generate-responses.py \
   davanstrien/cards_with_prompts \
   davanstrien/test-generated-responses \
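
The `--max-size` option documented above caps image dimensions to reduce token usage. A minimal sketch of that behavior — scale the longest side down to `max_size` while preserving aspect ratio. `resized_dims` is a hypothetical helper for illustration; the actual resizing code in vlm-classify.py may differ.

```python
# Hypothetical sketch of --max-size resizing: cap the longest image
# dimension at max_size, preserving aspect ratio. Not taken from
# vlm-classify.py itself.

def resized_dims(width: int, height: int, max_size: int) -> tuple[int, int]:
    """Return (new_width, new_height) with the longest side <= max_size."""
    longest = max(width, height)
    if longest <= max_size:
        return width, height  # already small enough; no resize needed
    scale = max_size / longest
    return round(width * scale), round(height * scale)

# A 3000x1500 scan capped at 768px scales to 768x384.
print(resized_dims(3000, 1500, 768))  # -> (768, 384)
```

Fewer pixels means fewer image tokens per sample, which is why the README suggests `--max-size 768` for document scans.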
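
The `--classes` flag takes a comma-separated list, and `--prompt` is auto-generated when omitted. One way that parsing and prompt generation might look — the function names and prompt wording here are assumptions for illustration, not code from the script:

```python
# Illustrative sketch of --classes parsing and default-prompt generation.
# The exact wording vlm-classify.py uses is an assumption.

def parse_classes(spec: str) -> list[str]:
    """Split the comma-separated --classes value into clean labels."""
    return [c.strip() for c in spec.split(",") if c.strip()]

def default_prompt(classes: list[str]) -> str:
    """Build a constrained-choice classification prompt from the labels."""
    return f"Classify this image as one of: {', '.join(classes)}."

labels = parse_classes("document,photo,diagram,other")
print(default_prompt(labels))
# -> Classify this image as one of: document, photo, diagram, other.
```

The same label list would also feed vLLM's guided decoding (the `GuidedDecodingParams` mentioned in the Features list), constraining the model's output to exactly one of the given classes.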