---
license: mit
---
This uv script runs batch inference with vLLM over a Hugging Face dataset, as long as the dataset has a `messages` column. It's based on [https://huggingface.co/datasets/uv-scripts/vllm/raw/main/generate-responses.py](https://huggingface.co/datasets/uv-scripts/vllm/raw/main/generate-responses.py).
The only difference is that it uses `llm.chat()` instead of `llm.generate()`, so the responses follow the more familiar OpenAI chat format and are easier to work with.
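As a rough sketch, each row of the `messages` column is assumed to hold an OpenAI-style chat list (the format `llm.chat()` consumes); the validator below is illustrative, not part of the script:

```python
# A hypothetical row of the input dataset's "messages" column,
# in the OpenAI chat format that llm.chat() expects.
row = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize photosynthesis in one sentence."},
    ]
}


def is_valid_messages(messages):
    """Check every turn is a dict with an allowed role and string content."""
    return all(
        isinstance(m, dict)
        and m.get("role") in {"system", "user", "assistant"}
        and isinstance(m.get("content"), str)
        for m in messages
    )


print(is_valid_messages(row["messages"]))  # True
```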

## Launch Job via SDK
```python
#!/usr/bin/env python3
import os

from dotenv import load_dotenv
from huggingface_hub import HfApi

load_dotenv()

DATASET_REPO_ID = "tytodd/test-job-dataset"
SCRIPT_URL = "https://huggingface.co/datasets/modaic/batch-vllm/raw/main/generate_responses.py"


def main() -> None:
    api = HfApi()
    job_info = api.run_uv_job(
        SCRIPT_URL,
        script_args=[
            DATASET_REPO_ID,
            DATASET_REPO_ID,
            "--model-id",
            # "Qwen/Qwen3-235B-A22B-Instruct-2507",
            "deepseek-ai/DeepSeek-V3.2",
            # "zai-org/GLM-5", # transformers > 5
            # "moonshotai/Kimi-K2.5",
            "--messages-column",
            "messages",
        ],
        dependencies=["transformers<5"],
        image="vllm/vllm-openai:latest",
        flavor="h200x4",
        secrets={"HF_TOKEN": os.getenv("HF_TOKEN")},
    )
    print(f"Created job {job_info.id}")
    print(job_info.url)


if __name__ == "__main__":
    main()
```

## Launch Job via CLI

```bash
uvx hf jobs uv run \
    --flavor l4x4 \
    --secrets HF_TOKEN \
    https://huggingface.co/datasets/modaic/batch-vllm/resolve/main/generate_responses.py \
    username/input-dataset \
    username/output-dataset \
    --messages-column messages \
    --model-id Qwen/Qwen3-30B-A3B-Instruct-2507 \
    --temperature 0.7 \
    --max-tokens 16384
```