---
license: mit
---

This uv-script lets you run batch inference with vLLM over a Hugging Face dataset, as long as the dataset has a `messages` column. It's based on [https://huggingface.co/datasets/uv-scripts/vllm/raw/main/generate-responses.py](https://huggingface.co/datasets/uv-scripts/vllm/raw/main/generate-responses.py).

The only difference is that it uses `llm.chat()` instead of `llm.generate()`, so the responses follow the familiar OpenAI chat format and are easier to work with.

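Each row of the dataset's `messages` column is expected to hold an OpenAI-style list of role/content dicts, which is what `llm.chat()` consumes. A minimal sketch of one row (the system/user text here is made up for illustration):

```python
# Hypothetical example row; only the role/content structure matters.
row = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize vLLM in one sentence."},
    ]
}

# Every message needs a "role" and a "content" field.
for message in row["messages"]:
    assert {"role", "content"} <= set(message)
```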
## Launch Job via SDK

```python
#!/usr/bin/env python3
from dotenv import load_dotenv

# …

if __name__ == "__main__":
    main()
```

## Launch Job via CLI

```
uvx hf jobs uv run \
```