Instructions to use amd/AMD-OLMo-1B-SFT with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use amd/AMD-OLMo-1B-SFT with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="amd/AMD-OLMo-1B-SFT")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("amd/AMD-OLMo-1B-SFT")
model = AutoModelForCausalLM.from_pretrained("amd/AMD-OLMo-1B-SFT")
```

- Notebooks
- Google Colab
- Kaggle
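AMD-OLMo-1B-SFT is a supervised fine-tuned chat model, so the Transformers pipeline above works best when the raw instruction is wrapped in the model's chat template. A minimal sketch (the `<|user|>`/`<|assistant|>` template string follows the upstream AMD-OLMo model card; verify it against the tokenizer's `chat_template` before relying on it):

```python
# Sketch: wrap an instruction in the chat template used by the AMD-OLMo
# SFT models. The template string is taken from the upstream model card
# and should be double-checked against tokenizer.chat_template.
TEMPLATE = "<|user|>\n{instruction}\n<|assistant|>\n"

def format_prompt(instruction: str) -> str:
    """Return the instruction wrapped in the SFT chat template."""
    return TEMPLATE.format(instruction=instruction)

prompt = format_prompt("What is a large language model?")
# Pass `prompt` to the pipeline, e.g. pipe(prompt, max_new_tokens=256)
print(prompt)
```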
- Local Apps
- vLLM
How to use amd/AMD-OLMo-1B-SFT with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "amd/AMD-OLMo-1B-SFT"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "amd/AMD-OLMo-1B-SFT",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

Use Docker
```shell
docker model run hf.co/amd/AMD-OLMo-1B-SFT
```
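The same completions request can be issued from Python instead of curl. A small sketch that builds the request for the OpenAI-compatible `/v1/completions` endpoint (the host and port assume the default `vllm serve` settings; actually sending the request requires a running server, so that step is left commented out):

```python
import json

def build_completion_request(prompt: str,
                             model: str = "amd/AMD-OLMo-1B-SFT",
                             max_tokens: int = 512,
                             temperature: float = 0.5):
    """Return (url, headers, body) for an OpenAI-compatible completions call."""
    url = "http://localhost:8000/v1/completions"  # default vLLM port
    headers = {"Content-Type": "application/json"}
    body = json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    })
    return url, headers, body

url, headers, body = build_completion_request("Once upon a time,")
# To actually send it (requires a running server):
#   import urllib.request
#   req = urllib.request.Request(url, body.encode(), headers)
#   print(urllib.request.urlopen(req).read().decode())
print(body)
```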
- SGLang
How to use amd/AMD-OLMo-1B-SFT with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "amd/AMD-OLMo-1B-SFT" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "amd/AMD-OLMo-1B-SFT",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "amd/AMD-OLMo-1B-SFT" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "amd/AMD-OLMo-1B-SFT",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

- Docker Model Runner
How to use amd/AMD-OLMo-1B-SFT with Docker Model Runner:
```shell
docker model run hf.co/amd/AMD-OLMo-1B-SFT
```
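vLLM and SGLang both expose OpenAI-compatible APIs, so their `/v1/completions` responses share the same schema. A sketch of pulling the generated text out of a response (the `sample_response` below is an illustration of the schema, not output captured from a real server):

```python
# Sketch: extract generated text from an OpenAI-style completions response.
# `sample_response` is a made-up illustration of the response schema.
sample_response = {
    "object": "text_completion",
    "model": "amd/AMD-OLMo-1B-SFT",
    "choices": [
        {"index": 0,
         "text": " there was a small language model.",
         "finish_reason": "length"}
    ],
}

def extract_text(response: dict) -> str:
    """Return the text of the first completion choice."""
    return response["choices"][0]["text"]

print(extract_text(sample_response))
```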
amd olmo 1b set #3
by Thushanthiga - opened

README.md CHANGED

````diff
@@ -3,7 +3,6 @@ license: apache-2.0
 datasets:
 - allenai/dolma
 pipeline_tag: text-generation
-library_name: transformers
 ---
 # AMD-OLMo
 
@@ -283,11 +282,12 @@ hf-align/scripts/run_dpo.py hf-align/recipes/AMD-OLMo-1B-dpo.yaml \
 
 Feel free to cite our AMD-OLMo models:
 ```bash
-@
-
-
-
-
+@misc{AMD-OLMo,
+    title = {AMD-OLMo: A series of 1B language models trained from scratch by AMD on AMD Instinct™ MI250 GPUs.},
+    url = {https://huggingface.co/amd/AMD-OLMo},
+    author = {Jiang Liu, Jialian Wu, Prakamya Mishra, Zicheng Liu, Sudhanshu Ranjan, Pratik Prabhanjan Brahma, Yusheng Su, Gowtham Ramesh, Peng Sun, Zhe Li, Dong Li, Lu Tian, Emad Barsoum},
+    month = {October},
+    year = {2024}
 }
 ```
````