Instructions for using raxcore-dev/rax-3.5-chat with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use raxcore-dev/rax-3.5-chat with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="raxcore-dev/rax-3.5-chat")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("raxcore-dev/rax-3.5-chat")
model = AutoModelForImageTextToText.from_pretrained("raxcore-dev/rax-3.5-chat")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use raxcore-dev/rax-3.5-chat with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "raxcore-dev/rax-3.5-chat"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "raxcore-dev/rax-3.5-chat",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```
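The same OpenAI-compatible endpoint can also be called from Python. The sketch below uses the official `openai` client and assumes the server started above is listening on `localhost:8000` with no API key configured (vLLM then accepts any placeholder key); it is an illustration, not part of the official snippet.

```python
# Minimal sketch: call the local vLLM server via its OpenAI-compatible API.
# Assumes `pip install openai` and that `vllm serve "raxcore-dev/rax-3.5-chat"`
# is already running on localhost:8000 without an API key.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="raxcore-dev/rax-3.5-chat",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    },
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```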
Use Docker

```bash
docker model run hf.co/raxcore-dev/rax-3.5-chat
```
- SGLang
How to use raxcore-dev/rax-3.5-chat with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "raxcore-dev/rax-3.5-chat" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "raxcore-dev/rax-3.5-chat",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```
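The curl request above can also be sent from Python with plain HTTP. This is a minimal sketch using the `requests` library against the SGLang server assumed to be running on `localhost:30000`; the payload mirrors the curl example and is illustrative only.

```python
# Minimal sketch: POST the same chat-completions payload as the curl example
# to a local SGLang server (assumed to be running on localhost:30000).
import requests

payload = {
    "model": "raxcore-dev/rax-3.5-chat",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    },
                },
            ],
        }
    ],
}

response = requests.post("http://localhost:30000/v1/chat/completions", json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```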
Use Docker images

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "raxcore-dev/rax-3.5-chat" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "raxcore-dev/rax-3.5-chat",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

- Docker Model Runner
How to use raxcore-dev/rax-3.5-chat with Docker Model Runner:
```bash
docker model run hf.co/raxcore-dev/rax-3.5-chat
```
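Run without arguments, the command opens an interactive chat session. As a quick smoke test, `docker model run` can also take a one-shot prompt as a final argument; the sketch below assumes a recent Docker Model Runner version, and the prompt text is only an illustration.

```bash
# One-shot prompt instead of an interactive session (illustrative prompt text).
docker model run hf.co/raxcore-dev/rax-3.5-chat "Hello! Introduce yourself in one sentence."
```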
Upload README.md with huggingface_hub
README.md CHANGED

````diff
@@ -1,6 +1,5 @@
 ---
 license: apache-2.0
-base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
 tags:
 - text-generation
 - conversational
@@ -94,7 +93,7 @@ Hello! I'm doing well, thank you for asking. How can I help you today?</s>
 
 ## Training Details
 
-This model was
+This model was developed by RaxCore with:
 - Extended training over several days
 - Optimized for conversational interactions
 - Enhanced dialogue coherence and helpfulness
@@ -137,7 +136,6 @@ If you use Rax 3.5 Chat in your research or applications, please cite:
 title={Rax 3.5 Chat: A Fine-tuned Conversational AI Model},
 author={RaxCore},
 year={2024},
-note={Fine-tuned from TinyLlama architecture},
 organization={RaxCore - Leading developer company in Africa and beyond}
 }
 ```
````