How to use with SGLang

Use Docker images:

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "solidrust/Mixtral_AI_MiniTron_Chat-AWQ" \
        --host 0.0.0.0 \
        --port 30000
```

Call the server using curl (OpenAI-compatible API):

```shell
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "solidrust/Mixtral_AI_MiniTron_Chat-AWQ",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Quick Links
LeroyDyer/Mixtral_AI_MiniTron_Chat AWQ
- Model creator: LeroyDyer
- Original model: Mixtral_AI_MiniTron_Chat
Model Summary
These small models are easy to train for new tasks. They already have some training (not great), but they can take more, and being Mistral-based they can also take LoRA modules.
Remember to add training on top of any LoRA you merge with: load the LoRA and train it for a few cycles (e.g. 20 steps) on the same data the LoRA was originally trained on, check that the training took hold, then merge it.
- Developed by: LeroyDyer
- License: apache-2.0
- Finetuned from model : LeroyDyer/Mixtral_AI_MiniTron
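The load-refresh-merge workflow described above can be sketched with the PEFT library. This is a minimal, schematic sketch, not a tested recipe: it assumes `transformers` and `peft` are installed, the adapter path and output directory are illustrative placeholders, and the short refresher training loop is elided.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model this card was fine-tuned from.
base = AutoModelForCausalLM.from_pretrained("LeroyDyer/Mixtral_AI_MiniTron")
tokenizer = AutoTokenizer.from_pretrained("LeroyDyer/Mixtral_AI_MiniTron")

# Attach the LoRA adapter (adapter path is a placeholder).
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# ... train for a few cycles (e.g. ~20 steps) on the same data the LoRA
# was originally trained on, and verify that the behaviour took hold ...

# Merge the adapter weights into the base model and save the result.
merged = model.merge_and_unload()
merged.save_pretrained("Mixtral_AI_MiniTron_Chat-merged")
```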
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "solidrust/Mixtral_AI_MiniTron_Chat-AWQ" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "solidrust/Mixtral_AI_MiniTron_Chat-AWQ",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
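The same request can be issued from Python instead of curl. A minimal sketch that builds the identical JSON body (the `build_chat_request` helper name is illustrative, not part of any API); it only constructs the payload, and the actual POST to the running server is left as a comment:

```python
import json

def build_chat_request(model, content):
    """Build the JSON body for the OpenAI-compatible /v1/chat/completions endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }

payload = build_chat_request(
    "solidrust/Mixtral_AI_MiniTron_Chat-AWQ",
    "What is the capital of France?",
)
body = json.dumps(payload)
# POST `body` to http://localhost:30000/v1/chat/completions with header
# Content-Type: application/json (e.g. via urllib.request or the openai
# client) once the server from the section above is running.
print(body)
```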