How to use from vLLM
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "solidrust/Mixtral_AI_MiniTron_Chat-AWQ"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "solidrust/Mixtral_AI_MiniTron_Chat-AWQ",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
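Since the vLLM server exposes an OpenAI-compatible API, you can also call it from Python with the openai client. This is a minimal sketch: it assumes the server started above is running on localhost:8000, and the api_key value is a placeholder (vLLM does not require a key by default).
from openai import OpenAI

# Point the client at the local vLLM server (OpenAI-compatible API).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="solidrust/Mixtral_AI_MiniTron_Chat-AWQ",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)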
Use Docker
docker model run hf.co/solidrust/Mixtral_AI_MiniTron_Chat-AWQ
LeroyDyer/Mixtral_AI_MiniTron_Chat AWQ

Model Summary

These little models are easy to train for tasks! They already have some training (not great), but they can take more and more.

(And, being Mistral-based, they can take LoRA modules.)

Remember to add training to a LoRA before you merge it: load the LoRA, train it for a few cycles (e.g., 20 steps) on the same data the LoRA was originally built with, check that the training took hold, then merge it. A sketch of this workflow is shown below.
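A minimal sketch of that load, top-up, merge loop using transformers and peft. The adapter path "your-lora-adapter" and the training texts are hypothetical stand-ins for your own LoRA and the data it was trained on.
from datasets import Dataset
from peft import PeftModel
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_id = "LeroyDyer/Mixtral_AI_MiniTron"
tokenizer = AutoTokenizer.from_pretrained(base_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Mistral tokenizers often lack a pad token
base = AutoModelForCausalLM.from_pretrained(base_id)

# Load your LoRA adapter on top of the base model, keeping it trainable.
# "your-lora-adapter" is a hypothetical local path or Hub repo id.
model = PeftModel.from_pretrained(base, "your-lora-adapter", is_trainable=True)

# Stand-in for the same data the LoRA was originally trained on.
texts = ["<s>[INST] example prompt [/INST] example answer</s>"]
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)
train_dataset = Dataset.from_dict({"text": texts}).map(
    tokenize, batched=True, remove_columns=["text"])

# Train a few cycles (e.g., 20 steps) so the adapter takes hold.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-topup", max_steps=20,
                           per_device_train_batch_size=1),
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Fold the LoRA weights into the base model and save the merged result.
merged = model.merge_and_unload()
merged.save_pretrained("MiniTron_Chat-merged")
tokenizer.save_pretrained("MiniTron_Chat-merged")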

  • Developed by: LeroyDyer
  • License: apache-2.0
  • Finetuned from model: LeroyDyer/Mixtral_AI_MiniTron
  • Model size: 4B params (Safetensors)
  • Tensor types: I32 · F16