Sarvam-M
Model Information
This repository contains the GGUF version of sarvam-m in bf16 precision.
Learn more about sarvam-m in our detailed blog post.
Running the model on a CPU
You can run the model on your local machine (without a GPU) as explained here.
Example Command:
./build/bin/llama-cli -i -m /your/folder/path/sarvam-m-bf16.gguf -c 8192 -t 16
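If you prefer Python over the llama-cli binary, the same settings can be passed to llama-cpp-python. This is a minimal sketch, assuming the GGUF file has already been downloaded to a local path of your choosing; the -c 8192 and -t 16 flags above map to the n_ctx and n_threads constructor arguments.

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

# Load the local GGUF file with the same settings as the llama-cli command above:
# -c 8192 -> n_ctx=8192, -t 16 -> n_threads=16 (adjust the path to where you saved the file).
llm = Llama(
    model_path="/your/folder/path/sarvam-m-bf16.gguf",
    n_ctx=8192,
    n_threads=16,
)
```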
Model tree for mygitphase/guhan-m-gguf
Base model: mistralai/Mistral-Small-3.1-24B-Base-2503
Finetuned: sarvamai/sarvam-m
Running the model with llama-cpp-python:

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mygitphase/guhan-m-gguf",
    filename="sarvam-m-bf16.gguf",
)
```
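Once the model is loaded, llama-cpp-python's create_chat_completion method can be used for chat-style inference. The card's auto-generated snippet did not define an input example, so the prompt below is only an illustrative assumption:

```python
# Hypothetical prompt -- the model card does not define an input example.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a short poem about the monsoon."}
    ]
)

# The generated reply text is under choices[0]["message"]["content"].
print(response["choices"][0]["message"]["content"])
```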