How to use from llama.cpp
Install with Homebrew (macOS/Linux)
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf AdvRahul/Axion-Pro-Indic-24B:Q5_K_M
# Run inference directly in the terminal:
llama-cli -hf AdvRahul/Axion-Pro-Indic-24B:Q5_K_M
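Once llama-server is running, it exposes an OpenAI-compatible HTTP API. As a minimal sketch, the snippet below queries the `/v1/chat/completions` endpoint with only the Python standard library; it assumes llama-server's default address of `http://127.0.0.1:8080` (adjust `base_url` if you passed `--host`/`--port`).

```python
import json
import urllib.request


def build_payload(prompt: str) -> dict:
    """Build an OpenAI-style chat-completion request body."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }


def chat(prompt: str, base_url: str = "http://127.0.0.1:8080") -> str:
    """POST a prompt to llama-server and return the assistant's reply text."""
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The same client works unchanged against the WinGet, pre-built-binary, and source-built servers below, since they all serve the same API.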
Install with WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf AdvRahul/Axion-Pro-Indic-24B:Q5_K_M
# Run inference directly in the terminal:
llama-cli -hf AdvRahul/Axion-Pro-Indic-24B:Q5_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf AdvRahul/Axion-Pro-Indic-24B:Q5_K_M
# Run inference directly in the terminal:
./llama-cli -hf AdvRahul/Axion-Pro-Indic-24B:Q5_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf AdvRahul/Axion-Pro-Indic-24B:Q5_K_M
# Run inference directly in the terminal:
./build/bin/llama-cli -hf AdvRahul/Axion-Pro-Indic-24B:Q5_K_M
Use Docker
docker model run hf.co/AdvRahul/Axion-Pro-Indic-24B:Q5_K_M

Axion-Pro-Indic-24B

Model Information

Axion-Pro-Indic-24B is a multilingual, hybrid-reasoning, text-only language model built on Mistral-Small.
This post-trained version delivers exceptional improvements over the base model:

  • +20% average improvement on Indian language benchmarks
  • +21.6% enhancement on math benchmarks
  • +17.6% boost on programming benchmarks
  • +86% improvement on romanized Indian language GSM-8K benchmarks (languages × mathematics intersection).

Key Features

  • Hybrid Thinking Mode: Supports both "think" and "non-think" modes.
  • Advanced Indic Skills: Post-trained on Indian languages + English, reflecting Indian cultural values.
  • Superior Reasoning Capabilities: Outperforms similarly sized models on coding and math benchmarks.
  • Seamless Multilingual Experience: Full support for Indic scripts and romanized text.

Quickstart

With Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "AdvRahul/Axion-Pro-Indic-24B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

prompt = "Who are you and what is your purpose on this planet?"

messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # Default True; set False for no-think mode
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=8192)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]) :].tolist()
output_text = tokenizer.decode(output_ids)

if "</think>" in output_text:
    reasoning_content = output_text.split("</think>")[0].rstrip("\n")
    content = output_text.split("</think>")[-1].lstrip("\n").rstrip("</s>")
else:
    reasoning_content = ""
    content = output_text.rstrip("</s>")

print("reasoning content:", reasoning_content)
print("content:", content)
GGUF
Model size: 24B params
Architecture: llama

Quantization: 5-bit (Q5_K_M)
