---
license: apache-2.0
language:
- en
- hi
- ta
- te
- kn
- ml
- bn
- mr
- gu
pipeline_tag: text-generation
tags:
- Axion
- Indic
library_name: transformers
---
|
|
|
|
|
<div align="center" style="line-height: 1;">
  <a href="https://huggingface.co/AdvRahul" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/🤖_Chat-Axion-blue" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/AdvRahul" target="_blank" style="margin: 2px;">
    <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-AdvRahul-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://github.com/AdvRahul" target="_blank" style="margin: 2px;">
    <img alt="Github" src="https://img.shields.io/badge/GitHub-Axion-000?logo=github&color=0000FF" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://x.com/yourhandle" target="_blank" style="margin: 2px;">
    <img alt="X" src="https://img.shields.io/badge/X-Axion-6080F0?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

<div align="center" style="line-height: 1;">
  <a href="#license" style="margin: 2px;">
    <img alt="License" src="https://img.shields.io/badge/License-Apache2.0-A5de54" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>
|
|
|
|
|
# Axion-Pro-Indic-24B |
|
|
|
|
|
|
|
|
## Model Information |
|
|
|
|
|
**Axion-Pro-Indic-24B** is a multilingual, hybrid-reasoning, text-only language model built on Mistral-Small.
This post-trained version delivers significant improvements over the base model:
|
|
|
|
|
- **+20%** average improvement on Indian language benchmarks |
|
|
- **+21.6%** enhancement on math benchmarks |
|
|
- **+17.6%** boost on programming benchmarks |
|
|
- **+86%** improvement in romanized Indian language GSM-8K benchmarks (languages × mathematics intersection). |
|
|
|
|
|
### Key Features |
|
|
|
|
|
- **Hybrid Thinking Mode**: Supports both "think" and "non-think" modes. |
|
|
- **Advanced Indic Skills**: Post-trained on Indian languages + English, reflecting Indian cultural values. |
|
|
- **Superior Reasoning Capabilities**: Outperforms similarly sized models on coding and math benchmarks. |
|
|
- **Seamless Multilingual Experience**: Full support for Indic scripts and romanized text. |
|
|
|
|
|
--- |
|
|
|
|
|
## Quickstart |
|
|
|
|
|
### With Transformers |
|
|
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "AdvRahul/Axion-Pro-Indic-24B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

prompt = "Who are you and what is your purpose on this planet?"

messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # Default True; set False for no-think mode
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=8192)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
output_text = tokenizer.decode(output_ids)

# Separate the optional reasoning trace (terminated by "</think>") from the
# final answer, and drop a trailing end-of-sequence token if present.
# Note: str.removesuffix (Python 3.9+) removes the exact suffix, whereas
# rstrip("</s>") would strip any trailing '<', '/', 's', or '>' characters.
if "</think>" in output_text:
    reasoning_content = output_text.split("</think>")[0].rstrip("\n")
    content = output_text.split("</think>")[-1].lstrip("\n").removesuffix("</s>")
else:
    reasoning_content = ""
    content = output_text.removesuffix("</s>")

print("reasoning content:", reasoning_content)
print("content:", content)
```
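The `</think>`-splitting logic above can be factored into a small standalone helper so it can be reused and unit-tested without loading the model. This is an illustrative sketch (the function name `split_think_output` is ours, not part of the model's API); it only assumes the tag conventions shown in the quickstart, namely that the reasoning trace ends with `</think>` and the sequence may end with `</s>`:

```python
def split_think_output(output_text: str) -> tuple[str, str]:
    """Split decoded model output into (reasoning, answer).

    Assumes the optional "think" trace ends with a literal "</think>"
    tag and that the text may carry a trailing "</s>" end-of-sequence
    token, as in the quickstart above.
    """
    if "</think>" in output_text:
        reasoning, _, answer = output_text.partition("</think>")
        return reasoning.rstrip("\n"), answer.lstrip("\n").removesuffix("</s>")
    # No-think mode: the whole output is the answer.
    return "", output_text.removesuffix("</s>")
```

In no-think mode (`enable_thinking=False`) the model emits no `</think>` tag, so the helper returns an empty reasoning string and the full text as the answer.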