# Eka-4B

Eka-4B is a 4-billion-parameter language model optimised for mathematical reasoning and code. Despite its compact size, Eka-4B delivers performance competitive with, or superior to, significantly larger open-source models across math and code benchmarks.
## Key Strengths

- Strong Mathematical Reasoning — Solves complex, multi-step problems through sustained, coherent reasoning within a single response, with strong results on challenging benchmarks including AIME 2026 I, HMMT, and IMO-Answer-Bench.
- Strong Coding — Achieves state-of-the-art results on LiveCodeBench-V6 and LiveCodeBench-Pro among models under 10B parameters.
- Robust Preference Alignment — Delivers solid alignment performance on Arena-Hard-v2 and Multi-Challenge, outperforming both same-scale models and substantially larger ones.
## Benchmark Performance

### Math

| Benchmark | Qwen3-4B | Qwen3-8B | Qwen3-14B | Qwen3-32B | Qwen3-30B-A3B | Eka-4B |
|---|---|---|---|---|---|---|
| AIME 2026 I | 81.46 | 70.42 | 76.46 | 75.83 | 87.30 | 87.40 |
| HMMT Nov | 68.33 | 48.33 | 56.67 | 57.08 | 71.25 | 77.92 |
| IMO-Answer-Bench | 48.00 | 36.56 | 41.81 | 43.94 | 54.34 | 53.38 |
| GPQA | 65.8 | 62.0 | 63.38 | 68.4 | 73.4 | 83.8 |
| HLE (Text-only) | 6.72 | 5.28 | 7.00 | 9.31 | 11.77 | 12.60 |
### Code

| Benchmark | Qwen3-4B | Qwen3-8B | Qwen3-14B | Qwen3-32B | Qwen3-30B-A3B | Eka-4B |
|---|---|---|---|---|---|---|
| LiveCodeBench-V6 | 57.4 | 49.4 | 55.9 | 55.7 | 66.0 | 76.9 |
| LiveCodeBench-Pro (Easy) | 40.2 | 41.2 | 33.0 | 42.3 | 60.8 | 81.4 |
| LiveCodeBench-Pro (Medium) | 5.3 | 3.5 | 1.8 | 3.5 | 3.5 | 28.1 |
### Alignment

| Benchmark | Qwen3-4B | Qwen3-8B | Qwen3-14B | Qwen3-32B | Qwen3-30B-A3B | Eka-4B |
|---|---|---|---|---|---|---|
| Arena-Hard-v2 | 34.9 | 26.3 | 36.9 | 56.0 | 60.2 | 73.2 |
| Multi-Challenge | 41.14 | 36.30 | 36.97 | 38.72 | 49.40 | 52.21 |
## Quickstart

### Recommended Inference Hyperparameters

| Parameter | Value |
|---|---|
| Temperature | 0.6 |
| Top-p | 0.95 |
| Repeat penalty | 1.0 |
| Max New Tokens | 131072 |
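If you run inference through the Transformers API, one way to apply these values consistently is to bundle them into a `GenerationConfig` and reuse it across calls. A minimal sketch; `do_sample=True` is an assumption here, since temperature and top-p only take effect when sampling is enabled:

```python
from transformers import GenerationConfig

# Recommended sampling settings from the table above.
gen_config = GenerationConfig(
    do_sample=True,          # assumption: enable sampling so temperature/top-p apply
    temperature=0.6,
    top_p=0.95,
    repetition_penalty=1.0,  # "Repeat penalty" maps to repetition_penalty in Transformers
    max_new_tokens=131072,
)

# Reuse across calls, e.g. model.generate(**inputs, generation_config=gen_config)
```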
### Chat
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    'yashmarathe/Eka-4B',
    use_fast=False,
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    'yashmarathe/Eka-4B',
    torch_dtype='auto',
    device_map='auto',
    trust_remote_code=True,
)

messages = [{'role': 'user', 'content': 'Solve: find all integer solutions to x^2 + y^2 = z^2 with x, y, z > 0 and x < 10.'}]
prompt = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=False,
)

# Tokenize the prompt and move it onto the same device as the model.
inputs = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').to(model.device)

# Generate with the recommended sampling settings from the table above.
output_ids = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    max_new_tokens=131072,
    eos_token_id=166101,  # stop generation at this token id
)

# Decode only the newly generated tokens.
response = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```
### vLLM (Recommended for Production)
```bash
vllm serve yashmarathe/Eka-4B \
    --port 8000 \
    --max-model-len 131072 \
    --gpu-memory-utilization 0.95 \
    --enable-chunked-prefill \
    --trust-remote-code
```
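Once the server is running, it exposes an OpenAI-compatible API on the chosen port. A minimal client sketch using the `openai` Python package; the `api_key` value is a placeholder, since vLLM only enforces a key if the server was started with one:

```python
from openai import OpenAI

# Point the client at the vLLM server started above; the api_key is a dummy placeholder.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="yashmarathe/Eka-4B",
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
    temperature=0.6,  # recommended inference hyperparameters from the table above
    top_p=0.95,
)
print(response.choices[0].message.content)
```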
## Limitations
While significant effort has been made to align the model's outputs with ethical and legal requirements, the model may occasionally produce unexpected, biased, or otherwise problematic outputs due to its probabilistic nature. Users are responsible for evaluating outputs before deployment in production systems.
## License
Apache 2.0