# Fuhad-32B-Instruct
Fuhad-32B-Instruct is a truly open-source 32B parameter instruction-tuned language model, cloned from allenai/OLMo-2-0325-32B-Instruct.
## Why "Truly Open Source"?

Fuhad-32B-Instruct meets the highest standard of open-source AI:

- ✅ Training code: fully public (OLMo-core, Open-Instruct)
- ✅ Datasets: fully public (Dolma, training mixes released)
- ✅ No usage restrictions: Apache 2.0 license
- ✅ No restrictions on derivatives: the model can be renamed, rebranded, and refined
## Model Details
| Detail | Value |
|---|---|
| Base | OLMo-2-0325-32B-Instruct |
| Parameters | 32B |
| Precision | BF16 |
| Context Length | Up to 65K tokens |
| License | Apache 2.0 |
| Architecture | OLMo-2 (transformer) |
## API Usage

### Via OpenRouter (Recommended: Cheapest)
```python
import requests

# Send a chat-completion request through OpenRouter's OpenAI-compatible API
response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_OPENROUTER_KEY",
        "Content-Type": "application/json"
    },
    json={
        "model": "allenai/olmo-2-0325-32b-instruct",
        "messages": [
            {"role": "user", "content": "Hello, who are you?"}
        ]
    }
)
print(response.json()["choices"][0]["message"]["content"])
```
### Via DeepInfra
```python
from openai import OpenAI

# DeepInfra exposes an OpenAI-compatible endpoint
client = OpenAI(
    api_key="YOUR_DEEPINFRA_KEY",
    base_url="https://api.deepinfra.com/v1/openai"
)
response = client.chat.completions.create(
    model="allenai/OLMo-2-0325-32B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```
### Self-Hosted (vLLM)
```bash
pip install vllm
vllm serve allenai/OLMo-2-0325-32B-Instruct --tensor-parallel-size 2
```
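Once the server is up, vLLM exposes an OpenAI-compatible chat endpoint (by default at `http://localhost:8000/v1`). A minimal client sketch, assuming the default host and port; the `build_chat_request` helper and its parameter defaults are illustrative, not part of vLLM itself:

```python
import json

# Assumed default local endpoint from `vllm serve`; adjust host/port if you changed them.
VLLM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "allenai/OLMo-2-0325-32B-Instruct"


def build_chat_request(prompt, temperature=0.7, max_tokens=256):
    """Build an OpenAI-compatible chat-completion payload for the local server."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }


payload = build_chat_request("Hello, who are you?")
print(json.dumps(payload, indent=2))

# To actually send it (requires the server to be running):
# import requests
# r = requests.post(VLLM_URL, json=payload, timeout=120)
# print(r.json()["choices"][0]["message"]["content"])
```

Because the payload format follows the OpenAI chat-completions schema, the same helper works unchanged against OpenRouter or DeepInfra by swapping the URL and adding an API key header.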
## Performance
OLMo-2-32B is the first fully open model to outperform GPT-3.5-Turbo and GPT-4o-mini on key benchmarks.
| Benchmark | Score |
|---|---|
| GSM8K | 86.6 |
| MATH | 42.0 |
| IFEval | 80.3 |
| Hellaswag | 86.2 |
| ARC-C | 65.1 |
## Citation

```bibtex
@article{olmo2025,
  title={OLMo 2: The Best Fully Open Language Model to Date},
  author={Ai2},
  year={2025}
}
```