# jesusoctavioas/Olmo-3-1125-32B-mlx-2Bit
This model, `jesusoctavioas/Olmo-3-1125-32B-mlx-2Bit`, was converted to MLX format from [allenai/Olmo-3-1125-32B](https://huggingface.co/allenai/Olmo-3-1125-32B) using mlx-lm version 0.28.3.
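The card does not record the exact conversion command, but conversions like this are typically produced with the `mlx_lm.convert` CLI. The sketch below is an assumption based on the repo name (`--q-bits 2` matching the 2-bit label), not the author's recorded invocation:

```shell
# Hypothetical reconstruction of the conversion step (not the author's exact command):
# -q enables quantization; --q-bits 2 matches the "2Bit" label in the repo name.
mlx_lm.convert \
    --hf-path allenai/Olmo-3-1125-32B \
    --mlx-path Olmo-3-1125-32B-mlx-2Bit \
    -q --q-bits 2
```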
## Use with mlx
```shell
# Create a virtual environment and activate it.
python -m venv mlx-venv
source mlx-venv/bin/activate

# Install mlx-lm.
pip install mlx-lm
```
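Once installed, you can sanity-check the model straight from the command line with the `mlx_lm.generate` CLI; the prompt and token limit below are illustrative choices, not values from the card:

```shell
# Quick smoke test from the command line; --max-tokens is an illustrative choice.
mlx_lm.generate --model jesusoctavioas/Olmo-3-1125-32B-mlx-2Bit \
    --prompt "hello" --max-tokens 100
```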
```python
from mlx_lm import load, generate

model, tokenizer = load("jesusoctavioas/Olmo-3-1125-32B-mlx-2Bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
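If you want tokens as they are produced rather than one final string, mlx-lm also exposes `stream_generate`. This is a minimal sketch assuming the recent API, where each yielded item carries the decoded text in a `.text` field (the `max_tokens` value is an illustrative choice):

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("jesusoctavioas/Olmo-3-1125-32B-mlx-2Bit")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Stream tokens to stdout as they are generated.
# Assumes recent mlx-lm versions, where each yielded item exposes .text.
for response in stream_generate(model, tokenizer, prompt=prompt, max_tokens=256):
    print(response.text, end="", flush=True)
print()
```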
- Model size: 32B params
- Tensor types: BF16 · U32
- Quantization: 2-bit
## Model tree for jesusoctavioas/Olmo-3-1125-32B-mlx-2Bit

- Base model: allenai/Olmo-3-1125-32B
## Evaluation results

Scores below are self-reported in the model README, on the Benchmarks dataset.

| Benchmark | Score |
|---|---|
| Olmo 3-Eval Math | 61.600 |
| BigCodeBench | 43.900 |
| HumanEval | 66.500 |
| DeepSeek LeetCode | 1.900 |
| DS 1000 | 29.700 |
| MBPP | 60.200 |
| MultiPL HumanEval | 35.900 |
| MultiPL MBPP | 41.800 |
| Olmo 3-Eval Code | 40.000 |
| ARC MC | 94.700 |
| MMLU STEM | 70.800 |
| MedMCQA MC | 57.600 |
| MedQA MC | 53.800 |
| SciQ MC | 95.500 |
| Olmo 3-Eval MC_STEM | 74.500 |
| MMLU Humanities | 78.300 |
| MMLU Social Sci. | 83.900 |
| MMLU Other | 75.100 |
| CSQA MC | 82.300 |
| PIQA MC | 85.600 |