EXL3 quants of Olmo-Hybrid-Instruct-SFT-7B
⚠️ Requires ExLlamaV3 v0.0.26 (or v0.0.25 dev branch)
2.00 bits per weight
2.50 bits per weight
3.00 bits per weight
3.50 bits per weight
4.00 bits per weight
5.00 bits per weight
6.00 bits per weight
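A sketch of fetching one of the quant levels above, assuming the common convention for EXL3 uploads where each bitrate lives in its own repo revision (e.g. a `4.0bpw` branch) — verify the actual revision names on the repo's branch list before downloading:

```shell
# Hedged example: revision name "4.0bpw" is an assumption; check the repo's branches.
pip install "exllamav3>=0.0.26"   # these quants require ExLlamaV3 v0.0.26 or newer
huggingface-cli download turboderp/Olmo-Hybrid-Instruct-SFT-7B-exl3 \
    --revision 4.0bpw \
    --local-dir Olmo-Hybrid-Instruct-SFT-7B-exl3-4.0bpw
```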
Model tree for turboderp/Olmo-Hybrid-Instruct-SFT-7B-exl3
Base model: allenai/Olmo-Hybrid-7B
Finetuned from: allenai/Olmo-Hybrid-Instruct-SFT-7B