Cannae AI
Company
AI & ML interests: Building fast, efficient AI models optimized for everyday hardware.

A collection of various distilled LLMs by Cannae AI.
Our fine-tunes of open models on multimodal datasets consisting of medical instructions, reasoning, and radiological images from the PMC Open Subset.
- Cannae-AI/MedicalQwen3-Reasoning-14B-IT: Text Generation • 15B • Updated • 80 • 4
- Cannae-AI/MedicalQwen3-Reasoning-4B: Text Generation • 4B • Updated • 153 • 2
- Cannae-AI/MedicalLlama3.2-vision-11B-IT: Image-Text-to-Text • 11B • Updated • 33 • 10
- mradermacher/MedicalQwen3-Reasoning-4B-i1-GGUF: 4B • Updated • 27 • 1
Our instruction and personality fine-tunes of open-weight models, designed for chat and versatile use cases.
An abliterated, uncensored LLM collection by Cannae AI.
- Cannae-AI/HERETICODER-2.5-7B-IT: Text Generation • 8B • Updated • 9 • 1
- Cannae-AI/HERETICSEEK-7B-Ditill: Text Generation • 8B • Updated • 6 • 1
- Cannae-AI/HERETICODER-2.5-3B-IT: Text Generation • 3B • Updated • 7 • 2
- Cannae-AI/HERETICSEEK-1.5B-R1: Text Generation • 2B • Updated • 5
A collection of fine-tuned models for better mathematical reasoning.