Lockout
AI & ML interests
None yet
Recent Activity
new activity about 14 hours ago: "q8_0 mmproj?" on unsloth/Mistral-Medium-3.5-128B-GGUF
updated a model about 24 hours ago: Lockout/Mistral-Medium-3.5-128B-GGUF-Q8_0-mmproj
published a model 1 day ago: Lockout/Mistral-Medium-3.5-128B-GGUF-Q8_0-mmproj
Organizations
None yet
q8_0 mmproj?
1
#8 opened 3 days ago by Lockout
mmproj are undersized
#1 opened 3 days ago by Lockout
Heads Up (May 1): Transformers Config Fix – What It Means for GGUFs & Quantized & Fine-Tuned Models
👍🔥 14
8
#18 opened 13 days ago by juliendenize
Finally dense 100b+!
🔥❤️ 3
12
#6 opened 15 days ago by Sliderpro93
GGUF/llama.cpp support
🔥 1
3
#1 opened 2 months ago by tcpmux
Thanks for the trim pointer but it needs one more place.
#2 opened about 1 month ago by Lockout
How is it compared to the previous version?
2
#1 opened about 2 months ago by Lockout
Love the license, confused by some of the decisions.
🤝👍 16
15
#15 opened about 2 months ago by CyborgPaloma
does this contain actual lean
6
#4 opened about 2 months ago by unokayish182
Devstral?
3
#1 opened 3 months ago by Lockout
Decent PPL with 100% IQ4_KSS
🔥 1
12
#3 opened 5 months ago by sokann
What was the process to quantize?
2
#1 opened 4 months ago by Lockout
It's actually a good model.
#1 opened 4 months ago by Lockout
The best model in the world for 5 months already
🔥 1
1
#4 opened 5 months ago
by deleted
Optimized Quant between 3bpw and 3.5bpw
32
#1 opened 5 months ago by remichu
Eval: https://huggingface.co/QuixiAI/Ina-v11.1 70b
#494 opened 5 months ago by Lockout
Anyone found any obvious differences compared to using the stock model?
8
#4 opened 5 months ago by mingyi456
EVAL: Behemoth-X V2.1
#459 opened 5 months ago by Lockout