unsloth/MiniMax-M2.5-GGUF

Tags: Text Generation · Transformers · GGUF · unsloth · minimax_m2 · imatrix · conversational
Community (11 discussions)
#6 (pinned): MiniMax-M2.5 GGUF Benchmarks
👀👍 7 reactions · 4 replies · opened about 2 months ago by danielhanchen
#11: <think> block not generated / requires manual chat template modification
2 replies · opened 7 days ago by AiverAiva
#10: Q4_K_XL is smaller than Q4_K_M
2 replies · opened 21 days ago by lzm1066258
#9: Q3_K_M results with 2x RTX 2080 Ti (22 GB VRAM each) // Xeon E5-2682 v4 16C, 128 GB DDR4-2133 quad-channel
1 reply · opened 28 days ago by yuzu127000d
#8: Install & run unsloth/MiniMax-M2.5-GGUF easily using llmpm
Opened about 1 month ago by sarthak-saxena
#7: Any plans to update this model with the new quant formulas?
Opened about 1 month ago by dfz1
#5: Can you guys do Cerebras' MiniMax M2.1 139B REAP?
👍 2 reactions · 2 replies · opened about 2 months ago by watchingyousleep
#4: Can you share a PPL graph for your quants?
👍 5 reactions · 3 replies · opened about 2 months ago by ox-ox
#3: 4-bit quantization: MXFP4_MOE vs Q4_K_XL?
4 replies · opened about 2 months ago by fsaudm
#2: Q3_K_XL results with 1x Radeon RX 7900 XTX (24 GB VRAM) // Ryzen 7 9700X, 96 GB
5 replies · opened about 2 months ago by flaviocb
#1: Why is it labeled as a finetune?
4 replies · opened about 2 months ago by CHNtentes