ubergarm/MiniMax-M2.5-GGUF
Tags: Text Generation · GGUF · imatrix · conversational · minimax_m2 · ik_llama.cpp
Repository size: 839 GB
1 contributor · History: 34 commits
Latest commit by ubergarm: "As requested, add UD-IQ3_XXS to perplexity chart" (6e9d9db, about 2 months ago)

IQ2_KS/                           Upload folder using huggingface_hub                          about 2 months ago
IQ4_NL/                           Upload folder using huggingface_hub                          about 2 months ago
IQ4_XS/                           Upload folder using huggingface_hub                          about 2 months ago
IQ5_K/                            Upload folder using huggingface_hub                          about 2 months ago
images/                           As requested, add UD-IQ3_XXS to perplexity chart             about 2 months ago
logs/                             update perplexity logs with exact command                    about 2 months ago
mainline-IQ4_NL/                  Upload folder using huggingface_hub                          about 2 months ago
smol-IQ3_KS/                      Upload folder using huggingface_hub                          about 2 months ago
smol-IQ4_KSS/                     Upload folder using huggingface_hub                          about 2 months ago
.gitattributes                    1.65 kB   initial commit                                     about 2 months ago
README.md                         12 kB     add link to AesSedai/MiniMax-M2.5-GGUF             about 2 months ago
imatrix-MiniMax-M2.5-BF16.dat     492 MB    Upload imatrix-MiniMax-M2.5-BF16.dat with huggingface_hub   about 2 months ago