btbtyler09/Devstral-Small-2-24B-Instruct-INT4-INT8-Mixed-GPTQ
Pipeline: Image-Text-to-Text
Tags: Transformers, Safetensors, mistral3, mistral, devstral, gptq, quantized, 4-bit precision, 8-bit precision, mixed-precision, vllm, rocm, code, conversational, compressed-tensors
License: apache-2.0
Files and versions (branch: main, 25.1 GB)
1 contributor · History: 5 commits
Latest commit: Update README.md (8020e9d, verified) by btbtyler09, 4 months ago
.gitattributes             1.57 kB    Upload folder using huggingface_hub    4 months ago
README.md                  3.68 kB    Update README.md                       4 months ago
chat_template.jinja        5.32 kB    Upload folder using huggingface_hub    4 months ago
config.json                15.2 kB    Upload folder using huggingface_hub    4 months ago
generation_config.json     175 Bytes  Upload folder using huggingface_hub    4 months ago
model.safetensors          25.1 GB    Upload folder using huggingface_hub    4 months ago
perplexity_results.txt     167 Bytes  Upload folder using huggingface_hub    4 months ago
preprocessor_config.json   699 Bytes  Upload folder using huggingface_hub    4 months ago
processor_config.json      976 Bytes  Upload folder using huggingface_hub    4 months ago
quantize.py                5.16 kB    Upload folder using huggingface_hub    4 months ago
recipe.yaml                1.04 kB    Upload folder using huggingface_hub    4 months ago
tokenizer.json             17.1 MB    Upload folder using huggingface_hub    4 months ago
tokenizer_config.json      21.2 kB    Upload folder using huggingface_hub    4 months ago