btbtyler09/Devstral-Small-2-24B-Instruct-INT4-INT8-Mixed-GPTQ
Pipeline: Image-to-Text
Libraries: Transformers, Safetensors
Tags: mistral3, mistral, devstral, gptq, quantized, 4-bit precision, 8-bit precision, mixed-precision, vllm, rocm, code, compressed-tensors
License: apache-2.0
Files and versions (branch: main)
Repository size: 25.1 GB, 1 contributor, 5 commits
Latest commit: 8020e9d (verified), "Update README.md" by btbtyler09, 14 days ago
| File | Size | Last commit message | Last updated |
| --- | --- | --- | --- |
| .gitattributes | 1.57 kB | Upload folder using huggingface_hub | 15 days ago |
| README.md | 3.68 kB | Update README.md | 14 days ago |
| chat_template.jinja | 5.32 kB | Upload folder using huggingface_hub | 15 days ago |
| config.json | 15.2 kB | Upload folder using huggingface_hub | 15 days ago |
| generation_config.json | 175 Bytes | Upload folder using huggingface_hub | 15 days ago |
| model.safetensors | 25.1 GB | Upload folder using huggingface_hub | 15 days ago |
| perplexity_results.txt | 167 Bytes | Upload folder using huggingface_hub | 15 days ago |
| preprocessor_config.json | 699 Bytes | Upload folder using huggingface_hub | 15 days ago |
| processor_config.json | 976 Bytes | Upload folder using huggingface_hub | 15 days ago |
| quantize.py | 5.16 kB | Upload folder using huggingface_hub | 15 days ago |
| recipe.yaml | 1.04 kB | Upload folder using huggingface_hub | 15 days ago |
| tokenizer.json | 17.1 MB | Upload folder using huggingface_hub | 15 days ago |
| tokenizer_config.json | 21.2 kB | Upload folder using huggingface_hub | 15 days ago |