madgnu/GigaChat-20B-A3B-instruct-mlx-4bit
Tags: Text Generation · MLX · Safetensors · Russian · English · deepseek · conversational · custom_code · 4-bit precision
License: MIT
Branch: main · 12.9 GB · 1 contributor · History: 2 commits
Latest commit: c7736b8 (verified, 9 months ago) by madgnu — "Upload folder using huggingface_hub"
Files (all uploaded in the same commit, "Upload folder using huggingface_hub", 9 months ago):

.gitattributes                      1.57 kB
README.md                           147 Bytes
chat_template.jinja                 1.26 kB
config.json                         1.46 kB
configuration_deepseek.py           12.7 kB
generation_config.json              163 Bytes
model-00001-of-00003.safetensors    5.32 GB
model-00002-of-00003.safetensors    5.37 GB
model-00003-of-00003.safetensors    2.19 GB
model.safetensors.index.json        84.3 kB
modelling_deepseek.py               67.4 kB
special_tokens_map.json             446 Bytes
tokenizer.json                      10.7 MB
tokenizer_config.json               2.17 kB
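As a quick sanity check on the listing above, the three safetensors shards should account for essentially all of the 12.9 GB repo total, with the small remainder coming from the tokenizer and config files. A minimal sketch, using only the sizes printed in the file list:

```python
# Shard sizes in GB, copied from the repository file listing.
shard_gb = {
    "model-00001-of-00003.safetensors": 5.32,
    "model-00002-of-00003.safetensors": 5.37,
    "model-00003-of-00003.safetensors": 2.19,
}

total_gb = sum(shard_gb.values())
print(f"{total_gb:.2f} GB")  # 12.88 GB — consistent with the 12.9 GB repo size
```

The ~0.02 GB gap is roughly the 10.7 MB tokenizer.json plus the remaining small files, so the listed sizes are internally consistent.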