frankdarkluo/DeepSeek-R1-Distill-Qwen-7B-GPTQ-Int8
Tags: Text Generation · Transformers · Safetensors · qwen2 · conversational · text-generation-inference · 8-bit precision · gptq · License: mit
Branch: main · 8.88 GB · 1 contributor · History: 4 commits
Latest commit: e09c387 (verified) · "Update README.md" by frankdarkluo · about 2 months ago
.gitattributes                     1.57 kB    Upload folder using huggingface_hub    2 months ago
README.md                          1.44 kB    Update README.md                       about 2 months ago
chat_template.jinja                2.25 kB    Upload folder using huggingface_hub    2 months ago
config.json                        1.96 kB    Upload folder using huggingface_hub    2 months ago
generation_config.json             181 Bytes  Upload folder using huggingface_hub    2 months ago
model-00001-of-00003.safetensors   4.26 GB    Upload folder using huggingface_hub    2 months ago
model-00002-of-00003.safetensors   4.29 GB    Upload folder using huggingface_hub    2 months ago
model-00003-of-00003.safetensors   309 MB     Upload folder using huggingface_hub    2 months ago
model.safetensors.index.json       75.4 kB    Upload folder using huggingface_hub    2 months ago
quant_log.csv                      9.07 kB    Upload folder using huggingface_hub    2 months ago
quantize_config.json               542 Bytes  Upload folder using huggingface_hub    2 months ago
special_tokens_map.json            371 Bytes  Upload folder using huggingface_hub    2 months ago
tokenizer.json                     11.4 MB    Upload folder using huggingface_hub    2 months ago
tokenizer_config.json              4.49 kB    Upload folder using huggingface_hub    2 months ago