stan-hua/Mistral-7B-Instruct-v0.3-AWQ-4bit
Tags: Text Generation · Transformers · Safetensors · mistral · conversational · text-generation-inference · 4-bit precision · awq
Branch: main · repository size 4.17 GB
1 contributor · History: 2 commits

Latest commit daba97d (verified, almost 2 years ago) by stan-hua:
AWQ model for mistralai/Mistral-7B-Instruct-v0.3: {'w_bit': 4, 'zero_point': True, 'q_group_size': 128, 'version': 'GEMM'}
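The quant config above (4-bit weights, per-group zero points, group size 128) roughly accounts for the checkpoint size listed below. A minimal back-of-the-envelope sketch, assuming Mistral-7B-Instruct-v0.3 has about 7.25B parameters, that embeddings and the LM head (vocab 32768, hidden size 4096) stay in fp16, and that AWQ stores one fp16 scale and one packed 4-bit zero point per group of 128 weights (typical AWQ layouts; not values taken from this repository):

```python
# Rough size estimate for a 4-bit AWQ checkpoint of Mistral-7B-Instruct-v0.3.
# Parameter counts are approximate assumptions for illustration only.

TOTAL_PARAMS = 7.25e9            # approximate total parameter count (assumption)
EMBED_PARAMS = 2 * 32768 * 4096  # embeddings + lm_head, assumed kept in fp16
GROUP_SIZE = 128                 # q_group_size from the quant config

linear_params = TOTAL_PARAMS - EMBED_PARAMS

packed_weights = linear_params * 4 / 8           # 4-bit packed weights
scales = linear_params / GROUP_SIZE * 2          # one fp16 scale per group
zeros = linear_params / GROUP_SIZE * 4 / 8       # one packed 4-bit zero per group
fp16_embeddings = EMBED_PARAMS * 2               # unquantized embedding weights

total_bytes = packed_weights + scales + zeros + fp16_embeddings
print(f"{total_bytes / 1e9:.2f} GB")  # close to the 4.16 GB model.safetensors below
```

Under these assumptions the estimate lands near the 4.16 GB reported for model.safetensors, which is consistent with roughly 4x compression over an fp16 checkpoint (~14.5 GB).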
Files (all flagged Safe, all almost 2 years old; every file except .gitattributes was added in the AWQ quantization commit above, .gitattributes in the initial commit):

.gitattributes             1.52 kB
config.json                927 Bytes
generation_config.json     132 Bytes
model.safetensors          4.16 GB    (stored via xet)
quant_config.json          90 Bytes
special_tokens_map.json    414 Bytes
tokenizer.json             1.96 MB
tokenizer.model            587 kB     (stored via xet)
tokenizer_config.json      137 kB