GAIR/autoj-13b-GPTQ-4bits (a model by SII - GAIR)
Tags: Text Generation · Transformers · Safetensors · English · llama · text-generation-inference · 4-bit precision · gptq
Repository size: 7.92 GB · 2 contributors · 3 commits
Latest commit: 8cf4e9b "First upload" by lockon-n, over 2 years ago
Files:
  .gitattributes            1.52 kB    initial commit   over 2 years ago
  README.md                 788 Bytes  First upload     over 2 years ago
  added_tokens.json         3.7 kB     First upload     over 2 years ago
  config.json               1 kB       First upload     over 2 years ago
  constants_prompt.py       2.1 kB     First upload     over 2 years ago
  example_gptq4bits.py      810 Bytes  First upload     over 2 years ago
  model.safetensors         7.92 GB    First upload     over 2 years ago
  quantize_config.json      209 Bytes  First upload     over 2 years ago
  special_tokens_map.json   438 Bytes  First upload     over 2 years ago
  tokenizer.json            1.87 MB    First upload     over 2 years ago
  tokenizer_config.json     698 Bytes  First upload     over 2 years ago