dmmagdal/falcon_7b_GPTQ_4-bit
Tags: Text Generation · Transformers · PyTorch · falcon · custom_code · text-generation-inference · 4-bit precision · gptq
License: creativeml-openrail-m
main · falcon_7b_GPTQ_4-bit / tokenizer.json
dmmagdal — First commit. Uploading model and tokenizer files for the falcon-7b model quantized to 4-bit with GPTQ from auto-gptq.
a9f7ec0 · over 2 years ago
2.73 MB
File too large to display; check the raw version instead.