Tags: Text Generation · Transformers · gpt2 · 4-bit precision · gptq
BrokenSoul/GPT2-GPTQ-4bit
This is a GPT-2 model quantized to 4-bit precision with GPTQ, following the tutorial "4-bit LLM Quantization with GPTQ".
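A minimal sketch of loading this quantized checkpoint for text generation, assuming the `transformers` library with GPTQ support (the `optimum` and `auto-gptq`/`gptqmodel` packages) is installed; the prompt and `max_new_tokens` value are illustrative, not part of the model card.

```python
MODEL_ID = "BrokenSoul/GPT2-GPTQ-4bit"

def generate(prompt: str, max_new_tokens: int = 40) -> str:
    """Generate a continuation of `prompt` with the 4-bit GPTQ model."""
    # Imported lazily so the sketch carries no import cost until called.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # GPTQ weights are dequantized on the fly by the backend kernels;
    # device_map="auto" places the model on GPU if one is available.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("The meaning of life is"))
```

Because GPT-2 is small, the 4-bit checkpoint is mainly a demonstration of the GPTQ workflow rather than a memory-saving necessity.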
Downloads last month: 8