# IBDPLab/EXAONE-4.0-1.2B-W4A16-GPTQ

A quantized release by the Intelligence Big Data Processing Laboratory (IBDPLab).

License: exaone-ai-model-license-agreement-1.1
## Quantization Details

- **Base Model**: LGAI-EXAONE/EXAONE-4.0-1.2B
- **Method**: GPTQ W4A16
- **Group Size**: 128
- **Calibration Dataset**: LGAI-EXAONE/MANTA-1M (512 samples)
- **Tool**: llmcompressor
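To make the details above concrete, here is a minimal NumPy sketch of group-wise W4A16 weight quantization with group size 128: weights are mapped to signed 4-bit integers with one scale per group of 128, while activations stay in 16-bit. This is an illustration only; actual GPTQ (as run by llmcompressor) uses error-compensated rounding driven by the calibration data, not the plain round-to-nearest shown here.

```python
import numpy as np

GROUP_SIZE = 128  # group size reported on this card

def quantize_w4_groupwise(w: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Quantize a 1-D float weight vector to signed int4 ([-8, 7])
    with one symmetric scale per group of GROUP_SIZE weights."""
    groups = w.reshape(-1, GROUP_SIZE)
    # Symmetric scale: map the largest magnitude in each group to 7.
    scales = np.abs(groups).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Reconstruct approximate float weights from int4 values and scales."""
    return (q.astype(np.float32) * scales).reshape(-1)

# Demo on a random weight vector of 8 groups (1024 weights).
w = np.random.default_rng(0).normal(size=1024).astype(np.float32)
q, scales = quantize_w4_groupwise(w)
w_hat = dequantize(q, scales)
max_err = np.abs(w - w_hat).max()  # bounded by half a scale step per group
```

The per-group scale is why group size matters: smaller groups track local weight magnitudes more tightly (lower error, more scale overhead), and 128 is a common middle ground.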
## Usage

```python
from vllm import LLM

llm = LLM(model="IBDPLab/EXAONE-4.0-1.2B-W4A16-GPTQ")
```
**Model size**: 2B params (Safetensors; tensor types: I64, I32, BF16)
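A note on why a 4-bit checkpoint lists I32 tensors: packed-weight formats typically store eight 4-bit values per 32-bit word. The sketch below shows that bookkeeping in NumPy; the actual bit layout used by compressed-tensors may differ, so treat this as an illustration of the 8-to-1 packing, not the on-disk format.

```python
import numpy as np

def pack_int4(q: np.ndarray) -> np.ndarray:
    """Pack signed int4 values ([-8, 7]) into uint32 words, 8 per word,
    low nibble first. Length of q must be a multiple of 8."""
    # Two's-complement nibbles: -8..-1 become 8..15.
    u = (q.astype(np.int64) & 0xF).reshape(-1, 8)
    shifts = 4 * np.arange(8, dtype=np.int64)
    return (u << shifts).sum(axis=1).astype(np.uint32)

def unpack_int4(packed: np.ndarray) -> np.ndarray:
    """Inverse of pack_int4: recover signed int4 values from 32-bit words."""
    shifts = 4 * np.arange(8, dtype=np.int64)
    nibbles = (packed[:, None].astype(np.int64) >> shifts) & 0xF
    # Sign-extend 4-bit two's complement back to signed values.
    return np.where(nibbles >= 8, nibbles - 16, nibbles).astype(np.int8).reshape(-1)
```

This is also why the parameter count reported by the hub widget can differ from the base model's 1.2B: packed integer words and per-group scale tensors are counted differently from the original BF16 weights.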