ianZzzzzz/GLM-130B-quant-int4-4gpu
An int4 quantized version of the GLM-130B model that can run inference on four RTX 3090 Ti GPUs.
license: apache-2.0
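As a rough sanity check (not part of the original card), the int4 weights of a 130-billion-parameter model should fit comfortably across four 24 GB cards; the arithmetic below makes the assumption explicit that 4-bit quantization stores each parameter in half a byte:

```python
# Back-of-the-envelope check that int4 weights for a 130B-parameter
# model fit in 4 x 24 GB of GPU memory (weights only; activations and
# KV cache need additional headroom).
PARAMS = 130e9          # GLM-130B parameter count
BYTES_PER_PARAM = 0.5   # int4 = 4 bits = 0.5 bytes per weight
GPU_MEM_GB = 24         # RTX 3090 Ti memory (treating 1 GB = 1e9 bytes)
NUM_GPUS = 4

total_gb = PARAMS * BYTES_PER_PARAM / 1e9
per_gpu_gb = total_gb / NUM_GPUS
print(f"total weights: {total_gb:.1f} GB, per GPU: {per_gpu_gb:.2f} GB")
# → total weights: 65.0 GB, per GPU: 16.25 GB
assert per_gpu_gb < GPU_MEM_GB
```

About 16 GB of weights per 24 GB card leaves roughly 8 GB per GPU for activations and the KV cache, which is consistent with the card's claim that inference is feasible on this hardware.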
Contact: iannobug@gmail.com