pythia-6.9b - GPTQ (8-bit)

Source model: EleutherAI/pythia-6.9b

This model was quantized to 8-bit with the GPTQ algorithm using the GPTQModel library.

Quantization parameters:

  • bits: 8
  • group_size: 128
  • damp_percent: 0.05
  • desc_act: False

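As a rough illustration of what the `bits` and `group_size` parameters above mean, the sketch below performs plain group-wise round-to-nearest 8-bit quantization in pure Python. This is a simplification, not the GPTQ algorithm itself: GPTQ additionally corrects rounding error column-by-column using second-order (Hessian) information, which is what `damp_percent` stabilizes. All names here (`quantize_group`, `quantize_row`, the toy weight row) are illustrative.

```python
# Minimal sketch of group-wise 8-bit quantization (NOT full GPTQ, which
# also applies Hessian-based error correction; damp_percent dampens that
# Hessian). Each group of `group_size` weights gets its own scale and
# zero-point, mirroring group_size=128 in the config above.

def quantize_group(weights, bits=8):
    """Asymmetric round-to-nearest quantization of one group of weights."""
    qmax = (1 << bits) - 1                       # 255 for 8-bit
    wmin, wmax = min(weights), max(weights)
    scale = (wmax - wmin) / qmax or 1.0          # guard against zero range
    zero = round(-wmin / scale)                  # integer zero-point
    q = [max(0, min(qmax, round(w / scale) + zero)) for w in weights]
    deq = [(v - zero) * scale for v in q]        # reconstructed values
    return q, scale, zero, deq

def quantize_row(row, group_size=128, bits=8):
    """Quantize a weight row in independent groups of `group_size`."""
    return [quantize_group(row[i:i + group_size], bits)
            for i in range(0, len(row), group_size)]

row = [0.013 * ((i * 37) % 19 - 9) for i in range(256)]   # toy weights
groups = quantize_row(row, group_size=128, bits=8)
deq = [v for (_, _, _, d) in groups for v in d]
err = max(abs(a - b) for a, b in zip(row, deq))
print(f"{len(groups)} groups, max abs reconstruction error {err:.6f}")
```

With 256 toy weights and `group_size=128`, two groups are produced, and the per-weight reconstruction error stays within about half a quantization step. Larger groups amortize the scale/zero-point storage overhead; smaller groups track local weight ranges more tightly.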
Usage

    # pip install transformers gptqmodel --no-build-isolation
    from gptqmodel import GPTQModel

    model_id = "iproskurina/pythia-6.9b-gptqmodel-8bit"
    model = GPTQModel.load(model_id)
    result = model.generate("Uncovering deep insights")[0]
    print(model.tokenizer.decode(result))
Checkpoint

  • Format: Safetensors
  • Model size: 7B params
  • Tensor types: F16 · I32 · F32