This repository contains a Hugging Face export of Llama-2-7b-hf quantized with AQLM using the 4-bit 4x8 scheme.
| Base model | Quantization | Scheme | Bits per weight | Group size |
|---|---|---|---|---|
| meta-llama/Llama-2-7b-hf | AQLM | 4x8 (4 codebooks, 8 bits per codebook) | 4-bit | 8 |

This repo was produced with `convert_to_hf.py` from the AQLM project, then exported with `--save_safetensors` and `--save_tokenizer`.
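In the 4x8 scheme, each quantized group of weights is reconstructed as the sum of entries from 4 codebooks, each selected by an 8-bit code, so a group of 8 weights costs 4 × 8 = 32 bits, i.e. 4 bits per weight. A minimal NumPy sketch of this additive-codebook idea (illustrative only; the group size of 8 and the array shapes are assumptions, not the exact AQLM storage layout):

```python
import numpy as np

rng = np.random.default_rng(0)

num_codebooks = 4      # the "4" in 4x8
bits_per_codebook = 8  # the "8" in 4x8 -> 2**8 = 256 entries per codebook
group_size = 8         # assumed number of weights per quantized group

# Each codebook maps an 8-bit code to a vector of `group_size` values.
codebooks = rng.standard_normal((num_codebooks, 2**bits_per_codebook, group_size))

# One quantized group is stored as 4 codes, one per codebook.
codes = rng.integers(0, 2**bits_per_codebook, size=num_codebooks)

# Dequantization: the group is the sum of the selected codebook entries.
group = sum(codebooks[c, codes[c]] for c in range(num_codebooks))
assert group.shape == (group_size,)

# Storage cost: 4 codebooks x 8-bit codes, amortized over 8 weights.
bits_per_weight = num_codebooks * bits_per_codebook / group_size
print(bits_per_weight)  # 4.0
```

The codebooks themselves are shared across many groups, so their storage overhead is negligible next to the per-group codes.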
Load the model with `transformers` (AQLM checkpoints additionally need the AQLM inference kernels, installable with `pip install aqlm[gpu,cpu]`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dbw6/Llama-2-7b-AQLM-4Bit-4x8-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",    # keep the checkpoint's dtype
    device_map="auto",     # place layers on available devices
    trust_remote_code=True,
)
```
Base model: meta-llama/Llama-2-7b-hf