Quantization

This model is part of a collection of quantized models, all of which can be fine-tuned by adding a LoRA adapter (see the sketch at the end of this card).
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("shuyuej/Command-R-GPTQ")
model = AutoModelForCausalLM.from_pretrained("shuyuej/Command-R-GPTQ")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the tokens generated after the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

Original Base Model: CohereForAI/c4ai-command-r-v01.
Link: https://huggingface.co/CohereForAI/c4ai-command-r-v01
The GPTQ quantization configuration:

```json
{
    "bits": 4,
    "group_size": 128,
    "damp_percent": 0.01,
    "desc_act": true,
    "static_groups": false,
    "sym": true,
    "true_sequential": true,
    "model_name_or_path": null,
    "model_file_base_name": null,
    "quant_method": "gptq",
    "checkpoint_format": "gptq"
}
```
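Because this configuration ships with the checkpoint, `transformers` picks it up automatically at load time. A minimal sketch to confirm the settings after loading (assumes `transformers` with GPTQ support and `accelerate` are installed):

```python
# Sketch: inspect the quantization settings embedded in the checkpoint.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("shuyuej/Command-R-GPTQ", device_map="auto")

# Should report 4-bit GPTQ with group_size 128, matching the JSON above.
print(model.config.quantization_config)
```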
Source code: https://github.com/vkola-lab/medpodgpt/tree/main/quantization.
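The full quantization pipeline used for this checkpoint lives in the repository linked above. A rough equivalent using the `GPTQConfig` API in `transformers` might look like the sketch below; the calibration dataset is an assumption (not stated in this card), and quantizing a model of this size requires substantial GPU memory:

```python
# Sketch: quantizing the base model with settings matching the JSON above.
# This is a generic transformers-based approximation, not the authors' pipeline.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base = "CohereForAI/c4ai-command-r-v01"
tokenizer = AutoTokenizer.from_pretrained(base)

gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    damp_percent=0.01,
    desc_act=True,
    sym=True,
    true_sequential=True,
    dataset="c4",          # assumed calibration set
    tokenizer=tokenizer,
)

model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=gptq_config, device_map="auto"
)
model.save_pretrained("Command-R-GPTQ")
```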
You can also run the model through the high-level pipeline API:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="shuyuej/Command-R-GPTQ")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
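As noted in the collection description, the quantized model can be fine-tuned by attaching a LoRA adapter. A minimal sketch using `peft`; the rank, alpha, and target module names are illustrative assumptions, not values from this repository:

```python
# Sketch: attach a LoRA adapter to the quantized model with peft.
# Assumes peft and a GPTQ backend are installed; hyperparameters are illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = AutoModelForCausalLM.from_pretrained("shuyuej/Command-R-GPTQ", device_map="auto")
model = prepare_model_for_kbit_training(model)  # freeze base weights, enable input grads

lora_config = LoraConfig(
    r=16,                     # assumed rank
    lora_alpha=32,            # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

From here the wrapped model can be trained with a standard training loop; only the adapter weights update while the 4-bit base model stays frozen.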