Darcy-7b is a merge of several pre-trained language models, created using LazyMergekit.

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("solidrust/Darcy-7b-AWQ")
model = AutoModelForCausalLM.from_pretrained("solidrust/Darcy-7b-AWQ")
```
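A quick usage sketch continuing from the snippet above; the prompt and generation settings are illustrative, and the model should be moved to a CUDA device since AWQ inference kernels are GPU-only:

```python
# Move the model to the GPU (AWQ inference kernels require CUDA).
model = model.to("cuda")

# Tokenize a prompt and decode a short greedy completion.
inputs = tokenizer("What is AWQ quantization?", return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```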
AWQ is an efficient, accurate, and fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
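For context, this is roughly how a 4-bit AWQ checkpoint like this one is produced with the AutoAWQ library. It is a minimal sketch: the source model path and the `quant_config` values shown are illustrative assumptions, not the exact recipe behind this repository.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Hypothetical path to the unquantized merged model (assumption).
model_path = "path/to/Darcy-7b"
# Common AWQ settings: 4-bit weights, group size 128 (assumed, not confirmed).
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Calibrate and quantize the weights to 4-bit, then save the AWQ checkpoint.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized("Darcy-7b-AWQ")
tokenizer.save_pretrained("Darcy-7b-AWQ")
```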
AWQ models are currently supported on Linux and Windows, and require an NVIDIA GPU. macOS users should use GGUF models instead.
It is supported by inference stacks such as Transformers, AutoAWQ, vLLM, and Hugging Face Text Generation Inference (TGI). For example, with the Transformers pipeline helper:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="solidrust/Darcy-7b-AWQ")
```
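AWQ checkpoints can also be served outside Transformers. A minimal vLLM sketch follows; the prompt and sampling settings are illustrative:

```python
from vllm import LLM, SamplingParams

# vLLM detects AWQ from the checkpoint config; quantization="awq" makes it explicit.
llm = LLM(model="solidrust/Darcy-7b-AWQ", quantization="awq")

params = SamplingParams(temperature=0.7, max_tokens=64)
outputs = llm.generate(["What is AWQ quantization?"], params)
print(outputs[0].outputs[0].text)
```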