---
license: mit
base_model:
- mistralai/Mistral-7B-Instruct-v0.1
pipeline_tag: text-generation
tags:
- int8
- quantized
- transformers
- mistral
- causal-lm
- research-only
model_name: Mistral-small-INT8
quantization: INT8 via bitsandbytes
total_parameters: 7 Billion
intended_use: Research, benchmarking, single-GPU inference
limitations: May produce unfiltered outputs; add safety layers if deployed
---
## 🧠 Model Overview
This is a quantized variant of the Mistral 7B (small) model using LLM.int8() quantization via bitsandbytes. It reduces the memory footprint while maintaining high generation quality, making it well suited to single-GPU inference, research benchmarks, and lightweight downstream applications.
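As a rough back-of-envelope estimate, INT8 roughly halves weight memory relative to FP16 (weights only; activations, KV cache, and framework overhead come on top):

```python
# Rough weight-memory estimate for a ~7B-parameter model (weights only).
params = 7e9
print(f"FP16 weights: ~{params * 2 / 1e9:.0f} GB")  # 2 bytes/param -> ~14 GB
print(f"INT8 weights: ~{params * 1 / 1e9:.0f} GB")  # 1 byte/param  -> ~7 GB
```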
## 🔧 Model Specs
- Total Parameters: ~7 Billion
- Precision: INT8 with FP32 CPU offload
- Quantization Threshold: 6.0
- Device Map: `auto` (compatible with CUDA / CPU offloading; see the offload sketch after this list)
- Tokenizer: Byte-level BPE
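When VRAM is tight, FP32 CPU offload lets `device_map="auto"` spill layers to system RAM. A minimal sketch, assuming an illustrative memory budget (the `max_memory` values are placeholders, not requirements of this model):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,  # offloaded modules run in FP32 on CPU
)

model = AutoModelForCausalLM.from_pretrained(
    "ParveshRawal/mistral-small-int8",
    quantization_config=quant_config,
    device_map="auto",
    # Assumed budget for illustration: cap GPU 0 at 6 GiB, allow 24 GiB of RAM.
    max_memory={0: "6GiB", "cpu": "24GiB"},
)
print(f"Footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```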
## 🚀 Usage Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "ParveshRawal/mistral-small-int8"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# LLM.int8() config: activation outliers above the threshold are kept in
# higher precision, and non-quantizable modules may be offloaded to CPU in FP32.
quant_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=True,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # let accelerate place layers across GPU/CPU
    quantization_config=quant_config,
)

inputs = tokenizer("Tell me something about IndiaAI.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
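For interactive use, token-by-token streaming can be layered on with transformers' `TextStreamer`; a minimal sketch reusing `model`, `tokenizer`, and `inputs` from above:

```python
from transformers import TextStreamer

# Print tokens as they are generated instead of waiting for the full output.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, max_new_tokens=100, streamer=streamer)
```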