Load 4bit models 4x faster

Part of the collection: Native bitsandbytes 4bit pre quantized models. Because the weights are stored pre-quantized in 4-bit, the checkpoint is roughly a quarter the size of its 16-bit counterpart, which is where the ~4x faster download and load comes from.
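What "pre-quantized" buys you in practice: loading a full-precision checkpoint in 4-bit requires an explicit BitsAndBytesConfig and quantizes at load time, whereas this repo ships weights already in 4-bit, so no extra flags are needed. A minimal comparison sketch, using google/gemma-2b-it as the full-precision counterpart (illustrative, not from the card; bitsandbytes 4-bit needs a CUDA GPU):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# On-the-fly 4-bit quantization of the full-precision checkpoint
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")
model_quantized_on_load = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b-it", quantization_config=bnb_config
)

# Pre-quantized repo: the 4-bit quantization config ships with the weights
model_prequantized = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2b-it-bnb-4bit")
```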
How to use unsloth/gemma-2b-it-bnb-4bit with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="unsloth/gemma-2b-it-bnb-4bit")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2b-it-bnb-4bit")
model = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2b-it-bnb-4bit")
```
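To sanity-check the model after loading it directly, a minimal generation sketch; the prompt and max_new_tokens value below are illustrative, not part of the model card:

```python
# Continues from the "load model directly" snippet above
# (requires a CUDA GPU for bitsandbytes 4-bit)
messages = [{"role": "user", "content": "Explain 4-bit quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```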
How to use unsloth/gemma-2b-it-bnb-4bit with Unsloth Studio:

Linux/macOS:

```sh
# Install Unsloth Studio
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for unsloth/gemma-2b-it-bnb-4bit to start chatting
```
Windows (PowerShell):

```powershell
# Install Unsloth Studio
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for unsloth/gemma-2b-it-bnb-4bit to start chatting
```
Or use the hosted Space (no setup required): open https://huggingface.co/spaces/unsloth/studio in your browser and search for unsloth/gemma-2b-it-bnb-4bit to start chatting.
How to use unsloth/gemma-2b-it-bnb-4bit with Unsloth:

```sh
pip install unsloth
```
```python
from unsloth import FastModel

# Load the pre-quantized 4-bit model
model, tokenizer = FastModel.from_pretrained(
    model_name="unsloth/gemma-2b-it-bnb-4bit",
    max_seq_length=2048,
)
```

All notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM, or uploaded to Hugging Face. A script-level sketch of the same finetuning loop follows the table below.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Gemma 7b | ▶️ Start on Colab | 2.4x faster | 58% less |
| Mistral 7b | ▶️ Start on Colab | 2.2x faster | 62% less |
| Llama-2 7b | ▶️ Start on Colab | 2.2x faster | 43% less |
| TinyLlama | ▶️ Start on Colab | 3.9x faster | 74% less |
| CodeLlama 34b A100 | ▶️ Start on Colab | 1.9x faster | 27% less |
| Mistral 7b 1xT4 | ▶️ Start on Kaggle | 5x faster* | 62% less |
| DPO - Zephyr | ▶️ Start on Colab | 1.9x faster | 19% less |
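For readers who prefer a script to a notebook, a rough sketch of the LoRA finetuning loop the notebooks automate. It assumes FastModel.get_peft_model with the standard LoRA target modules and the classic TRL SFTTrainer signature used in the Unsloth notebooks (newer TRL versions move dataset_text_field and max_seq_length into SFTConfig); the toy dataset, LoRA settings, and hyperparameters are illustrative placeholders, not the notebooks' exact configuration:

```python
from unsloth import FastModel
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the pre-quantized 4-bit model
model, tokenizer = FastModel.from_pretrained(
    model_name="unsloth/gemma-2b-it-bnb-4bit",
    max_seq_length=2048,
)

# Attach LoRA adapters so only a small fraction of the weights is trained
model = FastModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Toy dataset with a "text" column; replace with your own data
dataset = Dataset.from_dict({"text": [
    "### Instruction: Say hi.\n### Response: Hi!",
    "### Instruction: Count to three.\n### Response: 1, 2, 3.",
]})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()

# Optional: export to GGUF with the unsloth helper (quantization method is illustrative)
model.save_pretrained_gguf("gguf_model", tokenizer, quantization_method="q4_k_m")
```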