---
license: gemma
base_model: google/gemma-2-2b
library_name: transformers
tags:
- text-generation
- gemma2
- local-inference
- bitsandbytes
- fine-tuned
pipeline_tag: text-generation
---

# Gemma-2-Racer

`gemma2racer` is a specialized optimization of Google's **Gemma 2** architecture. This model is fine-tuned and configured specifically for "racing" performance: prioritizing high-speed token generation and low memory overhead for local LLM deployment.

---

## Model Summary

The following table outlines the core technical specifications for the Gemma-2-Racer model.

| Feature | Details |
| :--- | :--- |
| **Developed by** | [Rabimba Karanjai](https://huggingface.co/rabimba) |
| **Model Type** | Causal Language Model (Transformer-based) |
| **Base Model** | [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) |
| **Architecture** | Gemma-2 |
| **Optimization Strategy** | 4-bit quantization via BitsAndBytes, plus optional `torch.compile` |
| **Primary Language** | English |
| **License** | [Gemma Terms of Use](https://ai.google.dev/gemma/terms) |

---

## Intended Use

This model is designed for developers and researchers who need strong performance on consumer-grade hardware. It is specifically optimized for:

* **Real-time Interaction:** Minimized "Time To First Token" (TTFT) for chat applications.
* **Local Privacy:** Small enough to run entirely offline on standard laptops or edge devices.
* **Efficient Inference:** Fits in roughly 2-4 GB of VRAM, depending on your quantization settings.
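
The VRAM range above follows from simple arithmetic on the base model's roughly 2.6B parameters. A back-of-envelope sketch (weights only; real usage also includes the KV cache, activations, and framework overhead):

```python
# Rough weight-memory estimate for a ~2.6B-parameter model at different
# precisions. Weights only: actual VRAM usage adds the KV cache,
# activations, and CUDA/framework overhead.

PARAMS = 2.6e9  # approximate Gemma-2-2b parameter count

def weight_gb(bits_per_param: float) -> float:
    """Memory for the weights alone, in gigabytes."""
    return PARAMS * bits_per_param / 8 / 1e9

print(f"bf16 : {weight_gb(16):.1f} GB")  # ~5.2 GB: too big for a 4 GB card
print(f"int8 : {weight_gb(8):.1f} GB")   # ~2.6 GB
print(f"4-bit: {weight_gb(4):.1f} GB")   # ~1.3 GB, leaving headroom for the KV cache
```

This is why the 4-bit BitsAndBytes load is the default "Racer" configuration: it brings the weights comfortably under the 2 GB floor of the stated range.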

---

## Quickstart Guide

To get the model running with the "Racer" performance presets, follow these steps:

1. **Install Requirements:**
   Update your environment with the necessary libraries for quantization and acceleration.

   ```bash
   pip install -U transformers accelerate bitsandbytes
   ```

2. **Login to Hugging Face:**
   Ensure you have accepted the Gemma license on the official Google repository and authenticate locally.

   ```bash
   huggingface-cli login
   ```

3. **Python Implementation:**
   Use the following code snippet to load the model in its optimized 4-bit state. Quantization settings are passed through `BitsAndBytesConfig` (the bare `load_in_4bit=True` argument is deprecated in recent `transformers` releases).

   ```python
   import torch
   from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

   model_id = "rabimba/gemma2racer"

   tokenizer = AutoTokenizer.from_pretrained(model_id)
   model = AutoModelForCausalLM.from_pretrained(
       model_id,
       device_map="auto",
       quantization_config=BitsAndBytesConfig(
           load_in_4bit=True,
           bnb_4bit_compute_dtype=torch.bfloat16,
       ),
   )

   prompt = "Explain quantum physics like I'm a race car driver."
   # model.device works whether accelerate placed the model on GPU or CPU
   inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

   outputs = model.generate(**inputs, max_new_tokens=150)
   print(tokenizer.decode(outputs[0], skip_special_tokens=True))
   ```

---

## Performance Profiles

The "Racer" moniker refers to the model's ability to be tuned for different hardware constraints:

* **The Speedster (Linux/CUDA):** After loading, use `model = torch.compile(model)` to leverage kernel fusion for significantly higher throughput.
* **The Daily Driver (Standard GPU):** Standard 4-bit loading via BitsAndBytes balances speed with the full quality of the 2.6B-parameter model.
* **The Endurance Run (Low VRAM):** Can run with heavy CPU offloading via `accelerate` on systems with limited or no dedicated graphics memory.
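
The Endurance Run profile can be expressed as a `max_memory` cap passed to `from_pretrained`, which `accelerate` uses to decide which layers stay on the GPU and which spill to system RAM. A minimal sketch, assuming the standard `device_map`/`max_memory` arguments from the transformers and accelerate docs (the memory values are illustrative, not tuned):

```python
def endurance_kwargs(gpu_gib: int = 2, cpu_gib: int = 8) -> dict:
    """Build from_pretrained keyword arguments that cap GPU memory
    and let accelerate offload the remaining layers to system RAM.
    The GiB values here are illustrative placeholders."""
    return {
        "device_map": "auto",      # accelerate decides layer placement
        "load_in_4bit": True,      # 4-bit weights (or pass a BitsAndBytesConfig)
        "max_memory": {0: f"{gpu_gib}GiB", "cpu": f"{cpu_gib}GiB"},
    }

# Usage: AutoModelForCausalLM.from_pretrained(model_id, **endurance_kwargs())
print(endurance_kwargs())
```

Layers that do not fit under the GPU cap run from CPU memory, so generation is slower but still works on machines without a large dedicated graphics card.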

---

## Limitations and Ethical Considerations

* **Accuracy:** Like all large language models, this model may hallucinate. Users should verify critical information.
* **Bias:** This model inherits biases present in the Gemma-2 base training data.
* **Safety:** While safety filters are present, it is recommended that users implement their own moderation layers for public-facing deployments.

---

## Citation

If you use this model in your research or commercial projects, please cite it as follows:

```bibtex
@misc{gemma2racer2024,
  author       = {Rabimba Karanjai},
  title        = {Gemma-2-Racer: Optimized Local Inference},
  year         = {2024},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/rabimba/gemma2racer}}
}
```