---
license: gemma
base_model: google/gemma-2-2b
library_name: transformers
tags:
- text-generation
- gemma2
- local-inference
- bitsandbytes
- fine-tuned
pipeline_tag: text-generation
---

# Gemma-2-Racer

`gemma2racer` is a performance-focused build of Google's **Gemma 2** architecture. The model is fine-tuned and configured for "racing": prioritizing high-speed token generation and low memory overhead for local LLM deployment.

---

## Model Summary

The following table outlines the core technical specifications for the Gemma-2-Racer model.

| Feature | Details |
| :--- | :--- |
| **Developed by** | [Rabimba Karanjai](https://huggingface.co/rabimba) |
| **Model Type** | Causal Language Model (Transformer-based) |
| **Base Model** | [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) |
| **Architecture** | Gemma-2 |
| **Optimization Strategy** | 4-bit quantization via `bitsandbytes`, with optional `torch.compile` |
| **Primary Language** | English |
| **License** | [Gemma Terms of Use](https://ai.google.dev/gemma/terms) |

---

## Intended Use

This model is designed for developers and researchers who need fast local inference on consumer-grade hardware. It is specifically optimized for:

* **Real-time Interaction:** Minimized "Time To First Token" (TTFT) for chat applications.
* **Local Privacy:** Small enough to run entirely offline on standard laptops or edge devices.
* **Efficient Inference:** Optimized to fit in 2–4 GB of VRAM, depending on quantization settings.
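
As rough arithmetic behind those VRAM figures (a sketch that ignores activation and KV-cache overhead), the weight footprint of a 2.6B-parameter model at different precisions can be estimated as:

```python
def weight_footprint_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB (ignores activations and KV cache)."""
    return n_params * bits_per_param / 8 / 1e9

N = 2.6e9  # Gemma-2-2b parameter count

print(f"bf16 : {weight_footprint_gb(N, 16):.1f} GB")  # ~5.2 GB
print(f"8-bit: {weight_footprint_gb(N, 8):.1f} GB")   # ~2.6 GB
print(f"4-bit: {weight_footprint_gb(N, 4):.1f} GB")   # ~1.3 GB
```

The 4-bit figure leaves headroom inside a 2 GB budget for the KV cache and activations, which is where the lower end of the stated range comes from.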

---

## Quickstart Guide

To get the model running with the "Racer" performance presets, follow these steps:

1.  **Install Requirements:**
    Update your environment with the necessary libraries for quantization and acceleration.
    ```bash
    pip install -U transformers accelerate bitsandbytes
    ```

2.  **Login to Hugging Face:**
    Ensure you have accepted the Gemma license on the official Google repository and authenticate locally.
    ```bash
    huggingface-cli login
    ```

3.  **Python Implementation:**
    Use the following code snippet to load the model in its optimized 4-bit state.
    ```python
    from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
    import torch

    model_id = "rabimba/gemma2racer"

    # NF4 quantization keeps the weights at ~4 bits while matmuls
    # run in bfloat16. (Passing load_in_4bit directly to
    # from_pretrained is deprecated in recent transformers releases.)
    quantization_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    )

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",
        quantization_config=quantization_config,
    )

    prompt = "Explain quantum physics like I'm a race car driver."
    # Use model.device rather than hard-coding "cuda" so the snippet
    # also works when device_map places the model elsewhere.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    outputs = model.generate(**inputs, max_new_tokens=150)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```

---

## Performance Profiles

The "Racer" moniker refers to the model's ability to be tuned for different hardware constraints:

* **The Speedster (Linux/CUDA):** After loading, use `model = torch.compile(model)` to fuse kernels for significantly higher throughput. The first generation call is slower while compilation runs; subsequent calls benefit.
* **The Daily Driver (Standard GPU):** Standard 4-bit loading via `bitsandbytes` balances speed against the quality of the 2.6B-parameter base.
* **The Endurance Run (Low VRAM):** Can be run with heavy CPU offloading via `accelerate` for systems with limited or no dedicated graphics memory.
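
The offloading profile above can be sketched as loading options. This is a sketch under assumptions: the 2 GiB / 16 GiB budgets are illustrative rather than measured, and the heavyweight calls are left commented because they download the model weights.

```python
def endurance_max_memory(gpu_gib: int, cpu_gib: int) -> dict:
    # Per-device budgets understood by accelerate's "auto" dispatcher;
    # layers that exceed the GPU budget are offloaded to system RAM.
    return {0: f"{gpu_gib}GiB", "cpu": f"{cpu_gib}GiB"}

# The Endurance Run: cap GPU memory and spill the rest to the CPU.
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "rabimba/gemma2racer",
#     device_map="auto",
#     max_memory=endurance_max_memory(2, 16),
# )

# The Speedster: fuse kernels after loading. The first call is slow
# while compilation runs; later calls are faster.
# import torch
# model = torch.compile(model)
```

Offloaded layers are executed on the CPU, so this profile trades throughput for the ability to run at all on low-VRAM systems.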

---

## Limitations and Ethical Considerations

* **Accuracy:** Like all large language models, this model may hallucinate. Users should verify critical information.
* **Bias:** This model inherits biases present in the Gemma-2 base training data.
* **Safety:** While safety filters are present, it is recommended that users implement their own moderation layers for public-facing deployments.

---

## Citation

If you use this model in your research or commercial projects, please cite it as follows:

```bibtex
@misc{gemma2racer2024,
  author = {Rabimba Karanjai},
  title = {Gemma-2-Racer: Optimized Local Inference},
  year = {2024},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/rabimba/gemma2racer}}
}
```