|
|
---
license: gemma
language:
- en
pipeline_tag: text-generation
tags:
- litert
- litert-lm
- gemma
- agent
- tool-calling
- multimodal
- on-device
library_name: litert-lm
---
|
|
|
|
|
# Agent Gemma 3n E2B (LiteRT-LM Fixed) |
|
|
|
|
|
This is a **fixed and working version** of the Gemma 3n E2B Agent model in LiteRT-LM format (.litertlm). The original model had a corrupted tokenizer configuration that prevented it from loading. This version has been rebuilt with a working SentencePiece tokenizer while preserving all agent capabilities. |
|
|
|
|
|
## Model Details |
|
|
|
|
|
- **Base Model**: Gemma 3n E2B |
|
|
- **Format**: LiteRT-LM v1.4.0 |
|
|
- **Quantization**: INT4 |
|
|
- **Size**: ~3.2GB |
|
|
- **Capabilities**: |
|
|
- Text generation |
|
|
- Tool/function calling (via Jinja template) |
|
|
- Multimodal (vision and audio support) |
|
|
- Optimized for on-device inference
|
|
|
|
|
## What Was Fixed |
|
|
|
|
|
The original agent-gemma model (`gemma-3n-E2B-it-agent-tools.litertlm`) contained a corrupted HuggingFace tokenizer JSON configuration that caused the following error when loading: |
|
|
|
|
|
```
thread '<unnamed>' panicked at external/tokenizers_cpp/rust/src/lib.rs:26:50:
called `Result::unwrap()` on an `Err` value: Error("expected value", line: 2, column: 1)
```
|
|
|
|
|
### Root Cause |
|
|
|
|
|
During manual extraction and repacking of the .litertlm file using C++ peek/writer tools, the HuggingFace tokenizer's JSON metadata became malformed. |
|
|
|
|
|
### Solution |
|
|
|
|
|
1. **Extracted all model sections** from the corrupted agent-gemma model: |
|
|
- LlmMetadata (including Agent Gemma Jinja template) |
|
|
- 7 TFLite model components (embedder, per-layer embedder, audio encoder, vision encoder, etc.) |
|
|
|
|
|
2. **Replaced the tokenizer**: Extracted the working SentencePiece tokenizer from the standard gemma-3n-E2B model |
|
|
|
|
|
3. **Rebuilt the model** using LiteRT-LM's official `litertlm_builder` tool with proper section alignment and metadata |
|
|
|
|
|
## Model Architecture |
|
|
|
|
|
The model consists of 9 sections: |
|
|
|
|
|
```
Section 0: LlmMetadata (includes Jinja prompt template for tool calling)
Section 1: SentencePiece Tokenizer
Section 2: TFLite Embedder
Section 3: TFLite Per-Layer Embedder
Section 4: TFLite Audio Encoder (HW)
Section 5: TFLite End-of-Audio Detector
Section 6: TFLite Vision Adapter
Section 7: TFLite Vision Encoder
Section 8: TFLite Prefill/Decode
```
|
|
|
|
|
## Agent Capabilities |
|
|
|
|
|
This model includes a comprehensive Jinja template for tool/function calling that supports: |
|
|
|
|
|
- Tool declarations |
|
|
- Function calls with arguments |
|
|
- Function responses |
|
|
- Multi-turn conversations with tool interactions |
|
|
- System/developer prompts |
|
|
- Image inputs (via `<start_of_image>` tokens) |
|
|
|
|
|
Example tool call format: |
|
|
```
<start_function_call>call:function_name{arg1:value1,arg2:value2}<end_function_call>
```
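For illustration, a tool-calling exchange might look like the following. The `get_weather` tool and its argument are hypothetical, and the turn markers follow the standard Gemma conversation format; the exact rendering is produced by the bundled Jinja template:

```
<start_of_turn>user
What is the weather in Paris right now?<end_of_turn>
<start_of_turn>model
<start_function_call>call:get_weather{location:Paris}<end_function_call>
```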
|
|
|
|
|
## Performance |
|
|
|
|
|
Tested on CPU (no GPU acceleration): |
|
|
|
|
|
- **Prefill Speed**: 21.20 tokens/sec |
|
|
- **Decode Speed**: 11.44 tokens/sec |
|
|
- **Time to First Token**: ~1.6s |
|
|
- **Initialization**: ~4.7s |
|
|
|
|
|
## Usage |
|
|
|
|
|
### Requirements |
|
|
|
|
|
1. **LiteRT-LM runtime** - Build from source: |
|
|
```bash
git clone https://github.com/google-ai-edge/LiteRT.git
cd LiteRT/LiteRT-LM
bazel build -c opt //runtime/engine:litert_lm_main
```
|
|
|
|
|
2. **Supported platforms**: Linux (clang), macOS, Android |
|
|
|
|
|
### Running the Model |
|
|
|
|
|
```bash
# Basic inference
./bazel-bin/runtime/engine/litert_lm_main \
  --model_path=gemma-3n-E2B-it-agent-fixed.litertlm \
  --backend=cpu \
  --input_prompt="Hello, how are you?"

# With GPU acceleration (if available)
./bazel-bin/runtime/engine/litert_lm_main \
  --model_path=gemma-3n-E2B-it-agent-fixed.litertlm \
  --backend=gpu \
  --input_prompt="Write a function to calculate fibonacci numbers"
```
|
|
|
|
|
### Example Output |
|
|
|
|
|
```
input_prompt: Hello, how are you today?
I am doing well, thank you for asking! As a large language model, I don't
experience emotions like humans do, but I'm functioning optimally and ready
to assist you. How can I help you today?<end_of_turn>
```
|
|
|
|
|
## Building the Fixed Model (Technical Details) |
|
|
|
|
|
If you need to rebuild or modify the model, here's the process: |
|
|
|
|
|
### 1. Extract Sections |
|
|
|
|
|
```python
#!/usr/bin/env python3

def extract_section(input_file, start, end, output_file):
    """Copy the byte range [start, end) from input_file into output_file."""
    with open(input_file, 'rb') as f:
        f.seek(start)
        data = f.read(end - start)
    with open(output_file, 'wb') as f:
        f.write(data)

# Extract from the agent model (all sections except the tokenizer)
agent_model = "gemma-3n-E2B-it-agent-tools.litertlm"
extract_section(agent_model, 16384, 23334, "metadata.pb")
extract_section(agent_model, 2293760, 273878864, "embedder.tflite")
# ... (extract remaining TFLite sections)

# Extract the working tokenizer from the standard gemma model
working_model = "gemma-3n-E2B-it-int4.litertlm"
extract_section(working_model, 32768, 4716087, "tokenizer.model")
```
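After extraction, it is worth sanity-checking that each TFLite section was cut at the right byte offsets. Valid TFLite files are FlatBuffers carrying the file identifier `TFL3` at byte offset 4, so a quick header check catches off-by-one extraction errors. A minimal sketch (the file name matches the extraction step above):

```python
def looks_like_tflite(path):
    # TFLite FlatBuffer files embed the identifier "TFL3" at bytes 4-8.
    with open(path, 'rb') as f:
        header = f.read(8)
    return header[4:8] == b"TFL3"

assert looks_like_tflite("embedder.tflite"), "embedder.tflite: bad offsets?"
```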
|
|
|
|
|
### 2. Create TOML Configuration |
|
|
|
|
|
```toml
[system_metadata]
entries = [
  { key = "author", value_type = "String", value = "The ODML Authors" }
]

[[section]]
section_type = "LlmMetadata"
data_path = "metadata.pb"

[[section]]
section_type = "SP_Tokenizer"
data_path = "tokenizer.model"

[[section]]
section_type = "TFLiteModel"
model_type = "EMBEDDER"
data_path = "embedder.tflite"

# ... (add remaining sections)
```
|
|
|
|
|
### 3. Build with litertlm_builder |
|
|
|
|
|
```bash
bazel run //schema/py:litertlm_builder_cli -- \
  toml --path config.toml \
  output --path gemma-3n-E2B-it-agent-fixed.litertlm
```
|
|
|
|
|
## Verification |
|
|
|
|
|
Check the model structure: |
|
|
|
|
|
```bash
bazel run //schema/cc:litertlm_peek -- \
  --litertlm_file=gemma-3n-E2B-it-agent-fixed.litertlm
```
|
|
|
|
|
Expected output shows: |
|
|
- Version: 1.4.0 |
|
|
- Section 1: `AnySectionDataType_SP_Tokenizer` (not HF_Tokenizer) |
|
|
- 9 total sections with proper alignment |
|
|
|
|
|
## Known Issues & Limitations |
|
|
|
|
|
1. **Tokenizer Change**: This model uses SentencePiece instead of the original HuggingFace tokenizer. While functionally equivalent for Gemma models, there may be minor differences in special token handling. |
|
|
|
|
|
2. **No Agent Template Customization**: The Jinja template from the original model is preserved as-is. If you need to modify the tool-calling behavior, you'll need to (see the sketch after this list):
|
|
- Extract the metadata.pb |
|
|
- Modify the `jinja_prompt_template` field |
|
|
- Rebuild the model |
|
|
|
|
|
3. **Hardware Requirements**: |
|
|
- Minimum 4GB RAM recommended |
|
|
- GPU acceleration requires OpenGL ES 3.1+ or Metal support |
|
|
- Audio/vision features require additional hardware support |
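
A rough sketch of the template-modification workflow from item 2. It assumes you have generated Python bindings for the LiteRT-LM `LlmMetadata` proto; the module name `llm_metadata_pb2` and the treatment of `jinja_prompt_template` as a top-level string field are assumptions to verify against the schema in the LiteRT-LM repository:

```python
# Hypothetical generated module: build it from the LlmMetadata proto in the
# LiteRT-LM schema directory, e.g. with protoc's --python_out option.
import llm_metadata_pb2

metadata = llm_metadata_pb2.LlmMetadata()
with open("metadata.pb", "rb") as f:
    metadata.ParseFromString(f.read())

# Swap in a modified Jinja template, then rebuild the .litertlm file.
with open("my_template.jinja") as f:
    metadata.jinja_prompt_template = f.read()

with open("metadata.pb", "wb") as f:
    f.write(metadata.SerializeToString())
```

After rewriting `metadata.pb`, rerun the `litertlm_builder` step from the build instructions above to produce the updated model.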
|
|
|
|
|
## License |
|
|
|
|
|
This model inherits the Gemma license from the original model. The fixing/rebuilding process does not change the model weights or training data. |
|
|
|
|
|
## Citation |
|
|
|
|
|
If you use this model, please cite: |
|
|
|
|
|
```bibtex
@misc{gemma3n-agent-fixed,
  title={Agent Gemma 3n E2B (LiteRT-LM Fixed)},
  author={kontextdev},
  year={2025},
  publisher={HuggingFace},
  howpublished={\url{https://huggingface.co/kontextdev/agent-gemma}}
}
```
|
|
|
|
|
## Related Links |
|
|
|
|
|
- [LiteRT-LM GitHub](https://github.com/google-ai-edge/LiteRT/tree/main/LiteRT-LM) |
|
|
- [Original Gemma Model](https://ai.google.dev/gemma) |
|
|
- [LiteRT Documentation](https://ai.google.dev/edge/litert) |
|
|
|
|
|
## Changelog |
|
|
|
|
|
- **v1.0 (2025-01-14)**: Initial release with fixed SentencePiece tokenizer |
|
|
|