SeaWolf-AI committed
Commit ce657c5 · verified · 1 Parent(s): d683a75

Add README with Space metadata

Files changed (1)
  1. README.md +117 -5
README.md CHANGED
@@ -1,10 +1,122 @@
  ---
- title: LiteRT LM
- emoji: 🦀
- colorFrom: pink
- colorTo: gray
  sdk: static
  pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

  ---
+ title: LiteRT-LM
+ emoji: 🚀
+ colorFrom: blue
+ colorTo: green
  sdk: static
+ app_file: index.html
  pinned: false
  ---

+ # LiteRT-LM
+
+ LiteRT-LM is Google's production-ready, high-performance, open-source inference
+ framework for deploying Large Language Models on edge devices.
+
+ 🔗 [Product Website](https://ai.google.dev/edge/litert-lm)
+
+ ## 🔥 What's New: Gemma 4 support with LiteRT-LM
+
+ Deploy [Gemma 4](https://blog.google/innovation-and-ai/technology/developers-tools/gemma-4/)
+ across a broad range of hardware with stellar performance
+ ([blog](https://developers.googleblog.com/bring-state-of-the-art-agentic-skills-to-the-edge-with-gemma-4/)).
+
+ 👉 Try on Linux, macOS, Windows (WSL), or Raspberry Pi with the
+ [LiteRT-LM CLI](https://ai.google.dev/edge/litert-lm/cli):
+
+ ```bash
+ litert-lm run \
+     --from-huggingface-repo=litert-community/gemma-4-E2B-it-litert-lm \
+     gemma-4-E2B-it.litertlm \
+     --prompt="What is the capital of France?"
+ ```
+
+ ## 🌟 Key Features
+
+ - 📱 **Cross-Platform Support**: Android, iOS, Web, Desktop, and IoT (e.g.
+   Raspberry Pi).
+ - 🚀 **Hardware Acceleration**: Peak performance via GPU and NPU accelerators.
+ - 👁️ **Multi-Modality**: Support for vision and audio inputs.
+ - 🔧 **Tool Use**: Function calling support for agentic workflows.
+ - 📚 **Broad Model Support**: Gemma, Llama, Phi-4, Qwen, and more.
+
+ ![](./docs/api/kotlin/demo.gif)
+
+ ---
+
+ ## 🚀 Production-Ready for Google's Products
+
+ LiteRT-LM powers on-device GenAI experiences in **Chrome**, **Chromebook Plus**,
+ **Pixel Watch**, and more.
+
+ You can also try the
+ [Google AI Edge Gallery](https://github.com/google-ai-edge/gallery) app to run
+ models immediately on your device.
+
+ | **Install from Google Play** | **Install from the App Store** |
+ | :---: | :---: |
+ | <a href='https://play.google.com/store/apps/details?id=com.google.ai.edge.gallery'><img alt='Get it on Google Play' height="120" src='https://play.google.com/intl/en_us/badges/static/images/badges/en_badge_web_generic.png'/></a> | <a href="https://apps.apple.com/us/app/google-ai-edge-gallery/id6749645337?itscg=30200&itsct=apps_box_badge&mttnsubad=6749645337" style="display: inline-block;"> <img src="https://toolbox.marketingtools.apple.com/api/v2/badges/download-on-the-app-store/black/en-us?releaseDate=1771977600" alt="Download on the App Store" style="width: 246px; height: 90px; vertical-align: middle; object-fit: contain;" /></a> |
+
+ ### 📰 Blogs & Announcements
+
+ | Link | Description |
+ | :--- | :--- |
+ | [Bring state-of-the-art agentic skills to the edge with Gemma 4](https://developers.googleblog.com/bring-state-of-the-art-agentic-skills-to-the-edge-with-gemma-4/) | Deploy Gemma 4 in-app across a broad range of devices with stellar performance using LiteRT-LM. |
+ | [On-device GenAI in Chrome, Chromebook Plus and Pixel Watch](https://developers.googleblog.com/on-device-genai-in-chrome-chromebook-plus-and-pixel-watch-with-litert-lm/) | Deploy language models at scale on wearables and browser-based platforms using LiteRT-LM. |
+ | [On-device Function Calling in Google AI Edge Gallery](https://developers.googleblog.com/on-device-function-calling-in-google-ai-edge-gallery/) | Fine-tune FunctionGemma and enable function calling powered by the LiteRT-LM Tool Use APIs. |
+ | [Google AI Edge small language models, multimodality, and function calling](https://developers.googleblog.com/google-ai-edge-small-language-models-multimodality-rag-function-calling/) | Latest insights on RAG, multimodality, and function calling for edge language models. |
+
+ ---
+
+ ## 🏃 Quick Start
+
+ ### 🔗 Key Links
+
+ - 👉 [Technical Overview](https://ai.google.dev/edge/litert-lm/overview), including performance benchmarks, model support, and more.
+ - 👉 [LiteRT-LM CLI Guide](https://ai.google.dev/edge/litert-lm/cli), including installation, getting started, and advanced usage.
+
+ ### ⚡ Quick Try (No Code)
+
+ Try LiteRT-LM immediately from your terminal, without writing a single line of code, using [`uv`](https://docs.astral.sh/uv/getting-started/installation/):
+
+ ```bash
+ uv tool install litert-lm
+
+ litert-lm run \
+     --from-huggingface-repo=google/gemma-3n-E2B-it-litert-lm \
+     gemma-3n-E2B-it-int4 \
+     --prompt="What is the capital of France?"
+ ```
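The same command can also be run without a permanent install, via `uv`'s ephemeral tool runner `uvx`. This is a sketch under the assumption that `litert-lm` is published as an installable Python tool, as the `uv tool install` line above implies:

```shell
# Run the CLI in a throwaway environment instead of installing it:
uvx litert-lm run \
    --from-huggingface-repo=google/gemma-3n-E2B-it-litert-lm \
    gemma-3n-E2B-it-int4 \
    --prompt="What is the capital of France?"
```

`uv tool list`, `uv tool upgrade litert-lm`, and `uv tool uninstall litert-lm` manage the installed copy if you chose `uv tool install` instead.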
+
+ ---
+
+ ### 📚 Supported Language APIs
+
+ Ready to get started? Explore our language-specific guides and setup instructions.
+
+ | Language | Status | Best For | Documentation |
+ | :--- | :--- | :--- | :--- |
+ | **Kotlin** | ✅ Stable | Android apps & JVM | [Android (Kotlin) Guide](https://ai.google.dev/edge/litert-lm/android) |
+ | **Python** | ✅ Stable | Prototyping & scripting | [Python Guide](https://ai.google.dev/edge/litert-lm/python) |
+ | **C++** | ✅ Stable | High-performance native code | [C++ Guide](https://ai.google.dev/edge/litert-lm/cpp) |
+ | **Swift** | 🚀 In Development | Native iOS & macOS | Coming soon |
+
+ #### 🏗️ Build From Source
+
+ This [guide](./docs/getting-started/build-and-run.md) shows how to compile
+ LiteRT-LM from source. When building from source, check out the stable
+ [![Latest Release](https://img.shields.io/github/v/release/google-ai-edge/LiteRT-LM)](https://github.com/google-ai-edge/LiteRT-LM/releases/latest) tag.
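Concretely, pinning a checkout to the newest release tag might look like the following sketch; the tag-selection logic is generic `git`, and assumes the repository's release tags sort by version:

```shell
# Clone the repository and list release tags, newest first:
git clone https://github.com/google-ai-edge/LiteRT-LM.git
cd LiteRT-LM
git tag --sort=-v:refname | head -n 5

# Check out the newest tag rather than building from the tip of main:
git checkout "$(git tag --sort=-v:refname | head -n 1)"
```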
+
+ ---
+
+ ## 📦 Releases
+
+ - **v0.10.1**: [Gemma 4](https://blog.google/innovation-and-ai/technology/developers-tools/gemma-4/) support with stellar performance ([blog](https://developers.googleblog.com/bring-state-of-the-art-agentic-skills-to-the-edge-with-gemma-4/)) and the new [LiteRT-LM CLI](https://ai.google.dev/edge/litert-lm/cli).
+ - **v0.9.0**: Improved function calling and better performance stability.
+ - **v0.8.0**: Desktop GPU support and multi-modality.
+ - **v0.7.0**: NPU acceleration for Gemma models.
+
+ For a full list of releases, see [GitHub Releases](https://github.com/google-ai-edge/LiteRT-LM/releases).
+
+ ---