---
language: [en, hi]
license: gemma
tags:
- function-calling
- gemma
- kosha
- android
- on-device
- litert
- mediapipe
- finance
base_model: 2796gauravc/kosha-functiongemma-phase0
---

# KOSHA Phase 0 — Android .task File

**KOSHA (कोश)** — Private on-device Indian finance tracker.
This is the **MediaPipe-ready `.task` file** for Android LiteRT deployment.

The fine-tuned merged model is at [2796gauravc/kosha-functiongemma-phase0](https://huggingface.co/2796gauravc/kosha-functiongemma-phase0).

## File Info

| Property | Value |
|---|---|
| Base model | FunctionGemma 270M |
| Quantization | dynamic_int8 |
| KV cache max len | 1024 |
| Size | ~290 MB |
| Format | MediaPipe `.task` (LiteRT) |

## Functions

- `log_expense` — Indian bills, UPI, groceries, fuel, dining
- `log_income` — Salary, freelance, UPI received
- `no_expense` — OTP, promotions, non-financial messages

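As an illustrative sketch only (the type and field names below are assumptions for the consuming app, not a schema the model guarantees), the three call types could map onto a small sealed hierarchy:

```kotlin
// Illustrative data model, not part of the model card: names are assumed.
sealed interface KoshaEvent

// log_expense: e.g. a UPI grocery payment
data class Expense(val amount: Double, val category: String) : KoshaEvent

// log_income: e.g. a salary credit
data class Income(val amount: Double, val source: String) : KoshaEvent

// no_expense: OTPs, promotions, anything non-financial
object NoExpense : KoshaEvent
```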
## Android Integration

```kotlin
// build.gradle.kts
implementation("com.google.mediapipe:tasks-genai:0.10.22")

// Usage
val options = LlmInference.LlmInferenceOptions.builder()
    .setModelPath("/data/local/tmp/kosha_phase0_q8_ekv1024.task")
    .setMaxTokens(1024)
    .setTopK(64)
    .setTopP(0.95f)
    .setTemperature(0.1f) // Low temp for deterministic function calls
    .setPreferredBackend(LlmInference.Backend.CPU) // CPU recommended for Gemma 270M
    .build()
val llm = LlmInference.createFromOptions(context, options)

val result = llm.generateResponse(yourFunctionGemmaPrompt)
// Parse: <start_function_call>call:log_expense{...}<end_function_call>
```
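The final `// Parse` step above is plain text extraction: the model emits a tagged function call as a string. A minimal sketch in plain Kotlin, assuming the `parseFunctionCall` helper below (it is not a MediaPipe API, and the exact payload format inside `{...}` is an assumption to adapt to your model's output):

```kotlin
// Hypothetical helper, not part of MediaPipe: pulls the function name and raw
// argument payload out of FunctionGemma-style output. Adapt the second capture
// group to whatever argument encoding your model actually emits.
data class FunctionCall(val name: String, val rawArgs: String)

fun parseFunctionCall(output: String): FunctionCall? {
    val pattern = Regex(
        """<start_function_call>call:(\w+)\{(.*?)\}<end_function_call>""",
        RegexOption.DOT_MATCHES_ALL
    )
    val match = pattern.find(output) ?: return null
    return FunctionCall(match.groupValues[1], match.groupValues[2])
}
```

A `null` return means the model produced no well-formed call, which is worth treating the same as `no_expense` in a tracker UI rather than crashing on it.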