# Gemma3 270M - Task Decomposer
A fine-tuned version of Gemma3 270M optimized for decomposing tasks into actionable subtasks with time estimates.
## Model Details
- Base Model: unsloth/gemma-3-270m-it
- Fine-tuning Method: LoRA (r=128, alpha=128) with Unsloth
- Training Data: 804 diverse task decomposition examples
- Format: GGUF Q8_0 quantization (~270MB)
- Training Duration: 3 epochs (~20-30 minutes on T4 GPU)
- Use Case: Breaking down everyday tasks into structured subtasks
- Optimized For: Mobile deployment via llama.cpp / llama_sdk
## Training Data Categories

The model was trained on 804 examples spanning the following diverse categories:
| Category | Examples | Description |
|---|---|---|
| Household Chores | 147 | Cleaning, laundry, organization, maintenance |
| Shopping/Errands | 90 | Grocery shopping, returns, pickups, banking |
| Cooking/Meals | 80 | Meal prep, baking, special occasions |
| School Projects | 103 | Research papers, presentations, studying |
| Work Tasks | 112 | Reports, meetings, reviews, planning |
| Personal Projects | 81 | Learning, websites, home renovation |
| Health/Fitness | 63 | Exercise routines, meal prep, wellness |
| Events/Travel | 68 | Parties, weddings, vacations, moves |
| Maintenance | 58 | Car, home, appliance, tech repairs |
Total: 804 examples (723 training, 81 evaluation)
## Usage

### Input Format

```
Decompose this task into subtasks: [YOUR TASK HERE]
```
### Expected Output Format

The model outputs a JSON array of subtasks, each containing:

- `title`: Brief, actionable subtask description
- `estimateMinutes`: Conservative time estimate in minutes
- `order`: Sequential order (1, 2, 3, ...)
```json
[
  {"title": "Choose party date and time", "estimateMinutes": 15, "order": 1},
  {"title": "Create guest list", "estimateMinutes": 20, "order": 2},
  {"title": "Book venue or prepare home", "estimateMinutes": 45, "order": 3},
  {"title": "Order cake and plan menu", "estimateMinutes": 30, "order": 4},
  {"title": "Send invitations", "estimateMinutes": 20, "order": 5},
  {"title": "Shop for decorations", "estimateMinutes": 60, "order": 6}
]
```
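Because the model can occasionally emit malformed JSON (see Limitations), it is worth validating completions before use. Below is a minimal validation sketch in Python; `parse_subtasks` is a hypothetical helper, not part of the model or any SDK:

```python
import json

def parse_subtasks(raw: str) -> list[dict]:
    """Parse and validate the model's JSON output.

    Raises ValueError (or json.JSONDecodeError, its subclass)
    if the output is not a well-formed list of
    {title, estimateMinutes, order} objects.
    """
    subtasks = json.loads(raw)  # raises JSONDecodeError on malformed output
    if not isinstance(subtasks, list):
        raise ValueError("expected a JSON array of subtasks")
    for item in subtasks:
        if not isinstance(item.get("title"), str):
            raise ValueError("missing or non-string 'title'")
        if not isinstance(item.get("estimateMinutes"), int):
            raise ValueError("missing or non-integer 'estimateMinutes'")
        if not isinstance(item.get("order"), int):
            raise ValueError("missing or non-integer 'order'")
    return subtasks
```

A caller can catch `ValueError` and re-prompt the model, which covers the retry case noted under Limitations.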
### With llama.cpp

```bash
./llama-cli -m taskapp-gemma3-270m-Q8_0.gguf \
  -p "Decompose this task into subtasks: Clean the garage" \
  --temp 0.7 \
  --top-p 0.9
```
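The same GGUF file can also be driven from Python via the llama-cpp-python bindings. This is a sketch under that assumption (the card itself only documents the CLI); the model path mirrors the example above:

```python
from llama_cpp import Llama

# Load the quantized GGUF model with the context size used in training.
llm = Llama(model_path="taskapp-gemma3-270m-Q8_0.gguf", n_ctx=2048)

# Same prompt format and sampling settings as the CLI example above.
result = llm(
    "Decompose this task into subtasks: Clean the garage",
    max_tokens=512,
    temperature=0.7,
    top_p=0.9,
)
print(result["choices"][0]["text"])
```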
### With llama_sdk (Flutter/Dart)

```dart
final llama = Llama(LlamaController(
  modelPath: 'path/to/taskapp-gemma3-270m-Q8_0.gguf',
  nCtx: 2048,
  nBatch: 512,
  greedy: false,
));

final messages = [
  UserLlamaMessage('Decompose this task into subtasks: Plan birthday party'),
];

final stream = llama.prompt(messages);
await for (final response in stream) {
  print(response.text);
}
```
### With Python (transformers)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("johkamb/task-decomposer-270m")
tokenizer = AutoTokenizer.from_pretrained("johkamb/task-decomposer-270m")

prompt = "Decompose this task into subtasks: Organize garage"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
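Because the base model is instruction-tuned, applying the tokenizer's chat template may match the training format more closely than a raw prompt. The following variant is an assumption, not part of the original card:

```python
# Build the chat-formatted input using the tokenizer's built-in template.
messages = [
    {"role": "user", "content": "Decompose this task into subtasks: Organize garage"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant turn header
    return_tensors="pt",
)
outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```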
## Example Outputs

### Example 1: Plan Birthday Party

```json
[
  {"title": "Choose date and send save-the-dates", "estimateMinutes": 15, "order": 1},
  {"title": "Create guest list", "estimateMinutes": 20, "order": 2},
  {"title": "Book venue or prepare home", "estimateMinutes": 45, "order": 3},
  {"title": "Order cake and plan menu", "estimateMinutes": 30, "order": 4},
  {"title": "Send formal invitations", "estimateMinutes": 20, "order": 5},
  {"title": "Shop for decorations and party supplies", "estimateMinutes": 60, "order": 6}
]
```
### Example 2: Write Research Paper

```json
[
  {"title": "Choose topic and create thesis", "estimateMinutes": 45, "order": 1},
  {"title": "Research and gather sources", "estimateMinutes": 120, "order": 2},
  {"title": "Create detailed outline", "estimateMinutes": 30, "order": 3},
  {"title": "Write first draft", "estimateMinutes": 180, "order": 4},
  {"title": "Revise and edit", "estimateMinutes": 90, "order": 5},
  {"title": "Format citations and bibliography", "estimateMinutes": 45, "order": 6}
]
```
### Example 3: Clean Garage

```json
[
  {"title": "Remove everything from garage", "estimateMinutes": 60, "order": 1},
  {"title": "Sweep and clean floor", "estimateMinutes": 20, "order": 2},
  {"title": "Sort items into keep/donate/trash", "estimateMinutes": 45, "order": 3},
  {"title": "Organize items on shelves", "estimateMinutes": 40, "order": 4},
  {"title": "Dispose of unwanted items", "estimateMinutes": 30, "order": 5}
]
```
## Training Details

### Hyperparameters
- Base Model: unsloth/gemma-3-270m-it
- Training Examples: 804 (the final run trained on the full dataset, with no eval split, for full data utilization)
- Epochs: 3
- Batch Size: 4 per device
- Gradient Accumulation: 1 (effective batch size = 4)
- Learning Rate: 5e-5
- LR Scheduler: Linear
- Warmup Steps: 10
- Optimizer: AdamW 8-bit
- Weight Decay: 0.001
- LoRA Rank (r): 128
- LoRA Alpha: 128
- LoRA Dropout: 0
- Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- Max Sequence Length: 2048 tokens
- Precision: FP32 (Gemma3 requirement)
- Gradient Checkpointing: unsloth (memory efficient)
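As a sketch of how the hyperparameters above might map onto an Unsloth + TRL training script (not the author's actual script; `dataset` stands in for the 804-example training set):

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig

# Load the base model at the documented sequence length.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-270m-it",
    max_seq_length=2048,
    load_in_4bit=False,  # the card reports FP32 training
)

# Attach LoRA adapters with the documented rank, alpha, and target modules.
model = FastLanguageModel.get_peft_model(
    model,
    r=128,
    lora_alpha=128,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,  # hypothetical: the 804-example dataset
    args=SFTConfig(
        per_device_train_batch_size=4,
        gradient_accumulation_steps=1,
        num_train_epochs=3,
        learning_rate=5e-5,
        lr_scheduler_type="linear",
        warmup_steps=10,
        optim="adamw_8bit",
        weight_decay=0.001,
        output_dir="outputs",
    ),
)
trainer.train()
```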
### Training Environment
- GPU: NVIDIA T4 (Google Colab)
- Training Time: ~20-30 minutes
- Framework: Unsloth + transformers + TRL (SFTConfig)
- Quantization: FP32 during training, Q8_0 for deployment
## Performance Metrics

- Final Training Loss: under 0.5 (rubric: < 0.5 excellent, < 1.0 good)
- Trainable Parameters: ~3-5% of total model parameters (r=128)
- Model Size:
  - Original: ~540MB (FP16)
  - Quantized (Q8_0): ~270MB
- Quality: Excellent (Q8_0 has minimal quality loss vs FP32)
- Inference Speed: Fast on mobile devices (optimized by llama.cpp)
## Limitations
- Language: Optimized for English only
- Task Scope: Best for tasks completable within days/weeks
- Domain: Optimized for everyday tasks (household, work, school, personal)
- Time Estimates: Conservative estimates (designed to under-promise)
- Specialized Tasks: May struggle with highly technical or niche domains
- JSON Output: May occasionally require a retry to obtain valid JSON (see the validation sketch under Expected Output Format)
- Context Length: Limited to 2048 tokens (~1500 words)
## Ethical Considerations
- Bias: Training data focuses on common Western/American tasks; may not reflect all cultures
- Time Estimates: Conservative estimates may not fit all working styles
- Task Decomposition: Suggested subtasks are general; individual needs may vary
- Not for: Medical advice, legal guidance, or critical decision-making
## Acknowledgments
- Base Model: Google Gemma Team
- Fine-tuning: Unsloth AI (optimized training)
- Quantization: llama.cpp team (GGUF format)
- Training Platform: Google Colab