# ARIA AAR 3B LoRA — On-Device Meeting Summarization
Fine-tuned Llama 3.2 3B Instruct LoRA adapter for structured meeting summarization, producing TC 7-0.1 After Action Review (AAR) JSON output.
Built for ARIA — an on-device AI meeting assistant running on Samsung Galaxy S24 Ultra (Snapdragon 8 Gen 3).
## Model Details
| Parameter | Value |
|---|---|
| Base Model | Llama 3.2 3B Instruct |
| Method | QLoRA (4-bit NF4) |
| LoRA Rank | 32 |
| LoRA Alpha | 32 |
| Target Modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Training Examples | 800 |
| Epochs | 5 |
| Learning Rate | 2e-4 (linear decay) |
| Max Sequence Length | 6144 |
| Final Loss | 0.724 |
| Trainable Parameters | ~44M / 3.2B (1.4%) |
## Task Types
The model supports three distinct task types via different system prompts:
### 1. Single-Pass Summarization
Direct transcript-to-AAR JSON for meetings under ~3,400 words, which keeps the prompt plus output within the 6,144-token context. Produces structured JSON with 6 fields.
### 2. Chunk Extraction
Extracts structured bullet points (Decisions, Action Items, Key Points, Issues, Notable Quotes) from transcript segments. Used in the hybrid pipeline for long meetings.
### 3. Refine
Progressive refinement: takes a draft AAR JSON and additional transcript context, and produces an improved AAR JSON. This enables processing of arbitrarily long meetings; see the pipeline sketch below.
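Taken together, the three task types compose into a chunk-extract-refine pipeline for long meetings. A minimal sketch in Python, assuming a hypothetical `run_model(system, user)` wrapper around whatever inference backend you use; the system-prompt strings and the 3,000-word chunk size are placeholders, not the exact prompts or settings used in training:

```python
CHUNK_WORDS = 3000  # assumed chunk size; tune to your context budget

def chunk_transcript(transcript: str, size: int = CHUNK_WORDS) -> list[str]:
    words = transcript.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def summarize(transcript: str, run_model) -> str:
    # Task 1: single-pass summarization handles short meetings directly.
    if len(transcript.split()) < 3400:
        return run_model("You are an expert meeting analyst...",
                         f"Summarize this meeting transcript:\n\n{transcript}")
    # Task 2: extract structured bullets from each transcript segment.
    notes = [
        run_model("Extract Decisions, Action Items, Key Points, Issues, "
                  "and Notable Quotes from this segment...", chunk)
        for chunk in chunk_transcript(transcript)
    ]
    # Task 3: progressively refine a draft AAR JSON with each new batch.
    draft = run_model("Draft an AAR JSON from these notes...", notes[0])
    for extra in notes[1:]:
        draft = run_model("Improve this draft AAR JSON using the additional "
                          "transcript context...",
                          f"Draft:\n{draft}\n\nNew context:\n{extra}")
    return draft
```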
## Output Format
```json
{
  "title": "Meeting Title in Title Case",
  "what_was_planned": "What was intended to be accomplished...",
  "what_happened": "What actually occurred during the meeting...",
  "why_it_happened": "Analysis of why outcomes differed from plans...",
  "how_to_improve": "Specific actionable recommendations...",
  "ai_perspective": "AI analysis of meeting dynamics and patterns..."
}
```
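Downstream consumers expect exactly these six fields, so it is worth validating model output before use. A minimal sketch (`parse_aar` and `REQUIRED_FIELDS` are illustrative names, not part of the adapter):

```python
import json

# Field names taken from the AAR schema above.
REQUIRED_FIELDS = {
    "title", "what_was_planned", "what_happened",
    "why_it_happened", "how_to_improve", "ai_perspective",
}

def parse_aar(raw: str) -> dict:
    """Parse model output and check it matches the AAR schema."""
    aar = json.loads(raw)  # raises ValueError on malformed JSON
    missing = REQUIRED_FIELDS - aar.keys()
    if missing:
        raise ValueError(f"AAR output missing fields: {sorted(missing)}")
    return aar
```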
## Validation Scores
Tested at device-realistic settings: 1536 max tokens, temperature 0.1.
| Task | Avg Score (0-100) | Pass Rate |
|---|---|---|
| Brief (< 500 words) | 98.4 | 5/5 |
| Standard (500-1000 words) | 93.1 | 7/7 |
| Detailed (1000-2000 words) | 88.6 | 4/5 |
| Chunk Extraction | 77.0 | 7/10 |
| Refine | 100.0 | 5/5 |
## GGUF
A pre-quantized Q4_K_M GGUF (~1.9 GB) is included for direct use with llama.cpp or on-device inference.
## Usage
### With Transformers + PEFT
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "STELLiQ/aria-aar-3b-lora")
tokenizer = AutoTokenizer.from_pretrained("STELLiQ/aria-aar-3b-lora")
```
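From there, a generation sketch at the device-realistic validation settings (1536 max tokens, temperature 0.1); `transcript` and the prompt strings are placeholders:

```python
transcript = "..."  # your meeting transcript text

messages = [
    {"role": "system", "content": "You are an expert meeting analyst..."},
    {"role": "user",
     "content": f"Summarize this meeting transcript:\n\n{transcript}"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Mirrors the validation setup: 1536 max new tokens, temperature 0.1.
output = model.generate(
    input_ids, max_new_tokens=1536, temperature=0.1, do_sample=True
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:],
                       skip_special_tokens=True))
```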
### With llama.cpp (GGUF)
```bash
llama-cli --model aria-aar-3b-q4_k_m.gguf \
  -p "<|start_header_id|>system<|end_header_id|>\n\nYou are an expert meeting analyst...<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nSummarize this meeting transcript:\n\n{transcript}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
```
## On-Device Performance (Samsung Galaxy S24 Ultra)
| Metric | Value |
|---|---|
| GGUF Size | ~1.9 GB (Q4_K_M) |
| Peak RAM | ~2.5 GB |
| Time to first token (TTFT) | ~0.5-0.8 s (Adreno 750 GPU) |
| Decode Speed | ~50-70 tok/s |
| GPU Layers | 32 (full offload) |
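These figures correspond to full GPU offload in llama.cpp. A plausible invocation matching the table above (the model path is a placeholder, and the exact on-device command may differ):

```bash
# Full offload of all 32 layers, 6144-token context, validation-time sampling.
llama-cli --model aria-aar-3b-q4_k_m.gguf \
  --gpu-layers 32 --ctx-size 6144 --temp 0.1 -n 1536
```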
## Training Data
800 custom examples across three task types:
- 640 single-pass (brief/standard/detailed tiers)
- 60 chunk extraction
- 100 refine (80 from extended transcripts + 20 pilot)
All training data was synthetically generated, spanning meeting transcripts with diverse topics, speaker counts, and meeting styles.
## Training Infrastructure
- GPU: NVIDIA GeForce RTX 5080 Laptop GPU (16GB)
- Framework: Unsloth + Transformers + TRL
- Training Time: ~33 minutes
- Precision: BFloat16 with 4-bit QLoRA
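For reference, a minimal sketch of how this setup maps onto Unsloth + TRL. The rank, alpha, target modules, epochs, learning rate, and sequence length come from the Model Details table; the dataset path, text field, and batch size are assumptions, not the actual training script:

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# 4-bit NF4 base model at the 6144-token training length.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.2-3B-Instruct",
    max_seq_length=6144,
    load_in_4bit=True,
)

# LoRA settings from the Model Details table: rank 32, alpha 32,
# all attention and MLP projections targeted.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("json", split="train",
                       data_files="aar_train.jsonl")  # hypothetical path

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed field name
    max_seq_length=6144,
    args=TrainingArguments(
        num_train_epochs=5,
        learning_rate=2e-4,
        lr_scheduler_type="linear",
        bf16=True,
        per_device_train_batch_size=2,  # assumed, not stated on the card
        output_dir="outputs",
    ),
)
trainer.train()
```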
## License
This adapter inherits the Llama 3.2 Community License.
## Developed By
STELLiQ Technologies — ARIA: Automated Review Intelligence Assistant