# FLINT: Field Language Intelligence, v0.3 (LoRA Adapter)

⚠️ Portfolio project. Built as a learning exercise to demonstrate a full CPT + LoRA fine-tuning pipeline. Not intended for production use.
## Mission
FLINT was initiated to explore the feasibility of an offline-first AI carrying a survival domain knowledge base, intended for search and rescue personnel, field operators, and anyone working in environments without internet connectivity. The goal is a model that can be deployed on-device and queried without any network dependency, providing survival-critical information when it matters most.
This is the LoRA adapter for FLINT, a survival-domain language model produced by continued pre-training on a survival-focused Wikipedia corpus. Load this adapter on top of ibm-granite/granite-4.0-micro to use FLINT.
## Training Details
| Parameter | Value |
|---|---|
| Base model | ibm-granite/granite-4.0-micro |
| Method | Continued Pre-Training (CPT) + LoRA |
| Corpus | Wikipedia (survival domain) |
| LoRA rank (r) | 8 |
| LoRA alpha | 16 |
| Target modules (attention) | q_proj, k_proj, v_proj, o_proj |
| Target modules (MLP) | input_linear, output_linear |
| Max sequence length | 768 |
| Steps | 250–300 |
| Batch size | 1 (grad accum 8) |
| Learning rate | 2e-5 |
| Precision | fp16 |
## Model Architecture (Verified)

```
PeftModelForCausalLM
└── LoraModel
    └── GraniteMoeHybridForCausalLM
        ├── Attention layers
        │   ├── q_proj: lora.Linear (r=8)
        │   ├── k_proj: lora.Linear (r=8)
        │   ├── v_proj: lora.Linear (r=8)
        │   └── o_proj: lora.Linear (r=8)
        └── MLP layers
            ├── input_linear: lora.Linear (r=8)
            └── output_linear: lora.Linear (r=8)
```
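One way to confirm that only the LoRA layers carry gradients is to count parameters. The helper below is a generic `torch` sketch (similar in spirit to `peft`'s built-in `print_trainable_parameters`) that works on any module, including the loaded FLINT model:

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> tuple[int, int]:
    """Return (trainable, total) parameter counts for a module."""
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable, total

# Demo on a plain linear layer: all 10*10 + 10 = 110 params are trainable.
trainable, total = count_parameters(nn.Linear(10, 10))
print(f"{trainable}/{total} trainable")  # 110/110 trainable
```

On a LoRA-wrapped model, the trainable count should be a small fraction of the total, since the base weights are frozen.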
## How to Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-4.0-micro")
model = AutoModelForCausalLM.from_pretrained(
    "ibm-granite/granite-4.0-micro",
    device_map="cpu",  # change to "cuda" if a GPU is available
)

# Load the FLINT adapter on top
model = PeftModel.from_pretrained(model, "rakhasetiawan/flint-granite-survival")
model.eval()

# CPT-only model: prompt it like a Wikipedia article opening
prompt = "Wilderness survival techniques include"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    temperature=0.7,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Limitations

- CPT only: not instruction-tuned, so no conversational ability
- Short training run (~250–300 steps); knowledge depth is limited
- Prototype quality: outputs may be rough or repetitive
- Next step: SFT on survival-domain Q&A pairs
## Training Loss

Loss dropped from 5.04 to ~3.05 over 250 steps, with the curve stabilizing around step 100. The smooth convergence indicates stable training, with no divergence or gradient issues.
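For intuition, cross-entropy loss can be converted to perplexity by exponentiating it. Assuming the reported values are mean token-level cross-entropy in nats (an assumption; the logs do not state the unit), the run above moves perplexity from roughly 154 down to roughly 21:

```python
import math

start_loss, end_loss = 5.04, 3.05

# Perplexity = exp(mean cross-entropy loss in nats)
ppl_start = math.exp(start_loss)  # ~154.5
ppl_end = math.exp(end_loss)      # ~21.1

print(f"perplexity: {ppl_start:.1f} -> {ppl_end:.1f}")
```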
## Prompt Example

> "what do i do to prepare for a natural disaster? give me a checklist of items to prepare and steps to take"
### Response
what do i do to prepare for a natural disaster? give me a checklist of items to prepare and steps to take.
1. **Assess Your Risks:** Before you start preparing, determine which natural disasters are most likely to occur in your area. This could include hurricanes, earthquakes, floods, wildfires, or tornadoes. Knowing your risks will help you understand what preparations and protective measures you need to take.
2. **Create a Disaster Preparedness Plan:** Draft a plan that includes communication strategies in case family members are separated, evacuation routes, and safe meeting points. Make sure everyone in your household knows what to do in an emergency.
3. **Build an Emergency Kit:** Assemble a kit that includes essentials you might need for at least 72 hours. Here's a checklist of items to include:
| Item | Specification |
|---|---|
| Water | At least 1 gallon per person per day (drinking + sanitation) |
| Food | 3-day supply of non-perishable items |
| Radio | Battery-powered or hand crank, for news updates |
| Flashlight | With extra batteries |
| First aid kit | Including any prescription medications |
| Whistle | To signal for help |
| Dust mask | To filter contaminated air |
| Plastic sheeting + duct tape | For shelter-in-place scenarios |
| Moist tissues + waste bags | For sanitation |
| Local maps | In case GPS and navigation systems are down |
| Cell phone + chargers | Including a portable backup charger |
## Eval Notes
| Metric | Result |
|---|---|
| Quantified specifics | ✅ FEMA-accurate (1 gal/day, 72-hour window, 3-day supply) |
| Hallucinated terminology | ✅ None detected |
| Format | ✅ Guide/manual style, appropriate for the domain |
| Actionability | ✅ High: the checklist is directly usable |
| Practitioner depth | ⚠️ Moderate: disaster-specific variations not yet covered |
| Version baseline | First clean, hallucination-free output across all eval prompts |
Note: Quantified figures (1 gallon/day, 72-hour window) align with FEMA standard preparedness guidelines and are not hallucinated values. Practitioner-level depth (disaster-specific kits, document preservation, post-disaster protocols) is the target for v0.4 with a field manual corpus.
FLINT: Field Language Intelligence · Built by rakhasetiawan
