# CabinLavatoryPrediction

This repository contains a QLoRA adapter fine-tuned from Qwen/Qwen3.5-9B for understanding aircraft lavatory behavior from millimeter-wave radar data.
The model consumes structured radar time-window information and intermediate representations, then predicts:
- Structured lavatory behavior state: current behavior, transition flag, elapsed/remaining time, next possible behavior, stage index, total stages, and sequence-so-far.
- QA state: occupancy, estimated time to free, used lavatory areas, and abnormal-state flag.
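As an illustration, a structured-task prediction might look like the JSON below. The field names and values are assumptions inferred from the schema described above; they are not keys copied from the dataset or the repository's code.

```json
{
  "current_behavior": "hand_washing",
  "transition_flag": false,
  "elapsed_time_s": 45,
  "remaining_time_s": 20,
  "next_possible_behavior": "drying_hands",
  "stage_index": 3,
  "total_stages": 5,
  "sequence_so_far": ["entering", "toilet_use", "hand_washing"]
}
```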
## Files

- `adapter_model.safetensors`, `adapter_config.json`: final PEFT LoRA adapter.
- `checkpoint-6283/`: final trainer checkpoint, including optimizer/scheduler state for resuming training.
- `code/`: preprocessing, training, evaluation, visualization, and report scripts.
- `eval/metrics/`: base vs. fine-tuned evaluation metric JSON files.
- `eval/charts/`: standalone SVG vector charts with embedded metadata.
- `presentation/`: self-contained design-review HTML/PDF report.
Raw train/validation JSONL data is not included in this model repository.
## Training

- Base model: Qwen/Qwen3.5-9B
- Method: 4-bit QLoRA supervised fine-tuning
- LoRA target modules: q/k/v/o/gate/up/down projection modules
- LoRA rank: 16
- LoRA alpha: 32
- Max sequence length used for the successful full run: 2048
- Train data: mixed structured-prediction and QA samples
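A LoRA configuration matching the hyperparameters above could be sketched as follows. The exact `target_modules` names are assumptions based on the projection layers commonly used in Qwen-family models; they are not copied from this repository's training code.

```python
from peft import LoraConfig

# Sketch only: rank, alpha, and target modules as listed above.
# Module names are assumed from typical Qwen architectures.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```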
## Evaluation Summary
Validation size:
- Structured task: 4,030 examples
- QA task: 4,030 examples
Key base vs fine-tuned metrics:
| Task | Metric | Base | Fine-tuned |
|---|---|---|---|
| Structured | JSON parse rate | 98.0% | 100.0% |
| Structured | Required field complete rate | 0.0% | 95.1% |
| Structured | Current behavior accuracy | 48.1% | 67.0% |
| Structured | Current behavior macro-F1 | 11.1% | 49.1% |
| Structured | Next possible behavior accuracy | 39.2% | 65.0% |
| Structured | Stage index accuracy | 0.0% | 65.5% |
| Structured | Sequence exact match | 0.0% | 61.1% |
| QA | Occupied accuracy | 99.7% | 100.0% |
| QA | Abnormal F1 | 45.4% | 89.5% |
| QA | Used areas micro-F1 | 70.5% | 100.0% |
| QA | Time-to-free MAE | 5.13 min | ~0.0 min |
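For reference, the set-based and regression metrics in the table can be computed in a few lines of standard-library Python. This is a generic sketch of the metric definitions, not the repository's evaluation code (which lives in `code/`); the toy inputs are invented.

```python
def micro_f1(pred_sets, true_sets):
    """Micro-averaged F1 over multi-label sets (e.g. used lavatory areas)."""
    tp = sum(len(p & t) for p, t in zip(pred_sets, true_sets))
    fp = sum(len(p - t) for p, t in zip(pred_sets, true_sets))
    fn = sum(len(t - p) for p, t in zip(pred_sets, true_sets))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def mae(preds, targets):
    """Mean absolute error (e.g. time-to-free, in minutes)."""
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)

# Toy example, not real evaluation data
print(micro_f1([{"sink", "mirror"}, {"sink"}], [{"sink"}, {"sink", "floor"}]))  # ≈ 0.667
print(mae([5.0, 3.0], [4.0, 1.0]))  # → 1.5
```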
## Loading Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

base_model = "Qwen/Qwen3.5-9B"
adapter = "sutama/CabinLavatoryPrediction"

# 4-bit NF4 quantization, matching the QLoRA training setup
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=quant_config,
    device_map="auto",
    trust_remote_code=True,
)

# Attach the LoRA adapter on top of the quantized base model
model = PeftModel.from_pretrained(model, adapter)
model.eval()
```
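After generation, the model's text output still has to be turned back into a structured record. A minimal post-processing sketch is shown below; the required field names are assumptions based on the schema described at the top of this card, not the repository's actual keys, and the extraction logic is a generic heuristic.

```python
import json

# Hypothetical required keys for the structured task; adjust to the real schema.
REQUIRED_FIELDS = {
    "current_behavior", "transition_flag", "elapsed_time_s", "remaining_time_s",
    "next_possible_behavior", "stage_index", "total_stages", "sequence_so_far",
}

def extract_json(text):
    """Pull the outermost {...} span out of generated text and parse it."""
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end <= start:
        return None
    try:
        return json.loads(text[start:end + 1])
    except json.JSONDecodeError:
        return None

def missing_fields(record):
    """Fields that would count against a required-field-complete-rate check."""
    return sorted(REQUIRED_FIELDS - record.keys())

raw = 'Answer: {"current_behavior": "hand_washing", "stage_index": 3}'
record = extract_json(raw)
print(missing_fields(record))
```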
## Intended Use
This adapter is intended for research and design review of privacy-preserving aircraft lavatory state prediction from structured millimeter-wave radar representations.
It should not be used as an aircraft safety-critical control system without further validation, calibration, monitoring, and fail-safe integration.
## Limitations
- The adapter was trained and evaluated on the available structured dataset only.
- Robustness across aircraft types, installation angles, sensor-noise conditions, passenger diversity, and real operational settings requires additional validation.
- QA results may partially reflect deterministic target construction rules in the processed dataset; evaluate on independently collected operational data before deployment.