# Gemma 3 270M - MIST 9-Liner Fine-tuned

This model is a full-parameter fine-tune of google/gemma-3-270m-it, trained on MIST 9-liner (medical evacuation request) data.
## Model Details
- Base Model: google/gemma-3-270m-it
- Training Type: Full parameter fine-tuning (not LoRA)
- Parameters: 268M (100% trainable)
- Training Data: 9,500 MIST 9-liner examples
- Epochs: 3
- Final Loss: 0.191
- Token Accuracy: 92.5%
## Training Configuration
- Learning Rate: 2e-5
- Batch Size: 8
- Gradient Accumulation: 2
- Max Sequence Length: 1024
- Optimizer: AdamW
- Precision: bfloat16
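With a per-device batch size of 8 and 2 gradient-accumulation steps, the effective batch size is 16. A quick sanity check of the step counts implied by the configuration above (assuming all 9,500 examples are seen each epoch):

```python
# Effective batch size = per-device batch size x gradient accumulation steps
batch_size = 8
grad_accum = 2
effective_batch = batch_size * grad_accum  # 16

# Optimizer steps implied by the 9,500-example training set
examples = 9_500
steps_per_epoch = examples // effective_batch  # 593 full steps (plus one partial batch)
total_steps = steps_per_epoch * 3              # ~1,779 steps over 3 epochs

print(effective_batch, steps_per_epoch, total_steps)
```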
## Intended Use

This model is designed for parsing and generating MIST 9-liner medical evacuation (MEDEVAC) requests. The 9-liner is a standardized military format for requesting medical evacuation; the accompanying MIST report summarizes the casualty's Mechanism of injury, Injuries sustained, Signs and symptoms, and Treatment given.
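For reference, the nine lines of a standard MEDEVAC request cover location, communications, patient precedence, equipment, patient type, site security, marking, patient status, and contamination/terrain. A minimal sketch of that structure, with illustrative field names and values (not taken from the training data):

```python
from dataclasses import dataclass

@dataclass
class NineLiner:
    """Fields of a standard 9-line MEDEVAC request (illustrative names)."""
    location: str            # Line 1: pickup site location (grid reference)
    frequency_callsign: str  # Line 2: radio frequency and call sign
    precedence: str          # Line 3: number of patients by precedence
    equipment: str           # Line 4: special equipment required
    patient_type: str        # Line 5: patients by type (litter/ambulatory)
    security: str            # Line 6: security at pickup site
    marking: str             # Line 7: method of marking pickup site
    nationality: str         # Line 8: patient nationality and status
    contamination: str       # Line 9: NBC contamination (terrain in peacetime)

# Illustrative example request
request = NineLiner(
    location="18S UJ 12345 67890",
    frequency_callsign="45.50 / DUSTOFF 6",
    precedence="1 urgent",
    equipment="None",
    patient_type="1 litter",
    security="No enemy troops in area",
    marking="Smoke",
    nationality="US military",
    contamination="None",
)
print(request.precedence)
```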
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("mhylle/gemma3-270m-9liner")
tokenizer = AutoTokenizer.from_pretrained("mhylle/gemma3-270m-9liner")

# Example usage
prompt = "Convert this 9-liner to a medical record: ..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
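Since `generate` returns the prompt tokens followed by the completion, the decoded string echoes the prompt. A small helper (hypothetical, not part of this model card) can strip that echo:

```python
def extract_completion(decoded: str, prompt: str) -> str:
    """Return only the newly generated text, dropping the echoed prompt."""
    if decoded.startswith(prompt):
        return decoded[len(prompt):].strip()
    return decoded.strip()

# e.g. extract_completion(tokenizer.decode(outputs[0], skip_special_tokens=True), prompt)
```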
## Limitations
- Trained specifically on 9-liner format data
- May not generalize well to other medical documentation formats
- Should be validated before use in real medical applications
## License
This model inherits the Gemma license from the base model.