Parameter-efficient fine-tuning of Gemma-3 on neuroscience-domain datasets via LoRA.

LoRA config: r=12, lora_alpha=24, lora_dropout=0.05, task_type="CAUSAL_LM", target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]

Epochs: 2. Trained on an NVIDIA GeForce RTX 4080 SUPER.
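As a minimal sketch of what these hyperparameters mean, the snippet below applies the standard LoRA update rule, where the frozen weight W is augmented by a low-rank product B·A scaled by lora_alpha/r (here 24/12 = 2.0). The toy matrices are made up for illustration and are not from the actual training run:

```python
r, lora_alpha = 12, 24
scaling = lora_alpha / r  # LoRA scales the low-rank update by alpha/r = 2.0

def matmul(X, Y):
    """Plain-Python matrix multiply for the illustration."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

# Toy frozen weight W (2x2) and hypothetical low-rank factors:
# B is 2x1 and A is 1x2, i.e. a rank-1 adapter for illustration.
W = [[1.0, 0.0],
     [0.0, 1.0]]
B = [[0.5],
     [0.25]]
A = [[2.0, 4.0]]

delta = matmul(B, A)  # low-rank update B @ A
# Effective weight at inference: W + (alpha/r) * B @ A
W_eff = [[w + scaling * d for w, d in zip(w_row, d_row)]
         for w_row, d_row in zip(W, delta)]
```

During training only B and A (per target module, here the attention and MLP projections listed above) receive gradients; W stays frozen, which is what makes the run feasible on a single consumer GPU.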

Model size: 1.0B params (Safetensors, F32)