kth8/gemma-3-270m-it-JSON-Fixer is a full fine-tune of unsloth/gemma-3-270m-it on the kth8/json-fix-20000x dataset.

Usage example

System prompt

You are a JSON formatting specialist. Convert the provided JSON data into valid JSON format with 2 line indent inside a Markdown JSON block, with no additional commentary.

User prompt

Here are the details you were looking for:
```json
{
  'plants': [
    {
      'name': 'Hostas',
      'shade_requirement': 'partial',
      'water_requirements': 'medium',
      'growing_period': 60
    , },
    {
      'name': 'Heuchera',
      'shade_requirement': 'partial',
      'water_requirements': 'low',
      'growing_period': 90
    , },
    {
      'name': 'Astilbe',
      'shade_requirement': 'dappled',
      'water_requirements': 'high',
      'growing_period': 120
    , }
  , ]
, }
```

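The model should respond with the same data as strict JSON (double quotes, no trailing commas) in a 2-space-indented Markdown JSON block. As a point of reference, this particular error class (single-quoted strings plus trailing commas) happens to be a valid Python literal, so the expected output can be reproduced deterministically; this is only a baseline for sanity-checking the model's output, not the model itself:

```python
import ast
import json

# The malformed payload from the usage example above: single-quoted keys/strings
# and trailing commas, which are legal in Python literals but not in JSON.
broken = """
{
  'plants': [
    {
      'name': 'Hostas',
      'shade_requirement': 'partial',
      'water_requirements': 'medium',
      'growing_period': 60
    , },
    {
      'name': 'Heuchera',
      'shade_requirement': 'partial',
      'water_requirements': 'low',
      'growing_period': 90
    , },
    {
      'name': 'Astilbe',
      'shade_requirement': 'dappled',
      'water_requirements': 'high',
      'growing_period': 120
    , }
  , ]
, }
"""

# Parse as a Python literal, then re-serialize as strict JSON with 2-space indent.
data = ast.literal_eval(broken)
fixed = json.dumps(data, indent=2)
print(fixed)
```

Note that `ast.literal_eval` only covers inputs that are valid Python literals; the model is trained on a broader range of malformed JSON.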
Model Details

  • Base Model: unsloth/gemma-3-270m-it
  • Parameter Count: 268,098,176 (~268M)
  • Training Method: Full Fine-Tune (FFT); all parameters updated
  • Precision: torch.bfloat16
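At bfloat16 precision (2 bytes per parameter), the weight footprint follows directly from the parameter count above. A quick back-of-the-envelope check (weights only; training additionally needs optimizer state, gradients, and activations):

```python
params = 268_098_176      # parameter count from Model Details
bytes_per_param = 2       # bfloat16 = 16 bits = 2 bytes

weight_bytes = params * bytes_per_param
print(f"{weight_bytes / 1e6:.1f} MB")  # ~536.2 MB of weights on disk / in VRAM
```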

Hardware

  • GPU: NVIDIA L4

Training stats

  • Global step: 2210
  • Training runtime: 6454.9331 seconds (~1.8 hours)
  • Training loss: 0.034289208093761075
  • Validation loss: 0.013213318936526775
  • Epoch: 1.0
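From the stats above, average step time and total wall-clock time work out as follows (assuming the runtime is reported in seconds, as the Hugging Face Trainer does):

```python
global_steps = 2210       # from Training stats
runtime_s = 6454.9331     # training runtime in seconds

sec_per_step = runtime_s / global_steps
print(f"{sec_per_step:.2f} s/step")     # ~2.92 s per optimizer step
print(f"{runtime_s / 3600:.2f} hours")  # ~1.79 hours total
```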

Framework versions

  • Unsloth: 2026.3.4
  • TRL: 0.22.2
  • Transformers: 4.56.2
  • PyTorch: 2.10.0+cu128
  • Datasets: 4.3.0
  • Tokenizers: 0.22.2

License

This model is released under the Gemma license. See the Gemma Terms of Use for details.
