NAKSTStudio committed
Commit 607d836 · verified · 1 Parent(s): 30bea28

Chess Gemma 3 fine-tuned model with commentary generation

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +3 -3
  3. chess-commentary-model.task +3 -0
.gitattributes CHANGED
@@ -34,3 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
 tokenizer.json filter=lfs diff=lfs merge=lfs -text
+chess-commentary-model.task filter=lfs diff=lfs merge=lfs -text
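Each line in this .gitattributes hunk maps a filename pattern to the Git LFS filter, so matching files are stored as LFS pointers instead of regular blobs. A minimal stdlib sketch of how a path could be checked against these patterns (the pattern list is copied from the hunk above; the `is_lfs_tracked` helper is illustrative, not part of git, and real gitattributes matching has more rules than plain glob matching):

```python
from fnmatch import fnmatch

# Patterns taken from the .gitattributes hunk above; each is routed
# through Git LFS via "filter=lfs diff=lfs merge=lfs -text".
LFS_PATTERNS = [
    "*.zst",
    "*tfevents*",
    "tokenizer.json",
    "chess-commentary-model.task",
]

def is_lfs_tracked(path: str) -> bool:
    """Illustrative helper: True if the path's basename matches any LFS pattern."""
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, pat) for pat in LFS_PATTERNS)

print(is_lfs_tracked("chess-commentary-model.task"))  # True
print(is_lfs_tracked("README.md"))                    # False
```

This is why the new 284 MB .task file can live in the repo without bloating every clone: git stores only a small pointer, and LFS fetches the blob on demand.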
README.md CHANGED
@@ -30,7 +30,7 @@ language:
 # Chess Gemma Commentary 🎯♟️
 ### By NAKST Studio
 <br>
-Fine-tuned <strong>Gemma 3 270M</strong> model for generating chess move commentary, ELO predictions, and move classifications in <strong>14 languages</strong>.
+Fine-tuned <strong>Gemma 3 270M</strong> model for generating chess move commentary, ELO predictions, and move classifications in <strong>14 languages</strong>. Includes an optional .task file for lightweight mobile inference with flutter_gemma.

 ---

@@ -51,11 +51,11 @@ Fine-tuned <strong>Gemma 3 270M</strong> model for generating chess move comment

 - **Base Model:** Google Gemma 3 270M (270 Million Parameters)
 - **Fine-tuning Method:** LoRA (Low-Rank Adaptation) - Rank 8, Alpha 16
-- **Training Data:** 17,900+ chess positions with expert commentary
+- **Training Data:** 25,000+ chess positions with expert commentary
 - **Training Epochs:** 3
 - **Training Framework:** Unsloth + Hugging Face Transformers
 - **Hardware:** Google Colab T4 GPU
-- **Model Size:** 500 MB (full) / 150 MB (quantized q4_k_m)
+- **Model Size:** 500 MB (full) / 270 MB .task (int8 dynamic quantized)
 - **Languages Supported:** 14 (English, Hindi, Spanish, Mandarin Chinese, French, German, Portuguese, Russian, Japanese, Arabic, Korean, Turkish, Indonesian, Bengali)

 ## Capabilities
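The size figures in the updated spec list are a plausible back-of-the-envelope fit for a 270M-parameter model: roughly 2 bytes per parameter for full-precision (fp16/bf16) weights, and roughly 1 byte per parameter under int8 dynamic quantization. A quick sketch of that arithmetic (the bytes-per-parameter figures are assumptions for illustration, not stated in the commit):

```python
PARAMS = 270_000_000  # Gemma 3 270M parameter count (from the README)

def approx_size_mb(params: int, bytes_per_param: float) -> float:
    # Rough weight-only estimate; ignores tokenizer, metadata, and packaging overhead.
    return params * bytes_per_param / 1_000_000

full_mb = approx_size_mb(PARAMS, 2.0)   # assumed fp16/bf16 weights
int8_mb = approx_size_mb(PARAMS, 1.0)   # assumed int8 dynamic quantization
print(f"full: ~{full_mb:.0f} MB, int8: ~{int8_mb:.0f} MB")  # full: ~540 MB, int8: ~270 MB
```

The int8 estimate lands right on the 270 MB .task figure; the full-precision estimate (~540 MB) is in the ballpark of the stated 500 MB.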
chess-commentary-model.task ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:864840419c74f39bbf5238d641820c65921e20180e4fc8942978b1dda4d836e5
+size 284368243