ArnaudVtl committed · verified
Commit 2470b1d · 1 parent: b599f71

Update README.md

Files changed (1): README.md (+5 −16)

README.md CHANGED
@@ -72,7 +72,6 @@ The model was trained on a synthetic `.jsonl` dataset (`fridgebuddy_dataset_en.j
 ### Training Procedure
 
 - LoRA fine-tuning with Unsloth + PEFT
-- Training done on Google Colab (Tesla T4)
 - Float32 precision
 - 3 epochs — 36 steps
 
@@ -88,14 +87,6 @@ The model was trained on a synthetic `.jsonl` dataset (`fridgebuddy_dataset_en.j
 
 The model was qualitatively evaluated using generation on unseen prompts. It responded well to basic kitchen questions, but showed expected limitations in vocabulary and instruction-following depth.
 
-## Environmental Impact
-
-- **Hardware Type:** Tesla T4
-- **Hours used:** ~5 minutes (100 samples × 3 epochs)
-- **Cloud Provider:** Google Colab
-- **Compute Region:** Not specified
-- **Carbon Emitted:** Negligible
-
 ## Technical Specifications
 
 ### Model Architecture and Objective
@@ -103,11 +94,9 @@ The model was qualitatively evaluated using generation on unseen prompts. It res
 - Base: `gemma-3-1b-it`, 1.3B parameters
 - Finetuning: LoRA adapters via PEFT
 
-### Compute Infrastructure
-
-- **Hardware:** Google Colab (GPU: Tesla T4)
-- **Software:** PEFT 0.15.2, Transformers, Accelerate, Unsloth
-
-## Citation
+### Software
 
-> No formal citation. Project created for Epitech's WS2 mini-hackathon 2025.
+- PEFT 0.15.2
+- Transformers
+- Accelerate
+- Unsloth
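For readers of this diff, the training recipe the card still describes (LoRA fine-tuning of `gemma-3-1b-it` with Unsloth + PEFT, float32 precision, 3 epochs) might look roughly like the sketch below. This is illustrative only: the dataset file name, LoRA rank, target modules, batch size, and learning rate are assumptions not stated in the card, and the exact `SFTTrainer` keyword arguments vary across trl versions.

```python
# Minimal sketch of the fine-tune the card describes (Unsloth + PEFT,
# float32, 3 epochs). Values marked "assumption" are not from the card.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Base checkpoint: the card names `gemma-3-1b-it`; this repo id is assumed.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="google/gemma-3-1b-it",
    max_seq_length=1024,   # assumption
    dtype=None,            # let Unsloth choose; the card reports float32
    load_in_4bit=False,
)

# Attach LoRA adapters via PEFT; rank and target modules are assumptions.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Synthetic instruction dataset; the full file name is assumed, since the
# card's reference is truncated (`fridgebuddy_dataset_en.j...`).
dataset = load_dataset("json",
                       data_files="fridgebuddy_dataset_en.jsonl",
                       split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # depends on the dataset schema (assumption)
    max_seq_length=1024,
    args=TrainingArguments(
        output_dir="outputs",
        num_train_epochs=3,               # from the card
        per_device_train_batch_size=2,    # assumption
        gradient_accumulation_steps=4,    # assumption
        learning_rate=2e-4,               # assumption
        fp16=False, bf16=False,           # float32, as the card states
        logging_steps=1,
    ),
)
trainer.train()
```

On roughly 100 samples (the figure quoted in the Environmental Impact section this commit removes), a run like this completes in a few minutes on a single Tesla T4.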
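Likewise, since the artifact is a set of LoRA adapters over `gemma-3-1b-it`, loading it for inference with PEFT could look like the following sketch. The adapter repository id below is a hypothetical placeholder, and `google/gemma-3-1b-it` is assumed to be the base checkpoint.

```python
# Sketch: load the base model and apply the LoRA adapter with PEFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base checkpoint (assumed repo id); the card trained in float32.
base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-1b-it",
    torch_dtype=torch.float32,
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-it")

# Hypothetical adapter repo id; replace with the actual adapter location.
model = PeftModel.from_pretrained(base, "ArnaudVtl/fridgebuddy-lora")

prompt = "How long do cooked leftovers keep in the fridge?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```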