Mhammad2023 committed on
Commit
7085368
·
verified ·
1 Parent(s): 4bc44a9

Update README.md

Files changed (1)
  1. README.md +18 -8
README.md CHANGED
@@ -5,6 +5,12 @@ tags:
 - grpo
 - GRPO
 - Reasoning-Course
+datasets:
+- mlabonne/smoltldr
+language:
+- en
+base_model:
+- HuggingFaceTB/SmolLM-135M-Instruct
 ---
 
 # Model Card for Model ID
@@ -17,7 +23,8 @@ tags:
 
 ### Model Description
 
-<!-- Provide a longer summary of what this model is. -->
+<!-- This model is a fine-tuned version of HuggingFaceTB/SmolLM-135M-Instruct, trained using Group Relative Policy Optimization (GRPO) with LoRA (Low-Rank Adaptation) for efficient fine-tuning.
+It was fine-tuned on the mlabonne/smoltldr dataset — a small text summarization dataset — using the Transformers, TRL, and PEFT libraries in a Colab environment. -->
 
 This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
 
@@ -33,7 +40,7 @@ This is the model card of a 🤗 transformers model that has been pushed on the
 
 <!-- Provide the basic links for the model. -->
 
-- **Repository:** [More Information Needed]
+- **Repository:** [Mhammad2023/SmolGRPO-135M](https://huggingface.co/Mhammad2023/SmolGRPO-135M)
 - **Paper [optional]:** [More Information Needed]
 - **Demo [optional]:** [More Information Needed]
 
@@ -43,13 +50,13 @@ This is the model card of a 🤗 transformers model that has been pushed on the
 
 ### Direct Use
 
-<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+<!-- This model can be used for text generation and simple summarization tasks — ideal for testing GRPO fine-tuning on small models with limited compute. -->
 
 [More Information Needed]
 
 ### Downstream Use [optional]
 
-<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+<!-- You can adapt this model to your own small text generation tasks or use it as a teaching demo for PEFT (parameter-efficient fine-tuning) and reinforcement learning techniques like GRPO. -->
 
 [More Information Needed]
 
@@ -61,7 +68,8 @@ This is the model card of a 🤗 transformers model that has been pushed on the
 
 ## Bias, Risks, and Limitations
 
-<!-- This section is meant to convey both technical and sociotechnical limitations. -->
+<!-- This model inherits biases from its base model and training data (mlabonne/smoltldr).
+Outputs may be inaccurate or reflect social biases present in the training data. -->
 
 [More Information Needed]
 
@@ -81,13 +89,15 @@ Use the code below to get started with the model.
 
 ### Training Data
 
-<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+<!-- Dataset: mlabonne/smoltldr — a small summarization dataset. -->
 
 [More Information Needed]
 
 ### Training Procedure
 
-<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+<!-- LoRA config: r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], lora_dropout=0.05
+
+Trainer: GRPOTrainer from trl -->
 
 #### Preprocessing [optional]
 
@@ -112,7 +122,7 @@ Use the code below to get started with the model.
 
 #### Testing Data
 
-<!-- This should link to a Dataset Card if possible. -->
+<!-- Same dataset mlabonne/smoltldr (train/validation split). -->
 
 [More Information Needed]