stas122 committed (verified) · Commit e858b1c · Parent: feed67b

Update README.md

Files changed (1): README.md (+2 −22)
README.md CHANGED
@@ -35,8 +35,8 @@ All examples follow a consistent format with "### Task:" instruction and "### So
 
 The fine-tuning process involved multiple stages:
 
-1. Base model: Stentor-30M pre-trained checkpoint
-2. Initial fine-tuning on 50k examples (checkpoint-1000 selected as best)
+1. Base model: Stentor-30M
+2. Initial fine-tuning on 50k examples
 3. Multiple correction rounds with progressively lower learning rates
 4. Final detoxification training with learning rate 3e-7 to remove undesirable patterns
 
@@ -100,30 +100,10 @@ outputs = model.generate(**inputs, max_new_tokens=100, temperature=0.2)
 response = tokenizer.decode(outputs[0], skip_special_tokens=True)
 ```
 
-**Hardware Requirements**
-
-- **Inference:** CPU only (no GPU required)
-- **RAM:** < 100 MB for inference
-- **Storage:** 60 MB (FP16), 30 MB (INT8 quantized)
-
 **Ethical Considerations**
 
 This model is intended for educational and development assistance purposes. Users should verify all generated code before deployment, particularly for security-sensitive applications. The model may occasionally produce incorrect or inefficient code and should not be relied upon as the sole source of truth for programming tasks.
 
-**Citation**
-
-If you use this model in your work, please cite:
-
-```
-@misc{stentor-python-30m-2026,
-  author = {Fine-tuning Experiment},
-  title = {Stentor Python 30M: A Compact Model for Python Code Generation},
-  year = {2026},
-  publisher = {Hugging Face},
-  url = {https://huggingface.co/username/stentor-python-30m}
-}
-```
-
 **Contact**
 
 For questions or feedback about this model, please open an issue on the Hugging Face repository.
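The retained text mentions "multiple correction rounds with progressively lower learning rates" ending in a detoxification pass at 3e-7, but gives no actual schedule. As a minimal sketch, assuming a geometric decay between a hypothetical starting rate of 3e-5 and the stated final rate, one round's learning rate could be derived like this (the function name, round count, and starting rate are all illustrative, not from the README):

```python
# Illustrative only: the README states a final detoxification LR of 3e-7 and
# "progressively lower" rates per round; the geometric decay and the starting
# rate of 3e-5 are assumptions for the sketch.

def correction_round_lrs(initial_lr: float, final_lr: float, rounds: int) -> list[float]:
    """Geometrically interpolate from initial_lr down to final_lr over `rounds` steps."""
    if rounds < 2:
        return [final_lr]
    ratio = (final_lr / initial_lr) ** (1.0 / (rounds - 1))
    return [initial_lr * ratio**i for i in range(rounds)]

lrs = correction_round_lrs(initial_lr=3e-5, final_lr=3e-7, rounds=5)
# Each round's LR is a constant factor lower than the previous one,
# and the last round matches the README's stated 3e-7.
```

A geometric (rather than linear) decay keeps the relative step between rounds constant, which is the usual convention for learning-rate sweeps.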
 
35
 
36
  The fine-tuning process involved multiple stages:
37
 
38
+ 1. Base model: Stentor-30M
39
+ 2. Initial fine-tuning on 50k examples
40
  3. Multiple correction rounds with progressively lower learning rates
41
  4. Final detoxification training with learning rate 3e-7 to remove undesirable patterns
42
 
 
100
  response = tokenizer.decode(outputs[0], skip_special_tokens=True)
101
  ```
102
 
 
 
 
 
 
 
103
  **Ethical Considerations**
104
 
105
  This model is intended for educational and development assistance purposes. Users should verify all generated code before deployment, particularly for security-sensitive applications. The model may occasionally produce incorrect or inefficient code and should not be relied upon as the sole source of truth for programming tasks.
106
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
107
  **Contact**
108
 
109
  For questions or feedback about this model, please open an issue on the Hugging Face repository.
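The storage figures deleted by this commit (60 MB FP16, 30 MB INT8 for a ~30M-parameter model) are consistent with simple parameter-count arithmetic. A sketch of that check, assuming "Stentor-30M" means roughly 30 million weights and ignoring small overheads such as config and tokenizer files:

```python
# Back-of-the-envelope check of the model-size figures removed in this commit.
# PARAMS is an assumption: "Stentor-30M" is taken to mean ~30 million parameters.

PARAMS = 30_000_000

def storage_mb(num_params: int, bytes_per_param: int) -> float:
    """Approximate checkpoint size in megabytes (1 MB = 1e6 bytes)."""
    return num_params * bytes_per_param / 1e6

fp16_mb = storage_mb(PARAMS, 2)  # 2 bytes per FP16 weight -> 60.0 MB
int8_mb = storage_mb(PARAMS, 1)  # 1 byte per INT8 weight  -> 30.0 MB
```

Halving bytes-per-parameter halves the checkpoint size, which is exactly the 60 MB vs 30 MB ratio the removed section reported.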