aerodynamics21 committed
Commit 65db9b3 · verified · 1 Parent(s): 522f479

Update README.md

Files changed (1):
  1. README.md +1 -154
README.md CHANGED
@@ -1,17 +1,3 @@
- ---
- library_name: transformers
- tags:
- - education
- - croatian
- - qwen2
- - fine-tuned
- - study-assistant
- language:
- - hr
- - en
- license: apache-2.0
- base_model: Qwen/Qwen2.5-32B-Instruct
- ---

  # StudyPal-LLM-1.0

@@ -120,143 +106,4 @@
  aerodynamics21

  Model Card Contact
- 
- For questions about this model, please visit the repository or create an issue.
- 
- **To update your model card:**
- 
- 1. **Go to your model page:** https://huggingface.co/aerodynamics21/StudyPal-LLM-1.0
- 2. **Click "Edit model card"**
- 3. **Replace the content with the above**
- 4. **Save changes**
- 
- **Note:** The adapter (StudyPal-LLM-1) is already merged into this model, so users don't need to load it separately. The adapter is mentioned in the model sources for reference.
- 
- > YAML Metadata Warning: empty or missing yaml metadata in repo card (https://huggingface.co/docs/hub/model-cards#model-card-metadata)
- 
- ● The YAML metadata section is missing the closing ---. Here's the corrected model card:
- 
- ---
- library_name: transformers
- tags:
- - education
- - croatian
- - qwen2
- - fine-tuned
- - study-assistant
- language:
- - hr
- - en
- license: apache-2.0
- base_model: Qwen/Qwen2.5-32B-Instruct
- ---
- 
- # StudyPal-LLM-1.0
- 
- A fine-tuned Croatian educational assistant based on Qwen2.5-32B-Instruct, designed to help students with learning and study materials.
- 
- ## Model Details
- 
- ### Model Description
- 
- StudyPal-LLM-1.0 is a large language model fine-tuned specifically for educational purposes in Croatian. The model excels at generating educational content, answering study questions, creating flashcards, and providing learning assistance.
- 
- - **Developed by:** aerodynamics21
- - **Model type:** Causal Language Model
- - **Language(s):** Croatian (primary), English (secondary)
- - **License:** Apache 2.0
- - **Finetuned from model:** Qwen/Qwen2.5-32B-Instruct
- - **Parameters:** 32.8B
- 
- ### Model Sources
- 
- - **Repository:** https://huggingface.co/aerodynamics21/StudyPal-LLM-1.0
- - **Base Model:** https://huggingface.co/Qwen/Qwen2.5-32B-Instruct
- - **Adapter:** https://huggingface.co/aerodynamics21/StudyPal-LLM-1
- 
- ## Uses
- 
- ### Direct Use
- 
- This model is designed for educational applications:
- - Generating study materials in Croatian
- - Creating flashcards and quiz questions
- - Providing explanations of complex topics
- - Assisting with homework and learning
- 
- ### Usage Examples
- 
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
- 
- model = AutoModelForCausalLM.from_pretrained("aerodynamics21/StudyPal-LLM-1.0")
- tokenizer = AutoTokenizer.from_pretrained("aerodynamics21/StudyPal-LLM-1.0")
- 
- # Generate educational content
- prompt = "Objasni koncept fotosinteze:"  # "Explain the concept of photosynthesis:"
- inputs = tokenizer(prompt, return_tensors="pt")
- outputs = model.generate(**inputs, max_length=200, temperature=0.7)
- response = tokenizer.decode(outputs[0], skip_special_tokens=True)
- ```
- 
- #### API Usage
- 
- ```python
- import requests
- 
- API_URL = "https://api-inference.huggingface.co/models/aerodynamics21/StudyPal-LLM-1.0"
- headers = {"Authorization": f"Bearer {your_token}"}
- 
- def query(payload):
-     response = requests.post(API_URL, headers=headers, json=payload)
-     return response.json()
- 
- output = query({"inputs": "Stvori kviz o hrvatskoj povijesti:"})  # "Create a quiz about Croatian history:"
- ```
- 
- ## Training Details
- 
- ### Training Data
- 
- The model was fine-tuned on a Croatian educational dataset containing:
- - Educational conversations and Q&A pairs
- - Flashcard datasets
- - Quiz and summary materials
- - Croatian academic content
- 
- ### Training Procedure
- 
- - Base Model: Qwen2.5-32B-Instruct
- - Training Method: LoRA (Low-Rank Adaptation)
- - Training Framework: Transformers + PEFT
- - Hardware: RunPod GPU instance
- 
- ## Evaluation
- 
- The model demonstrates strong performance in:
- - Croatian language comprehension and generation
- - Educational content creation
- - Study material generation
- - Academic question answering
- 
- ## Bias, Risks, and Limitations
- 
- - Primary focus on Croatian educational content
- - May reflect biases present in training data
- - Best suited for educational contexts
- - Performance may vary on non-educational tasks
- 
- ## Citation
- 
- @model{studypal-llm-1.0,
-   title={StudyPal-LLM-1.0: A Croatian Educational Assistant},
-   author={aerodynamics21},
-   year={2024},
-   url={https://huggingface.co/aerodynamics21/StudyPal-LLM-1.0}
- }
- 
- ## Model Card Authors
- 
- aerodynamics21
- 
- ## Model Card Contact
- 
- For questions about this model, please visit the repository or create an issue.
+ ---
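The warning quoted in the removed text traces back to frontmatter the Hub cannot parse, which is why the commit appends a `---` delimiter. The check itself is mechanical; a minimal sketch, assuming a card is valid only when the opening `---` on line 1 is matched by a later closing `---` (the helper name is illustrative, not part of any Hub tooling):

```python
def has_closed_frontmatter(text: str) -> bool:
    """Return True if the card opens with '---' and closes that block on a later line."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return False  # no frontmatter block at all
    # any later bare '---' line terminates the frontmatter
    return any(line.strip() == "---" for line in lines[1:])

broken = "---\nlicense: apache-2.0\n# StudyPal-LLM-1.0"
fixed = broken + "\n---"
print(has_closed_frontmatter(broken), has_closed_frontmatter(fixed))  # prints: False True
```

Note this only checks delimiter balance; the Hub additionally requires the enclosed block to be valid, non-empty YAML.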
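The removed card's inference-API snippet referenced an undefined `your_token` and sent the token via a module-level header dict. A self-contained sketch of the same request pattern using only the standard library (the endpoint URL is the one given in the removed card; whether hosted inference actually serves this 32B model is not guaranteed):

```python
import json
import urllib.request

# Endpoint URL as given in the removed model card.
API_URL = "https://api-inference.huggingface.co/models/aerodynamics21/StudyPal-LLM-1.0"

def query(payload: dict, token: str) -> dict:
    """POST a JSON payload to the hosted inference endpoint and return the parsed reply."""
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=60) as response:
        return json.loads(response.read().decode("utf-8"))
```

Usage mirrors the card's example: `query({"inputs": "Stvori kviz o hrvatskoj povijesti:"}, token=...)` ("Create a quiz about Croatian history:"), where `token` is a valid Hugging Face access token you supply.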