---
language: en
license: mit
tags:
- text-generation
- study-helper
- simplification
- educational
---

# SmolLM3 Study Helper

**SmolLM3 Study Helper** is a fine-tuned version of the SmolLM3-3B model designed to **simplify complex concepts** into easy-to-understand English explanations.

It was trained on 200+ pairs of “complex → simple” examples covering science, technology, math, and general knowledge.
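For illustration, one training pair might look like the sketch below. The field names and the prompt template are assumptions made for this card, not the published dataset schema:

```python
# Hypothetical shape of one "complex -> simple" training pair.
# The real dataset schema is not published; field names are assumptions.
example = {
    "complex": "Photosynthesis converts light energy into chemical energy "
               "stored in glucose via light-dependent and light-independent reactions.",
    "simple": "Plants use sunlight to turn water and air into sugar they can use as food.",
}

# A prompt/target pair for fine-tuning could be built like this:
prompt = f"Simplify this: {example['complex']}"
target = example["simple"]
print(prompt)
print(target)
```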

---

## 🔹 Model Overview

- **Base model:** SmolLM3-3B
- **Fine-tuning method:** LoRA (parameter-efficient fine-tuning)
- **Dataset:** 200+ hand-crafted complex-to-simple explanation pairs
- **Purpose:** Educational, study aid, concept simplification
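As a rough sketch, a LoRA fine-tune of this kind is typically configured with the `peft` library as shown below. The actual rank, alpha, and target modules used for this model are not published, so every value here is an assumption:

```python
from peft import LoraConfig

# Illustrative LoRA settings -- NOT the values used for this model.
lora_config = LoraConfig(
    r=16,                                 # low-rank dimension (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```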

---

## 🔹 How to Use


You can test the model directly on Hugging Face using the **text-generation widget**, or in Python:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the fine-tuned model and tokenizer
model = AutoModelForCausalLM.from_pretrained("Alihallaba/smol3-simple-helper")
tokenizer = AutoTokenizer.from_pretrained("Alihallaba/smol3-simple-helper")

# Create a pipeline for text generation
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=80,
)

# Example usage
prompt = "Simplify this: Explain black holes"
output = pipe(prompt)[0]["generated_text"]
print(output)
```
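Note that `text-generation` pipelines return the prompt followed by the completion in `generated_text`. A small helper (not part of the model's API, just an illustration) can strip the prompt before display:

```python
def extract_completion(prompt: str, generated_text: str) -> str:
    """Return only the newly generated portion of a pipeline output."""
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):].lstrip()
    return generated_text

# Example with a mocked pipeline output:
full = (
    "Simplify this: Explain black holes "
    "A black hole is a region of space where gravity is so strong "
    "that nothing, not even light, can escape."
)
print(extract_completion("Simplify this: Explain black holes", full))
```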