---
language:
- id
- en
tags:
- math
- education
- indonesian
- small-model
- fine-tuned
license: apache-2.0
datasets:
- video-transcripts
metrics:
- accuracy
model-index:
- name: Gasing Math Teacher
  results:
  - task:
      name: Mathematical Problem Solving
      type: text-generation
    dataset:
      name: Video Transcripts
      type: generated
    metrics:
    - type: accuracy
      value: 0.85  # approximate
---

# Gasing Math Teacher - Indonesian Math Instruction Model

## Model Description

Gasing Math Teacher is a 0.5B-parameter language model fine-tuned for mathematical instruction in Indonesian. It explains mathematical concepts step by step using the "pisahkan" (separation) method, in which a number is split into easier parts before the operation is carried out.
|
### Key Features

- Trained on video transcript data
- Specializes in mathematical problem solving
- Provides detailed, step-by-step explanations in Indonesian
- Uses hands-on teaching techniques such as finger counting and concrete number separation
|
### Training Details

- Base model: Qwen/Qwen2.5-Coder-0.5B-Instruct
- Training data: video transcripts
- Fine-tuning method: LoRA (Low-Rank Adaptation)
- Number of layers: 2
- Learning rate: 1e-5
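A minimal sketch of how such a LoRA setup might look with the Hugging Face `peft` library. Only the base model and the 1e-5 learning rate come from this card; the rank, alpha, and target modules below are illustrative assumptions, not the actual training configuration.

```python
# Illustrative LoRA fine-tuning setup; rank, alpha, and target modules
# are assumptions -- only the base model and learning rate are from the card.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Coder-0.5B-Instruct")
config = LoraConfig(
    r=8,                                  # assumed LoRA rank
    lora_alpha=16,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)  # then train with learning rate 1e-5
```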
|
### Example Capabilities

The model can:

- Solve basic addition problems
- Explain calculations using the "pisahkan" (separation) method
- Break down mathematical concepts into simple, understandable steps
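As an illustration of the separation idea (not the model's actual output or code), a two-digit addition such as 27 + 38 can be "separated" into tens and ones, each part added independently, and the results recombined:

```python
def pisahkan_add(a: int, b: int) -> int:
    """Illustrative 'pisahkan' (separation) addition for two-digit numbers:
    split each number into tens and ones, add the parts, then recombine."""
    tens = (a // 10 + b // 10) * 10   # 27 + 38: tens -> 20 + 30 = 50
    ones = a % 10 + b % 10            #          ones -> 7 + 8 = 15
    return tens + ones                #          recombine -> 50 + 15 = 65

print(pisahkan_add(27, 38))  # → 65
```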
|
### Limitations

- Trained primarily on simple mathematical operations
- Performs best on Indonesian-language input
- May not generalize to complex mathematical problems
|
### Ethical Considerations

This model is intended for educational purposes and should be used responsibly.
|
## Usage

A minimal loading and inference example (the repository id `your-username/gasing-math-teacher` is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("your-username/gasing-math-teacher")
tokenizer = AutoTokenizer.from_pretrained("your-username/gasing-math-teacher")

# Ask for a step-by-step addition in Indonesian: "What is 27 + 38? Explain the steps."
prompt = "Berapa hasil dari 27 + 38? Jelaskan langkah-langkahnya."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
## License

This model is released under the Apache 2.0 license (see the `license` field in the metadata above).
|
## Citation

If you use this model, please cite:

[Include citation details]
|