# Vidyasoniq 🎼
Vidyasoniq is a small AI model trained to act like a music teacher.
It looks at student progress and gives simple, practical advice on what to do next.
This model was built as part of a personal project to explore how AI can help make music teaching more consistent across different instruments.
The goal is not to replace teachers, but to support structured guidance based on student data.
## What it can do
- Help with Western vocals
- Help with Carnatic vocals
- Guide Guitar practice
- Guide Keyboard practice
- Guide Flute learning
It mainly focuses on:
- what the student should improve
- what to practice next
- how to structure the next class
## How it works
The model does not “know everything”.
It works best when you provide:
- student level
- lesson progress
- weak areas
It then responds like a teacher and suggests next steps.
## Example
**Input**

```
Student: Beginner Guitar
Practiced: 4 weeks
Strengths: chord transitions, strumming
Needs improvement: fingerpicking, basic theory

Question: What should this student focus on next?
```
**Output**
- Focus on basic fingerpicking patterns before increasing speed
- Practice slow picking exercises to build control
- Spend a few minutes understanding simple chord structure
Next class plan:
- Warm-up: chord switching (5 mins)
- Main: fingerpicking drills
- Next: combine picking with simple chords
## Training Details
| Parameter | Value |
|---|---|
| Base model | TinyLlama-1.1B-Chat-v1.0 |
| Method | LoRA (PEFT) |
| Dataset size | 200 instruction samples |
| Epochs | 3 |
| Batch size | 1 (gradient accumulation: 8) |
| Final training loss | 0.144 |
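
The exact training script is not part of this card, so the snippet below is only a rough sketch of what a comparable LoRA setup with `peft` and the `transformers` `Trainer` could look like. The LoRA rank, alpha, target modules, learning rate, and instruction format are illustrative assumptions; only the epochs, batch size, and gradient accumulation mirror the table above.

```python
# Rough sketch of a comparable LoRA fine-tuning setup; hyperparameters and the
# sample format below are assumptions, not values taken from this model.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # make sure a pad token is set
model = AutoModelForCausalLM.from_pretrained(base_model)

# Placeholder for the ~200 instruction samples (format is illustrative only).
samples = [{"text": "Student: Beginner Guitar\n"
                    "Question: What should this student focus on next?\n"
                    "Answer: Practice slow fingerpicking drills before increasing speed."}]
dataset = Dataset.from_list(samples).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512)
)

# Attach the LoRA adapter; r, alpha, and target modules here are illustrative.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
))

# Mirrors the table above: 3 epochs, batch size 1, gradient accumulation 8.
args = TrainingArguments(
    output_dir="vidyasoniq-lora",
    num_train_epochs=3,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,  # assumption, not documented
)

Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
model.save_pretrained("vidyasoniq-lora")
```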
## Usage
Install dependencies:

```bash
pip install transformers peft torch
```
Load and run:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter = "rameshbharathig/vidyasoniq"

# Load the base model, then apply the Vidyasoniq LoRA adapter on top of it
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, adapter)

# Generate a response to a teaching question
prompt = "What should this student focus on next?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
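
Since the model works best with structured student data (see the example above), a prompt like the one below, reusing the `tokenizer` and `model` loaded in the previous snippet, tends to give more useful answers. The exact field names and wording are not a fixed template, just one way to lay out the context.

```python
# Structured prompt with student data (field names and wording are illustrative)
student_info = (
    "Student: Beginner Guitar\n"
    "Practiced: 4 weeks\n"
    "Strengths: chord transitions, strumming\n"
    "Needs improvement: fingerpicking, basic theory\n"
    "Question: What should this student focus on next?"
)
inputs = tokenizer(student_info, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```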
## Limitations
- Trained on a small dataset (~200 samples), so responses may feel repetitive
- Works best with structured input (student data + context)
- Not designed for general conversation
- Not a replacement for a real music teacher
## Future Improvements
- Expand dataset with more student scenarios
- Improve variation in responses
- Add deeper instrument-specific guidance
- Train on a larger base model
## Creator
Built by Ramesh as part of a personal project exploring AI-assisted music learning systems.
## Framework
- PEFT
- HuggingFace Transformers