# Qwen Adaptive Programming Tutor
## Model Details

### Model Description
The Qwen Adaptive Programming Tutor is a fine-tuned language model designed to act as a Socratic programming instructor. Instead of providing direct solutions or writing code for the user, it analyzes buggy code and offers conceptual, encouraging hints that guide students toward the correct answer. It was fine-tuned as a lightweight LoRA adapter, which keeps the memory footprint and serving latency low while maintaining high pedagogical quality.
- Developed by: Ebrahim Zaher
- Model type: Causal Language Model (LoRA Adapter)
- Language(s) (NLP): English
- License: Apache-2.0
- Finetuned from model: unsloth/Qwen2.5-1.5B-Instruct
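Because the fine-tune ships as a LoRA adapter, only two small low-rank matrices per target layer are stored rather than full weight updates. A minimal sketch of the parameter savings and the merge arithmetic, using illustrative sizes rather than this model's actual configuration:

```python
import torch

# LoRA sketch: instead of updating a full weight matrix W (d_out x d_in),
# training learns two small matrices B (d_out x r) and A (r x d_in),
# so the adapter stores far fewer parameters than the base layer.
torch.manual_seed(0)
d_out, d_in, r, alpha = 64, 64, 8, 16  # illustrative sizes only

W = torch.randn(d_out, d_in)      # frozen base weight
A = torch.randn(r, d_in) * 0.01   # LoRA "down" projection
B = torch.randn(d_out, r) * 0.01  # pretend-trained "up" projection (zero-init in real LoRA)

full_params = W.numel()
lora_params = A.numel() + B.numel()
print(f"base layer params: {full_params}, LoRA params: {lora_params}")

# At inference the adapter can be merged back into the base weight:
# W' = W + (alpha / r) * B @ A, so merged and adapter paths agree.
x = torch.randn(d_in)
merged = W + (alpha / r) * (B @ A)
adapter_out = W @ x + (alpha / r) * (B @ (A @ x))
assert torch.allclose(merged @ x, adapter_out, atol=1e-5)
```

Merging this way is what lets the adapter add no latency at serving time.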
## Uses

### Direct Use
This model is intended to be used as a backend for educational technology platforms, coding bootcamps, and IDE extensions. It takes student code and an instruction as input and outputs a short, guiding question or hint.
### Out-of-Scope Use
- Generating full, production-ready code blocks (the model is explicitly trained not to do this).
- Replacing standard debugging tools for complex, multi-file enterprise codebases.
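Since the model is trained to withhold full solutions, integrators may still want a cheap post-processing check on replies that drift into complete code. A hypothetical heuristic (`looks_like_full_solution` is not part of this repo):

```python
import re

def looks_like_full_solution(reply: str) -> bool:
    """Heuristic guard: flag replies containing a fenced code block or
    several consecutive indented lines, which suggest a full solution
    rather than a Socratic hint."""
    if "```" in reply:
        return True
    indented = re.findall(r"(?m)^(?: {4}|\t).+$", reply)
    return len(indented) >= 3
```

A flagged reply could be regenerated or replaced with a fallback hint.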
## How to Get Started with the Model
You can load this model using `transformers` and `peft`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# 1. Load the base model
base_model_name = "unsloth/Qwen2.5-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# 2. Load the LoRA adapter
adapter_repo = "ebrahimzaher/qwen_adaptive_tutor"
model = PeftModel.from_pretrained(model, adapter_repo)

# 3. Example usage (ChatML prompt format used by Qwen2.5-Instruct)
prompt = (
    "<|im_start|>user\n"
    "Analyze this buggy code and provide a Socratic hint:\n"
    "def add(a, b):\n"
    "    return a - b<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
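The ChatML prompt above can be assembled with a small helper; `build_tutor_prompt` is a hypothetical convenience function, not part of the released adapter:

```python
def build_tutor_prompt(
    student_code: str,
    instruction: str = "Analyze this buggy code and provide a Socratic hint:",
) -> str:
    """Wrap student code in the ChatML format Qwen2.5-Instruct expects."""
    return (
        "<|im_start|>user\n"
        f"{instruction}\n{student_code}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_tutor_prompt("def add(a, b):\n    return a - b")
```

Alternatively, `tokenizer.apply_chat_template` produces the same format directly from a list of chat messages.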
## Model tree for ebrahimzaher/qwen_adaptive_tutor

Base model lineage: Qwen/Qwen2.5-1.5B → Qwen/Qwen2.5-1.5B-Instruct → unsloth/Qwen2.5-1.5B-Instruct → this adapter.