---
library_name: peft
base_model: unsloth/Qwen2-1.5B-Instruct-bnb-4bit
tags:
- sft
- lora
- qwen2
- adaptive-learning
- multi-level
- cmrl
license: apache-2.0
---

# CMRL Adaptive Generator

Multi-level adaptive generator trained with SFT for the C-MRL project.

## Model Description

Adapts explanations to different difficulty levels:

- **Novice**: 6th-grade level, simple analogies
- **Intermediate**: College-student level
- **Expert**: Technical/professional explanations

## Training Details

| Parameter | Value |
|-----------|-------|
| Base Model | unsloth/Qwen2-1.5B-Instruct-bnb-4bit |
| LoRA Rank | 16 |
| LoRA Alpha | 32 |
| Target Modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Epochs | 3 |
| Learning Rate | 0.0001 |
| Final Train Loss | 1.5193 |
| Final Eval Loss | 0.0000 |

## Team

**Team kats** - IIIT Hyderabad

## Usage

```python
from unsloth import FastLanguageModel

# Load the LoRA adapter on top of the 4-bit base model
model, tokenizer = FastLanguageModel.from_pretrained(
    "Ishaank18/cmrl-adaptive-generator",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)

messages = [
    {"role": "system", "content": "You are an adaptive tutor."},
    {"role": "user", "content": "Explain photosynthesis to a 6th grader."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
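To target a specific difficulty level, the level can be encoded in the system prompt. The exact wording below is an assumption for illustration, not the prompt the adapter was trained on; adjust it to match your training data.

```python
# Hypothetical level-specific system prompts; the precise phrasing used
# during training may differ.
LEVEL_PROMPTS = {
    "novice": "You are an adaptive tutor. Explain at a 6th-grade level using simple analogies.",
    "intermediate": "You are an adaptive tutor. Explain at a college-student level.",
    "expert": "You are an adaptive tutor. Give a technical, professional explanation.",
}

def build_messages(level: str, question: str) -> list:
    """Return a chat-template-ready message list for the given level."""
    if level not in LEVEL_PROMPTS:
        raise ValueError(f"Unknown level {level!r}; expected one of {sorted(LEVEL_PROMPTS)}")
    return [
        {"role": "system", "content": LEVEL_PROMPTS[level]},
        {"role": "user", "content": question},
    ]
```

The resulting list can be passed directly to `tokenizer.apply_chat_template(...)` as in the Usage example above, e.g. `build_messages("expert", "Explain photosynthesis.")`.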