---
license: apache-2.0
base_model: microsoft/phi-2
tags:
- peft
- lora
- gsm8k
- math
- reasoning
- curriculum-learning
datasets:
- gsm8k
metrics:
- accuracy
library_name: peft
---

# phi-2 LoRA Adapter for GSM8K

This is a LoRA adapter for [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) fine-tuned on the [GSM8K dataset](https://huggingface.co/datasets/gsm8k) for mathematical reasoning.

## Model Description

- **Base Model:** microsoft/phi-2
- **Training Method:** Curriculum learning, ordered by a complexity score
- **Dataset:** GSM8K (Grade School Math 8K)
- **Task:** Mathematical word problem solving
- **Exact Match Accuracy:** 62.50%

## Training Details

This LoRA adapter for phi-2 was trained with curriculum learning using the complexity-score ordering described below. Illustrative sketches of the training configuration, curriculum scoring, and evaluation are collected in the appendix at the end of this card.

### Training Configuration

- **Method:** LoRA (Low-Rank Adaptation)
- **Rank:** 16
- **Alpha:** 32
- **Target Modules:** q_proj, k_proj, v_proj, o_proj
- **Dropout:** 0.1
- **Epochs:** 3
- **Batch Size:** 4 (with gradient accumulation of 4, for an effective batch size of 16)
- **Learning Rate:** 3e-4

### Curriculum Learning

This model was trained using curriculum learning, where the model is exposed to progressively harder problems:

1. **Easy Stage:** Simple problems with fewer steps
2. **Normal Stage:** Moderate-complexity problems
3. **Difficult Stage:** Complex multi-step problems

The curriculum order was determined by problem complexity (number of solution steps × operation complexity).

## Usage

### Loading the Adapter

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    device_map="auto",
    torch_dtype="auto",
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "CrystalRaindropsFall/phi2-gsm8k-curriculum-complexity")

# Inference
prompt = (
    "Question: Janet's ducks lay 16 eggs per day. She eats three for breakfast every "
    "morning and bakes muffins for her friends every day with four. She sells the "
    "remainder at the farmers' market daily for $2 per fresh duck egg. How much in "
    "dollars does she make every day at the farmers' market?\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### Using with Pipeline

```python
from transformers import AutoTokenizer, pipeline
from peft import AutoPeftModelForCausalLM

# AutoPeftModelForCausalLM reads the adapter config, loads the base model,
# and attaches the adapter in one call
model = AutoPeftModelForCausalLM.from_pretrained(
    "CrystalRaindropsFall/phi2-gsm8k-curriculum-complexity",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# Create pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Generate
result = pipe(
    "Question: A robe takes 2 bolts of blue fiber and half that much white fiber. "
    "How many bolts in total does it take?\nAnswer:"
)
print(result[0]["generated_text"])
```

## Performance

Evaluated on the GSM8K test set (512 samples):

| Metric | Score |
|--------|-------|
| Exact Match | 62.50% |
| Format Correct | 100% |

## Limitations

- Trained only on grade-school-level math problems
- May struggle with problems requiring external knowledge
- Performance depends on problem complexity and wording
- Best used with the base model's standard generation settings

## Acknowledgments

- Base model: microsoft/phi-2
- Dataset: GSM8K by Cobbe et al.
- Training framework: Hugging Face PEFT

## License

Apache 2.0 (following the base model license)
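
## Appendix: Illustrative Sketches

The training and evaluation scripts are not included in this repository. The sketches below follow the descriptions in this card, but every helper name, operator weight, and default that the card does not state is an assumption, not the code that produced this adapter.

### Curriculum scoring (sketch)

The curriculum section defines complexity as number of solution steps × operation complexity. GSM8K reference solutions put one reasoning step per line, wrap arithmetic in `<<...>>` calculator annotations, and end with `#### <answer>`, so a score in that spirit could be computed as below; `OP_WEIGHTS` and `complexity_score` are illustrative assumptions.

```python
import re

from datasets import load_dataset

# Illustrative operator weights; the weights actually used in training are
# not documented in this card, so these values are assumptions.
OP_WEIGHTS = {"+": 1.0, "-": 1.0, "*": 1.5, "/": 2.0}

def complexity_score(answer: str) -> float:
    """Approximate 'solution steps x operation complexity' for a GSM8K answer."""
    # One reasoning step per line; the final '#### <answer>' line is not a step.
    steps = [ln for ln in answer.splitlines() if ln.strip() and not ln.startswith("####")]
    # Arithmetic appears inside <<...>> calculator annotations, e.g. <<16-3-4=9>>.
    annotations = " ".join(re.findall(r"<<(.*?)>>", answer))
    op_complexity = sum(OP_WEIGHTS.get(op, 1.0) for op in re.findall(r"[+\-*/]", annotations))
    return len(steps) * max(op_complexity, 1.0)

train = load_dataset("gsm8k", "main", split="train")
# Sort easy -> hard, then split into the three curriculum stages.
ranked = sorted(train, key=lambda ex: complexity_score(ex["answer"]))
third = len(ranked) // 3
easy, normal, difficult = ranked[:third], ranked[third : 2 * third], ranked[2 * third :]
```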
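
### LoRA configuration (sketch)

A PEFT `LoraConfig` matching the hyperparameters listed under Training Configuration might look like this; `bias="none"` is an assumption beyond what the card states.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto")

# LoRA hyperparameters as listed under Training Configuration.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.1,
    bias="none",  # assumption: not stated in the card
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```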
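
### Exact-match evaluation (sketch)

Exact match on GSM8K is typically computed by comparing the final number in the generated completion to the gold answer after `####` in the reference solution. The harness that produced the 62.50% figure is not included; this is a minimal check in that spirit, and the number-extraction regex is an assumption.

```python
import re

def extract_final_number(text: str):
    """Return the last number in a completion, with '$' and ',' stripped."""
    matches = re.findall(r"-?\$?[\d,]*\.?\d+", text)
    return matches[-1].replace("$", "").replace(",", "") if matches else None

def exact_match(generated: str, reference: str) -> bool:
    """Compare the model's final number to the gold answer after '####'."""
    gold = reference.split("####")[-1].strip().replace(",", "")
    pred = extract_final_number(generated)
    return pred is not None and pred == gold

# Example: the gold GSM8K answer ends with '#### 18'.
print(exact_match("... so she makes $18 every day.", "... #### 18"))  # True
```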