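"""Gradio demo for "Curriculum Design Matters": compare base, baseline fine-tuned, and
curriculum fine-tuned LoRA adapters (PHI-2 and SmolLM2) on GSM8K math word problems."""
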
import gradio as gr
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel
import re

# ==================== CONFIGURATION ====================

# Base models
BASE_MODELS = {
    "PHI-2 (2.7B)": "microsoft/phi-2",
    "SmolLM2 (135M)": "HuggingFaceTB/SmolLM2-135M",
}

# Adapter configurations - update with your HuggingFace username
# Format: "username/repo-name" or local path
ADAPTERS = {
    "PHI-2 (2.7B)": {
        "No Fine-tuning (Base Model)": None,
        "Baseline Fine-tuned": "CrystalRaindropsFall/phi2-gsm8k-baseline",
        "Curriculum: Answer Length": "CrystalRaindropsFall/phi2-gsm8k-curriculum-answer-length",
        "Curriculum: Complexity Score": "CrystalRaindropsFall/phi2-gsm8k-curriculum-complexity",
    },
    "SmolLM2 (135M)": {
        "No Fine-tuning (Base Model)": None,
        "Baseline Fine-tuned": "CrystalRaindropsFall/smolLM2-gsm8k-baseline",
        "Curriculum: Answer Length": "CrystalRaindropsFall/smolLM2-gsm8k-curriculum-answer-length",
        "Curriculum: Complexity Score": "CrystalRaindropsFall/smolLM2-gsm8k-curriculum-complexity",
    },
}

# Sample math problems
SAMPLE_PROBLEMS = [
    "Janet's ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?",
    "A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take?",
    "Josh decides to try flipping a house. He buys a house for $80,000 and then puts in $50,000 in repairs. This increased the value of the house by 150%. How much profit did he make?",
    "James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week?",
    "A store sells pencils for $0.50 each and notebooks for $3.00 each. If Sarah buys 6 pencils and 4 notebooks, how much does she spend in total?",
    "Mike has 45 apples. He gives 1/3 of them to his friend and then buys 12 more apples. How many apples does Mike have now?",
    "A train travels 120 miles in 2 hours. At the same speed, how far will it travel in 5 hours?",
]

# ==================== MODEL LOADING ====================


class ModelCache:
    """Cache loaded models to avoid reloading"""

    def __init__(self):
        self.current_base = None
        self.current_adapter = None
        self.model = None
        self.tokenizer = None
        self.pipe = None

    def load_model(self, base_model_name, adapter_path=None):
        """Load model with optional adapter"""
        cache_key = f"{base_model_name}_{adapter_path}"
        current_key = f"{self.current_base}_{self.current_adapter}"

        # Return cached if same
        if cache_key == current_key and self.pipe is not None:
            return self.pipe

        # Clear old model
        if self.model is not None:
            del self.model
            del self.tokenizer
            del self.pipe
            torch.cuda.empty_cache()

        print(f"Loading {base_model_name}...")

        # Load tokenizer
        tokenizer = AutoTokenizer.from_pretrained(base_model_name)
        if tokenizer.pad_token is None:
            tokenizer.pad_token = tokenizer.eos_token
            tokenizer.pad_token_id = tokenizer.eos_token_id
        tokenizer.padding_side = "left"

        # Load base model (fp16 on GPU; fall back to fp32 on CPU, where fp16 support is limited)
        model = AutoModelForCausalLM.from_pretrained(
            base_model_name,
            device_map="auto",
            torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
        )

        # Load adapter if specified; PeftModel.from_pretrained accepts either a local
        # directory or a Hugging Face Hub repo ID, so one code path covers both.
        if adapter_path:
            print(f"Loading adapter from {adapter_path}...")
            try:
                model = PeftModel.from_pretrained(model, adapter_path)
            except Exception as e:
                print(f"Warning: Could not load adapter from {adapter_path}: {e}")
                print("Using base model only")

        # Create pipeline
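        # (the generation kwargs below are only defaults; solve_math_problem overrides
        # max_new_tokens / do_sample / temperature on each request)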
        pipe = pipeline(
            "text-generation",
            model=model,
            tokenizer=tokenizer,
            max_new_tokens=512,
            do_sample=False,  # Deterministic for math
            pad_token_id=tokenizer.pad_token_id,
        )

        # Cache
        self.current_base = base_model_name
        self.current_adapter = adapter_path
        self.model = model
        self.tokenizer = tokenizer
        self.pipe = pipe

        return pipe


# Global cache
model_cache = ModelCache()
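
# Optional warm-up (hypothetical usage): preloading the smallest model at startup would
# make the first request faster, e.g.
#   model_cache.load_model(BASE_MODELS["SmolLM2 (135M)"])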

# ==================== HELPER FUNCTIONS ====================


def extract_answer(text):
    """Extract the final numerical answer from generated text.

    GSM8K-style solutions end with a "#### <number>" marker, e.g.
    "... She makes 9 * 2 = 18 dollars every day. #### 18" -> "18".
    """
    # Drop thousands separators so values like "1,000" are captured as one number
    text = text.replace(",", "")

    # Look for the GSM8K-style "#### <answer>" marker first
    match = re.search(r"####\s*(-?\d+\.?\d*)", text)
    if match:
        return match.group(1).rstrip(".")

    # Fallback: take the last number appearing anywhere in the text
    numbers = re.findall(r"-?\d+\.?\d*", text)
    if numbers:
        return numbers[-1].rstrip(".")

    return "No answer found"


def format_solution(generated_text, question):
    """Format the solution for display"""
    # Remove the question from the output (model echoes it)
    solution = generated_text.replace(f"Question: {question}\nAnswer:", "").strip()

    # Extract answer
    final_answer = extract_answer(generated_text)

    return solution, final_answer


# ==================== GRADIO INTERFACE ====================


def solve_math_problem(base_model, adapter_choice, question, max_tokens, temperature):
    """Main function to solve math problems"""
    try:
        # Get model path
        base_model_path = BASE_MODELS[base_model]
        adapter_path = ADAPTERS[base_model].get(adapter_choice)

        # Load model
        pipe = model_cache.load_model(base_model_path, adapter_path)

        # Format prompt
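        # (this plain "Question: ... Answer:" template is what format_solution() later
        # strips back out of the generated text before display)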
        prompt = f"Question: {question}\nAnswer:"

        # Generate (greedy decoding when temperature is 0, sampling otherwise)
        outputs = pipe(
            prompt,
            max_new_tokens=int(max_tokens),
            do_sample=temperature > 0,
            temperature=temperature if temperature > 0 else None,
        )

        generated_text = outputs[0]["generated_text"]

        # Format output
        solution, final_answer = format_solution(generated_text, question)

        # Create formatted output
        output = f"""### Solution Steps:
{solution}

### Final Answer: **{final_answer}**
"""
        return output

    except Exception as e:
        return f"โŒ Error: {str(e)}\n\nPlease check that the model and adapter are correctly loaded."


def update_adapter_choices(base_model):
    """Update adapter dropdown based on selected base model"""
    adapters = list(ADAPTERS[base_model].keys())
    return gr.Dropdown(choices=adapters, value=adapters[0])


def load_sample_problem(sample_idx):
    """Load a sample problem"""
    if sample_idx is None or sample_idx >= len(SAMPLE_PROBLEMS):
        return SAMPLE_PROBLEMS[0]
    return SAMPLE_PROBLEMS[sample_idx]


# ==================== BUILD INTERFACE ====================


def create_demo():
    """Create the Gradio interface"""

    with gr.Blocks(
        theme=gr.themes.Soft(), title="Curriculum Design Matters: Math Reasoning Demo"
    ) as demo:
        gr.Markdown(
            """
# 🎓 Curriculum Design Matters: Training LLMs for Math Reasoning

<div style="font-size: 1.2em; line-height: 1.6;">

Compare how different training strategies affect mathematical reasoning in language models.

**Key Finding:** Not all curricula are equal; wrong curriculum design can hurt performance!

</div>
        """,
            elem_classes="header",
        )

        with gr.Row():
            with gr.Column():
                question_input = gr.Textbox(
                    lines=5,
                    placeholder="Enter a math word problem here...",
                    label="Enter Your Math Problem",
                    value=SAMPLE_PROBLEMS[0],
                    show_label=True,
                )

                with gr.Accordion("๐Ÿ“š Or Choose a Sample Problem", open=False):
                    sample_dropdown = gr.Dropdown(
                        choices=[
                            f"Sample {i + 1}: {prob[:50]}..."
                            for i, prob in enumerate(SAMPLE_PROBLEMS)
                        ],
                        value=f"Sample 1: {SAMPLE_PROBLEMS[0][:50]}...",
                        label="Sample Problems",
                        scale=3,
                    )
                    load_sample_btn = gr.Button("📥 Load Selected Sample", size="sm")

                solve_btn = gr.Button("🧮 Solve Problem", variant="primary", size="lg")

                gr.Markdown("### ๐Ÿ’ก Solution")

                output_text = gr.Markdown(
                    value="*Solution will appear here after you click 'Solve Problem'...*",
                    label="Generated Solution",
                )

                gr.Markdown("### โš™๏ธ Model Selection")

                base_model = gr.Dropdown(
                    choices=list(BASE_MODELS.keys()),
                    value=list(BASE_MODELS.keys())[0],
                    label="Base Model",
                    info="Choose the foundation model",
                )

                adapter_choice = gr.Dropdown(
                    choices=list(ADAPTERS[list(BASE_MODELS.keys())[0]].keys()),
                    value=list(ADAPTERS[list(BASE_MODELS.keys())[0]].keys())[0],
                    label="Fine-tuning Strategy",
                    info="Choose training method",
                )

                with gr.Accordion("๐ŸŽ›๏ธ Advanced Settings", open=False):
                    max_tokens = gr.Slider(
                        minimum=128,
                        maximum=512,
                        value=256,
                        step=32,
                        label="Max New Tokens",
                        info="Maximum length of solution",
                    )

                    temperature = gr.Slider(
                        minimum=0.0,
                        maximum=1.0,
                        value=0.0,
                        step=0.1,
                        label="Temperature",
                        info="0 = deterministic, >0 = creative",
                    )

        # ==================== EVENT HANDLERS ====================

        # Update adapters when base model changes
        base_model.change(
            fn=update_adapter_choices, inputs=[base_model], outputs=[adapter_choice]
        )

        # Load sample problem
        def load_sample_fn(sample_name):
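            # Dropdown values look like "Sample 3: <first 50 chars>..."; recover the index from the label.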
            idx = int(sample_name.split()[1].split(":")[0]) - 1
            return SAMPLE_PROBLEMS[idx]

        load_sample_btn.click(
            fn=load_sample_fn, inputs=[sample_dropdown], outputs=[question_input]
        )

        # Solve problem
        solve_btn.click(
            fn=solve_math_problem,
            inputs=[
                base_model,
                adapter_choice,
                question_input,
                max_tokens,
                temperature,
            ],
            outputs=[output_text],
        )

        # ==================== BOTTOM INFO ====================

        gr.Markdown("---")

        with gr.Accordion("๐Ÿ“Š Experimental Results & Key Findings", open=False):
            gr.Markdown("""
### Results Summary

**PHI-2 (2.7B Parameters):**
- Baseline: 60.16% accuracy
- Curriculum (Answer Length): 59.38% (-0.78%) ❌
- Curriculum (Complexity Score): 62.50% (+2.34%) ✅

**SmolLM2 (135M Parameters):**
- Baseline: 2.15% accuracy
- Curriculum (Answer Length): 2.73% (+0.58%)
- Curriculum (Complexity Score): 2.93% (+0.78%)

### Key Insights

1. **Curriculum design is critical** - Wrong curriculum hurts performance
2. **Complexity matters more than length** - Steps × operations beats simple answer length
3. **Model size affects benefits** - Larger models benefit more from curriculum learning
4. **Progressive difficulty works** - Easy → Normal → Difficult stages improve learning
            """)

        with gr.Accordion("๐Ÿ“š Training Methods Explained", open=False):
            gr.Markdown("""
**No Fine-tuning:** Base model without any training on GSM8K

**Baseline Fine-tuned:** Standard fine-tuning on all problems at once
- All difficulty levels mixed together
- 3 epochs on full dataset

**Curriculum: Answer Length:** Progressive training based on solution length
- Stage 1 (Easy): Short solutions (< 100 chars)
- Stage 2 (Normal): Medium solutions (100-200 chars)
- Stage 3 (Difficult): Long solutions (> 200 chars)
- Result: Performance decreased! ❌

**Curriculum: Complexity Score:** Progressive training based on a steps × operations score (see the sketch below)
- Stage 1 (Easy): Few steps, simple operations
- Stage 2 (Normal): Moderate complexity
- Stage 3 (Difficult): Many steps, complex operations
- Result: Performance improved! ✅
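
One plausible way such a "steps × operations" score could be computed (illustrative only; the actual scoring code is not part of this demo):

```python
import re

def complexity_score(solution: str) -> int:
    # Hypothetical heuristic, not the exact metric used in training:
    # count reasoning steps as non-empty lines and operations as arithmetic symbols.
    steps = [line for line in solution.strip().splitlines() if line.strip()]
    operations = len(re.findall(r"[+*/=-]", solution))
    return len(steps) * max(operations, 1)
```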
            """)

        with gr.Accordion("โ„น๏ธ About This Demo", open=False):
            gr.Markdown("""
### Technical Details

**Models:**
- PHI-2: 2.7B parameter model by Microsoft
- SmolLM2: 135M parameter compact model by HuggingFace

**Dataset:** GSM8K (Grade School Math 8K) - 7,473 training and 1,319 test elementary school math word problems

**Training Method:** LoRA (Low-Rank Adaptation) fine-tuning (see the configuration sketch after this list)
- Rank: 16, Alpha: 32
- Target modules: q_proj, k_proj, v_proj, o_proj
- 3 epochs per curriculum stage
- Learning rate: 3e-4
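
A minimal sketch of the corresponding PEFT LoRA configuration (illustrative; the dropout value and other unstated settings are assumptions, not the exact training script):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                  # LoRA rank
    lora_alpha=32,         # scaling factor
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,     # assumed; the dropout used in training is not stated above
    bias="none",
    task_type="CAUSAL_LM",
)
```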

**Evaluation:** Exact match accuracy on GSM8K test set

### Links & Resources

🔗 [GitHub Repository](#) | [Blog Post](#) | [Paper](#) | [Adapters on HuggingFace](#)

### Note

โš ๏ธ Models are loaded on-demand and cached in memory. First inference may take 30-60 seconds.

Models run on GPU if available, otherwise CPU (slower).
            """)

    return demo


# ==================== MAIN ====================

if __name__ == "__main__":
    demo = create_demo()
    demo.launch(
        share=True,  # Create a public share link
        server_name="0.0.0.0",  # Allow external access
        server_port=7860,
        show_error=True,
    )