DAGGER-12B-GRPO


Model Description

DAGGER-12B-GRPO is trained with Group Relative Policy Optimization (GRPO) directly from Gemma-3-12B-Instruct, with no SFT initialization. It demonstrates that GRPO alone can learn computational graph generation, although SFT initialization yields better distractor robustness.

Highlights

  • Base → GRPO training (no SFT phase)
  • Executable reward signal: learns from format, execution, and correctness rewards
  • Ablation model: isolates the contribution of SFT initialization

Model Overview

| Attribute | Value |
|---|---|
| Base Model | Gemma-3-12B-Instruct |
| Training | GRPO (from base) |
| Parameters | 12B |
| LoRA Rank | 64 |

Performance

| Dataset | Original | +Distractor | Drop |
|---|---|---|---|
| MGSM | 67.6 | 48.4 | 19.2 |
| MSVAMP | 75.0 | 59.6 | 15.4 |

Ablation: Effect of SFT Initialization

| Initialization | MGSM (+D) | MSVAMP (+D) |
|---|---|---|
| Base → GRPO | 48.4 | 59.6 |
| SFT → GRPO | 64.0 (+15.6) | 66.8 (+7.2) |

Key Finding: SFT initialization provides crucial scaffolding that stabilizes GRPO learning and improves distractor robustness by +7.2 to +15.6 points.

Quickstart

````python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "dipta007/dagger-12B_GRPO"

# Load the tokenizer and model (auto dtype, auto device placement)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# Prompt template used at inference; the few-shot example is in Bengali (Mina has
# 122195 pens, each worth 45.6 taka; the other quantities are distractors).
USER_PROMPT_TEMPLATE = """You are an expert Bengali Math Reasoner. Your task is to solve mathematical problems by constructing a "Computational Graph".

### Graph Rules:
- `id`: Unique identifier (e.g., "n1", "n2").
- `val`: The raw number extracted from text (for input nodes).
- `op`: The operation (`add`, `sub`, `mul`, `div`, `round`, `sqrt`, `floor`, `sum`, `mean`, `ratio_split`). Use `const` for input numbers.
- `args`: List of input node IDs.
- `distractor`: Boolean (`true` / `false`). Set to `true` if the node is NOT used in the final calculation path.
- `label`: Label for the node.

### Available Operations:
- Input: `const` (Use this for all numbers found in text or constants).
- Arithmetic: `add`, `sub`, `mul`, `div`, `abs` (absolute difference).
- Logic/Stats: `sum`, `mean`, `min` (minimum), `max` (maximum).
- Rounding: `round` (nearest int), `floor` (round down), `ceil` (round up).
- Advanced: `sqrt`, `pow`, `mod` (remainder), `gcd`, `lcm`.
- Output: `identity` ("final_result" points to the answer node)

Only output a JSON graph representing the solution, nothing else. Nodes must be topologically sorted, and there must be exactly one "final_result" node that represents the final answer. One example is provided below.

### Example:
Question:
মিনার কাছে ১২২১৯৫ টা কলম আছে। রাজুর কাছে ২৫০৮৪ টা কলম আছে। মিনা রাজুর কাছে ১১২৬ টি কলম চাইল। রাজু ১০০০ টি কলম দিতে রাজি হল, কিন্তু পরে আর দিলেনা। প্রতিটি কলমের দাম ৪৫.৬ টাকা। মিনা যদি কলমগুলো বিক্রি করতে চায়, সে কত টাকা পাবে?

Output:
```json
{{
  "nodes": [
    {{"id": "n1", "op": "const", "val": 122195, "distractor": false, "label": "মিনার কলম"}},
    {{"id": "n2", "op": "const", "val": 25084, "distractor": true, "label": "রাজুর কলম"}},
    {{"id": "n3", "op": "const", "val": 1126, "distractor": true, "label": "মিনা রাজুর কাছে চাইল"}},
    {{"id": "n4", "op": "const", "val": 1000, "distractor": true, "label": "রাজু দিতে রাজি হল"}},
    {{"id": "n5", "op": "const", "val": 45.6, "distractor": false, "label": "প্রতিটি কলমের দাম"}},
    {{"id": "total_money", "op": "mul", "args": ["n1", "n5"], "distractor": false, "label": "মিনার মোট টাকা"}},
    {{"id": "final_result", "op": "identity", "args": ["total_money"], "distractor": false, "label": "চূড়ান্ত উত্তর"}}
  ]
}}```

### Your Task:

Question:
{question}

Output:
"""

# GSM8K-style question in Bengali: "Roger has 5 tennis balls. He buys 2 more cans of
# tennis balls, with 3 tennis balls per can. How many tennis balls does he have now?"
question = "রজারের 5টি টেনিস বল আছে। সে আরও 2 ক্যান টেনিস বল কিনেছে। প্রতিটি ক্যানে 3টি করে টেনিস বল আছে। তার কাছে এখন কতগুলি টেনিস বল আছে?"
prompt = USER_PROMPT_TEMPLATE.format(question=question)

messages = [
  {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

# Generate with sampling, then decode only the newly generated tokens
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.7, top_p=0.8)
response = tokenizer.decode(outputs[0][len(inputs.input_ids[0]):], skip_special_tokens=True)

print(response)
````
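
The model emits a JSON computational graph rather than a bare number. Below is a minimal sketch of how that graph could be executed to recover the final answer, continuing from `response` above. This is an illustration, not the paper's evaluation code: `execute_graph` and the brace-extraction regex are assumptions, and it relies on the prompt's guarantees that nodes are topologically sorted and that exactly one `final_result` node exists.

```python
import json
import math
import re

# Operation table mirroring the ops listed in the prompt (semantics assumed)
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "div": lambda a, b: a / b,
    "abs": lambda a, b: abs(a - b),  # absolute difference, per the prompt
    "pow": lambda a, b: a ** b,
    "mod": lambda a, b: a % b,
    "gcd": lambda a, b: math.gcd(int(a), int(b)),
    "lcm": lambda a, b: math.lcm(int(a), int(b)),
    "sum": lambda *xs: sum(xs),
    "mean": lambda *xs: sum(xs) / len(xs),
    "min": min,
    "max": max,
    "round": lambda a: round(a),
    "floor": math.floor,
    "ceil": math.ceil,
    "sqrt": math.sqrt,
    "identity": lambda a: a,
}

def execute_graph(graph: dict) -> float:
    """Evaluate a topologically sorted graph. Distractor nodes may be computed,
    but they never influence final_result because the live path ignores them."""
    values = {}
    for node in graph["nodes"]:
        if node["op"] == "const":
            values[node["id"]] = node["val"]
        else:
            values[node["id"]] = OPS[node["op"]](*[values[a] for a in node["args"]])
    return values["final_result"]

# Pull the JSON object out of the (possibly fenced) response and execute it
graph_json = re.search(r"\{.*\}", response, re.DOTALL).group(0)
print(execute_graph(json.loads(graph_json)))  # expected: 11 for the tennis-ball question
```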

Training Configuration

| Parameter | Value |
|---|---|
| Base Model | Gemma-3-12B-Instruct (no SFT) |
| LoRA Rank / Alpha | 64 / 128 |
| Global Batch Size | 32 |
| Generations per Prompt | 8 |
| Loss Type | BNPO |
| β / ε / ε_high | 0.0 / 0.2 / 0.28 |
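
For reference, here is a hedged sketch of how these settings could map onto TRL's `GRPOConfig` plus a PEFT LoRA config. The device/accumulation split and the output directory are assumptions, the exact argument set depends on the TRL version (recent releases expose `loss_type`, `beta`, `epsilon`, and `epsilon_high`), and dataset and reward wiring are omitted.

```python
from peft import LoraConfig
from trl import GRPOConfig

# LoRA adapter matching the table: rank 64, alpha 128
peft_config = LoraConfig(r=64, lora_alpha=128, task_type="CAUSAL_LM")

training_args = GRPOConfig(
    output_dir="dagger-12B-grpo",     # assumed name
    per_device_train_batch_size=8,    # 8 x 4 accumulation = global batch 32 (assumed split)
    gradient_accumulation_steps=4,
    num_generations=8,                # generations per prompt
    loss_type="bnpo",
    beta=0.0,                         # KL penalty disabled
    epsilon=0.2,                      # lower clipping range
    epsilon_high=0.28,                # asymmetric upper clip
    bf16=True,
)
```

These arguments would be passed to `GRPOTrainer` together with a reward function like the one sketched below.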

Reward Function (sketched after the list):

  • Valid JSON: +0.5
  • Successful execution: +0.5
  • Correct answer: +1.0
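
A minimal sketch of this composite reward, reusing the `execute_graph` helper from the Quickstart sketch above. It illustrates the scheme in the bullets rather than reproducing the paper's reward code; the JSON-extraction regex, the numeric tolerance, and the per-completion signature are assumptions.

```python
import json
import re

def dagger_reward(completion: str, gold_answer: float) -> float:
    """+0.5 for valid JSON, +0.5 for successful execution, +1.0 for a correct answer."""
    score = 0.0
    match = re.search(r"\{.*\}", completion, re.DOTALL)
    if match is None:
        return score
    try:
        graph = json.loads(match.group(0))  # format reward
        score += 0.5
    except json.JSONDecodeError:
        return score
    try:
        answer = execute_graph(graph)       # execution reward (helper from the Quickstart)
        score += 0.5
    except Exception:
        return score
    if abs(answer - gold_answer) < 1e-4:    # correctness reward (tolerance assumed)
        score += 1.0
    return score
```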

When to Use This Model

  • Ablation studies: understanding the contribution of SFT initialization vs. GRPO alone
  • GRPO-only scenarios: when SFT data is unavailable
  • Research: studying policy optimization for structured generation

Related Models

| Model | Training | MGSM (+D) | MSVAMP (+D) |
|---|---|---|---|
| dagger-12B_GRPO | Base → GRPO | 48.4 | 59.6 |
| dagger-12B_SFT_GRPO | SFT → GRPO | 64.0 | 66.8 |
| dagger-12B_SFT | SFT only | 56.8 | 65.4 |

Citation

```bibtex
@misc{nazi2026dagdaggerdistractorawaregraphgeneration,
      title={{\dag}DAGGER: Distractor-Aware Graph Generation for Executable Reasoning in Math Problems},
      author={Zabir Al Nazi and Shubhashis Roy Dipta and Sudipta Kar},
      year={2026},
      eprint={2601.06853},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2601.06853},
}
```