MarcCote committed on
Commit 7a276c9 · verified · 1 Parent(s): 45798ae

Update README

Files changed (1):
  1. README.md +73 -64
README.md CHANGED
## Model overview

FrogMini is built on the Qwen3-14B transformer architecture with a maximum context length of 64k tokens. The model supports multi-turn debugging workflows and complex code reasoning. Unlike general-purpose LLMs, FrogMini is specialized for software engineering tasks.

The training procedure consists of supervised fine-tuning (SFT) on successful debugging trajectories produced by a strong teacher model (e.g., claude-sonnet-4). Those trajectories were obtained from a mix of real-world and synthetic bug datasets (e.g., R2E-Gym, SWE-Smith) and high-quality FeatAdd bugs generated through the BugPilot framework. This approach ensures the model learns realistic debugging patterns rather than trivial fixes. Compared to other open-weight models, FrogMini stands out for its parameter efficiency, achieving state-of-the-art performance on [SWE-Bench Verified](https://www.swebench.com/) (Pass@1: 45.0%) with 14B parameters, and for its emphasis on realistic, multi-file debugging scenarios, making it more robust for real-world coding environments.
### Alignment approach

The model was trained to align with the intended behavior of producing accurate bug identification and code patches by curating high-quality trajectories and removing failure patterns. Specifically, all unsuccessful debugging attempts were excluded from the training data to prevent reinforcement of ineffective strategies. Additionally, safeguards were applied to the teacher model to avoid "cheating" by dropping tasks that were failing. This ensures that the model consistently learns from successful problem-solving examples and produces reliable bug identification and code fix proposals aligned with developer expectations.
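The curation step above (keeping only successful trajectories) can be sketched as a simple filter. This is an illustrative sketch only; the record fields (`task_id`, `resolved`, `messages`) are hypothetical placeholders, not the actual schema used to train FrogMini.

```python
# Sketch of SFT-data curation: keep only successful debugging trajectories.
# Field names ("task_id", "resolved", "messages") are hypothetical placeholders.

def filter_trajectories(trajectories):
    """Drop unsuccessful attempts so SFT never reinforces failing strategies."""
    return [t for t in trajectories if t.get("resolved", False)]

trajectories = [
    {"task_id": "r2e-001", "resolved": True,  "messages": ["..."]},
    {"task_id": "r2e-002", "resolved": False, "messages": ["..."]},  # excluded
    {"task_id": "swe-003", "resolved": True,  "messages": ["..."]},
]

sft_data = filter_trajectories(trajectories)
print([t["task_id"] for t in sft_data])  # → ['r2e-001', 'swe-003']
```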
## Usage

### Primary use cases

FrogMini is intended for software engineering and debugging tasks in controlled research environments, excelling at multi-turn reasoning, code repair, and feature-level bug resolution across complex repositories. It is optimized for scenarios such as automated bug fixing.

**Intended Uses**

- Debugging and repairing code in controlled environments.
- Automated resolution of software bugs across multi-file repositories.
- Research and development of agentic workflows for software engineering.
### Out-of-scope use cases

FrogMini has several limitations and constraints that users should be aware of. While it excels at debugging and multi-file code reasoning, it is restricted to text-based inputs and outputs and cannot process or generate images, audio, or video. The model may struggle with highly domain-specific codebases outside its training distribution and can produce incorrect or incomplete fixes if prompts are ambiguous. It is not designed for general-purpose text generation or tasks unrelated to software engineering.

Prohibited uses include generating harmful or insecure code, engaging in activities that violate legal or ethical standards, producing disallowed content (e.g., sexual, violent, hateful), or using the model for tasks unrelated to software development or outside a research setting.

### Distribution channels

Model weights are available on HuggingFace.
### Input formats

Given the nature of the training data, FrogMini is best suited for prompts using the chat format as follows:

```json
[
  {
    "role": "system",
    "content": "The system prompt, followed by the list of descriptions of available functions, and a templated function call example."
  },
  {
    "role": "user",
    "content": "The first user prompt, which includes a paragraph describing the problem statement, and a list of general instructions on bug fixing tasks."
  },
  ...,
  {
    "role": "assistant",
    "content": "The reasoning content generated by the agent in the previous step.
<function=the_called_function_name>
<parameter=example_parameter_1>value_1</parameter>
</function>"
  },
  {
    "role": "user",
    "content": "The new observation returned from the environment in response to the agent's previous function call."
  }
]
```
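The templated tool-call markup in assistant messages can be recovered with a small regex parser. The helper below is illustrative only, not part of the official R2E-Gym scaffolding, and the function name `execute_bash` in the example is a hypothetical tool name.

```python
import re

# Illustrative parser for the templated tool-call markup shown above;
# not part of the official R2E-Gym scaffolding.

def parse_function_call(content: str):
    """Extract the function name and parameters from an assistant message."""
    fn = re.search(r"<function=([^>]+)>(.*?)</function>", content, re.DOTALL)
    if fn is None:
        return None  # message contains no tool call
    name, body = fn.group(1), fn.group(2)
    params = dict(re.findall(r"<parameter=([^>]+)>(.*?)</parameter>", body, re.DOTALL))
    return {"function": name, "parameters": params}

message = (
    "Some reasoning text.\n"
    "<function=execute_bash>\n"  # hypothetical tool name, for illustration
    "<parameter=command>python -m pytest tests/</parameter>\n"
    "</function>"
)
print(parse_function_call(message))
```

The parser returns `None` when no `<function=...>` block is present, so the caller can distinguish plain reasoning turns from tool-call turns.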
#### Using R2E-Gym Agent's scaffolding

Clone the R2E-Gym repository and install its dependencies:

```bash
git clone https://github.com/R2E-Gym/R2E-Gym.git
cd R2E-Gym
pip install -e .
```

**Serving the model**

The recommended way to serve FrogMini-14B-2510 is with vLLM:

```bash
vllm serve microsoft/FrogMini-14B-2510 --tensor-parallel-size 4 \
  --enable-prefix-caching \
  --gpu-memory-utilization 0.9 \
  --max-model-len 65536 \
  --hf-overrides '{"max_position_embeddings": 65536}'
```
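Since vLLM exposes an OpenAI-compatible endpoint, the served model can also be queried directly with standard chat-completions requests. The sketch below only constructs the request payload; the endpoint URL and sampling settings are illustrative assumptions, and no network call is made.

```python
import json

# Build a chat-completions payload for the vLLM OpenAI-compatible endpoint.
# BASE_URL and sampling settings are illustrative; no request is sent here.
BASE_URL = "http://127.0.0.1:8000/v1"

payload = {
    "model": "microsoft/FrogMini-14B-2510",
    "messages": [
        {"role": "system", "content": "You are a debugging agent."},
        {"role": "user", "content": "The test suite fails with a KeyError in utils.py. Find and fix the bug."},
    ],
    "temperature": 0.0,
    "max_tokens": 2048,
}

body = json.dumps(payload)
print(f"POST {BASE_URL}/chat/completions")
print(body[:80])
```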
**Example code snippet**

```python
import os
from pathlib import Path

from datasets import load_dataset
from r2egym.agenthub.environment.env import EnvArgs, RepoEnv
from r2egym.agenthub.agent.agent import AgentArgs, Agent

# Load the SWE-Bench Verified split and pick one task environment.
ds = load_dataset("R2E-Gym/SWE-Bench-Verified")["test"]
env_index = 100  # index of the environment, in [0, len(ds))
env_args = EnvArgs(ds=ds[env_index])
env = RepoEnv(env_args)

# Point the agent at the vLLM endpoint serving FrogMini.
agent_args = AgentArgs.from_yaml(Path('./src/r2egym/agenthub/config/r2egym/edit_non_fn_calling.yaml'))
os.environ["LLM_BASE_URL"] = "http://127.0.0.1:8000/v1"
agent_args.llm_name = 'hosted_vllm/microsoft/FrogMini-14B-2510'
agent = Agent(name="EditingAgent", args=agent_args)
output = agent.run(env, max_steps=40, use_fn_calling=False)
```
### Responsible AI considerations

The model may struggle with highly domain-specific codebases outside its training distribution and can produce incorrect or incomplete fixes if prompts are ambiguous. It is not designed for general-purpose text generation or tasks unrelated to software engineering. Users should keep these limitations in mind when choosing a use case.

**Best Practices**

- Always validate generated code for security and correctness.
- Use the model in environments with proper monitoring, guardrails, and sandboxing.
## Data overview

### Training, testing, and validation datasets

The training data consists of a collection of 9k debugging trajectories (i.e., sequences of tool calls and code generations) produced by a strong teacher model (e.g., claude-sonnet-4). Those trajectories were obtained from a mix of real-world bugs (R2E-Gym), synthetic bugs (e.g., SWE-Smith), and high-quality FeatAdd bugs generated with the BugPilot framework. The composition of the dataset ensures the model learns realistic debugging patterns rather than trivial fixes.

## Quality and performance evaluation

FrogMini was evaluated on SWE-Bench Verified (500 problems) using the R2E-Gym agent scaffolding.
### Long context

Our models don't support long context.

### Safety evaluation and red-teaming

The primary mode of failure for FrogMini occurs when, given a buggy codebase and a task statement describing the problem (similar to a GitHub issue), the model generates a code patch that attempts to fix the bug but is incorrect. Users should be aware that the output patch might not resolve the issue or could introduce new errors.

## Acknowledgement

Our training used LLaMA-Factory, an open-source LLM fine-tuning library.<br>
Our model is trained on top of Qwen/Qwen3-14B.<br>
Our model has been optimized for R2E-Gym's agent scaffolding.