---
library_name: transformers
pipeline_tag: text-generation
license: mit
language:
- en
base_model:
- Qwen/Qwen2.5-72B-Instruct
tags:
- agent
- open-source
- miromind
- deep-research
---

<div align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/68525b342230a897a65cc1c0/87mYQ_a-4jpnMkVR4hrgm.png" width="55%" alt="MiroThinker" />
</div>

<div align="center">

[![Demo](https://img.shields.io/badge/Demo-FFB300?style=for-the-badge&logo=airplayvideo&logoColor=white)](https://dr.miromind.ai/)
[![Models](https://img.shields.io/badge/Models-5EDDD2?style=for-the-badge&logo=huggingface&logoColor=ffffff&labelColor)](https://huggingface.co/collections/miromind-ai/mirothinker-v10)
[![Paper](https://img.shields.io/badge/Paper-B31B1B?style=for-the-badge&logo=arxiv&logoColor=white)](https://arxiv.org/abs/2511.11793)

[![Github](https://img.shields.io/badge/GitHub-24292F?style=for-the-badge&logo=github&logoColor=white)](https://github.com/MiroMindAI/MiroThinker)
[![Discord](https://img.shields.io/badge/Discord-5865F2?style=for-the-badge&logo=discord&logoColor=white)](https://discord.com/invite/GPqEnkzQZd)
[![WeChat](https://img.shields.io/badge/WeChat-07C160?style=for-the-badge&logo=wechat&logoColor=white)](https://raw.githubusercontent.com/MiroMindAI/MiroThinker/refs/heads/main/assets/miromind_wechat.png)
[![RedNote](https://img.shields.io/badge/RedNote-FF2442?style=for-the-badge&logo=revoltdotchat&logoColor=white)](https://www.xiaohongshu.com/user/profile/5e353bd80000000001000239)
[![Website](https://img.shields.io/badge/Website-4285F4?style=for-the-badge&logo=monster&logoColor=white)](https://miromind.ai/)

</div>

## Introduction

MiroThinker v1.0 is an open-source research agent designed to advance tool-augmented reasoning and information-seeking capabilities. 

Unlike previous agents that scale only model size or context length, MiroThinker introduces **interactive scaling** at the model level, systematically training the model to handle deeper and more frequent agent–environment interactions as a third dimension of performance improvement. Interactive scaling leverages environment feedback and external information acquisition to correct errors and refine trajectories. 

Empirical results demonstrate the effectiveness of this interactive scaling. Performance across several benchmarks improves predictably as the model engages in increasingly deep and frequent interactions with its environment.

![image](https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/MiroThinker_v1.0_Overall.png)

**Key Features**

- MiroThinker v1.0 supports a 256K context window, enabling long-horizon reasoning and deep multi-step analysis.
- It handles up to 600 tool calls per task, a substantial improvement over previous open-source research agents.
- It is released at 8B, 30B, and 72B parameter scales, accompanied by a comprehensive suite of tools and workflows to flexibly support diverse research settings and compute budgets.

<div align="center">
  
|      Model Name      |         Base Model          | Max Length | Max Tool Calls |                              HF Link                               |
|:--------------------:|:---------------------------:|:----------:|:--------------:|:------------------------------------------------------------------:|
| MiroThinker-v1.0-8B  |        Qwen3-8B             |    256K    |      600       | [🤗 link](https://huggingface.co/miromind-ai/MiroThinker-v1.0-8B)  |
| MiroThinker-v1.0-30B | Qwen3-30B-A3B-Thinking-2507 |    256K    |      600       | [🤗 link](https://huggingface.co/miromind-ai/MiroThinker-v1.0-30B) |
| MiroThinker-v1.0-72B |    Qwen2.5-72B-Instruct     |    256K    |      600       | [🤗 link](https://huggingface.co/miromind-ai/MiroThinker-v1.0-72B) |

</div>

MiroThinker v1.0 demonstrates strong general-research performance across a broad range of benchmarks, achieving 37.7%, 47.1%, 55.6%, and 81.9% on HLE-Text, BrowseComp, BrowseComp-ZH, and GAIA-Text-103, respectively. These results surpass previous open-source agents and narrow the gap with commercial counterparts such as GPT-5-high.

![image](https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/MiroThinker_v1.0_Performance_1.png)

More details can be found in our [technical report](https://arxiv.org/abs/2511.11793).

## Online Demo

You are welcome to try our online demo [here](https://dr.miromind.ai/).

## Performance 

> To prevent potential information leakage (e.g., searching for benchmark answers on HuggingFace), access to HuggingFace has been explicitly disabled in these tools.

<div>
  <img src="https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/MiroThinker_v1.0_Performance_2.png" width="100%" alt="MiroThinker" />
</div>

## Interactive Scaling

![image](https://cdn-uploads.huggingface.co/production/uploads/68525b342230a897a65cc1c0/35d4Rp63y9NLpT7bwk4Zn.png)

The RL-tuned MiroThinker-v1.0-30B model exhibits far longer and deeper interaction trajectories than its SFT counterpart across all four major benchmarks. While SFT models often terminate after only a few tool calls, the RL model performs extended multi-turn reasoning, exploring and verifying information before concluding.

This behavioral shift yields accuracy gains of **8–10 points**, showing a clear link between interaction depth and performance. We refer to this effect as **interactive scaling**: increasing the frequency and depth of tool-augmented interactions reliably improves research reasoning capability. This forms a third dimension of scaling, alongside model size and context length, defining MiroThinker’s path toward more general agentic intelligence.

## Quick Start

Please refer to our GitHub repository for installation instructions, examples, and full documentation:

👉 **[https://github.com/MiroMindAI/MiroThinker](https://github.com/MiroMindAI/MiroThinker)**

### Local Deployment

We recommend deploying the model with SGLang:

```shell
python -m sglang.launch_server --model-path miromind-ai/MiroThinker-v1.0-72B --host 0.0.0.0 --port 1234
```

For optimal performance in agentic tasks, we recommend the following inference parameters:

```
temperature: 1.0
top_p: 0.95
repetition_penalty: 1.05
max_context_length: 262144
max_tokens: 16384
```
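
For reference, here is a minimal sketch of applying these parameters when querying the deployed model through SGLang's OpenAI-compatible API. The port, model name, and the use of `extra_body` to pass `repetition_penalty` (which is not part of the standard OpenAI API) are assumptions; adjust them to match your deployment.

```python
from openai import OpenAI

# Assumes an SGLang server launched as shown above, serving an
# OpenAI-compatible API on port 1234.
client = OpenAI(api_key="EMPTY", base_url="http://0.0.0.0:1234/v1")

response = client.chat.completions.create(
    model="miromind-ai/MiroThinker-v1.0-72B",
    messages=[{"role": "user", "content": "Hello, MiroThinker!"}],
    temperature=1.0,
    top_p=0.95,
    max_tokens=16384,
    # repetition_penalty is not a standard OpenAI parameter; SGLang typically
    # accepts extra sampling parameters via the request body.
    extra_body={"repetition_penalty": 1.05},
)
print(response.choices[0].message.content)
```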

### Recommended System Prompt

We use the following unified XML-wrapped JSON format to describe and organize all tools. If you have additional tools, please document them using the same structure and formatting to ensure consistent parsing, compatibility, and optimal performance across the environment.

<details>
  <summary>Click to expand system prompt example</summary>

```
You are MiroThinker, an advanced AI assistant developed by MiroMind.

In this environment you have access to a set of tools you can use to answer the user's question.

You only have access to the tools provided below. You can only use one tool per message, and will receive the result of that tool in the user's next response. You use tools step-by-step to accomplish a given task, with each tool-use informed by the result of the previous tool-use. Today is: {today_date}

# Tool-Use Formatting Instructions

Tool-use is formatted using XML-style tags. The tool-use is enclosed in <use_mcp_tool></use_mcp_tool> and each parameter is similarly enclosed within its own set of tags.

The Model Context Protocol (MCP) connects to servers that provide additional tools and resources to extend your capabilities. You can use the server's tools via the `use_mcp_tool`.

Description:
Request to use a tool provided by a MCP server. Each MCP server can provide multiple tools with different capabilities. Tools have defined input schemas that specify required and optional parameters.

Parameters:
- server_name: (required) The name of the MCP server providing the tool
- tool_name: (required) The name of the tool to execute
- arguments: (required) A JSON object containing the tool's input parameters, following the tool's input schema, quotes within string must be properly escaped, ensure it's valid JSON

Usage:
<use_mcp_tool>
<server_name>server name here</server_name>
<tool_name>tool name here</tool_name>
<arguments>
{
  "param1": "value1",
  "param2": "value2 \"escaped string\""
}
</arguments>
</use_mcp_tool>

Important Notes:
- Tool-use must be placed **at the end** of your response, **top-level**, and not nested within other tags.
- Always adhere to this format for the tool use to ensure proper parsing and execution.

String and scalar parameters should be specified as is, while lists and objects should use JSON format. Note that spaces for string values are not stripped. The output is not expected to be valid XML and is parsed with regular expressions.
Here are the functions available in JSONSchema format:

## Server name: tool-python
### Tool name: create_sandbox
Description: Create a linux sandbox.

    Args:
        timeout: Time in seconds before the sandbox is automatically shutdown. The default is 600 seconds.

    Returns:
        The id of the newly created sandbox. You should use this sandbox_id to run other tools in the sandbox.

Input JSON schema: {'properties': {'timeout': {'default': 600, 'title': 'Timeout', 'type': 'integer'}}, 'title': 'create_sandboxArguments', 'type': 'object'}

### Tool name: run_python_code
Description: Run python code in an interpreter and return the execution result.

    Args:
        code_block: The python code to run.
        sandbox_id: The id of the sandbox to run the code in. Reuse existing sandboxes whenever possible. To create a new sandbox, use tool `create_sandbox`.

    Returns:
        A result of the command execution, format like (stderr=..., stdout=..., exit_code=..., error=...)

Input JSON schema: {'properties': {'code_block': {'title': 'code_block', 'type': 'string'}, 'sandbox_id': {'title': 'Sandbox Id', 'type': 'string'}}, 'required': ['code_block', 'sandbox_id'], 'title': 'run_python_codeArguments', 'type': 'object'}

## Server name: search_and_scrape_webpage
### Tool name: google_search
Description:
    Tool to perform web searches via Serper API and retrieve rich results.

    It is able to retrieve organic search results, people also ask,
    related searches, and knowledge graph.

    Args:
        q: Search query string
        gl: Optional region code for search results in ISO 3166-1 alpha-2 format (e.g., 'us')
        hl: Optional language code for search results in ISO 639-1 format (e.g., 'en')
        location: Optional location for search results (e.g., 'SoHo, New York, United States', 'California, United States')
        num: Number of results to return (default: 10)
        tbs: Time-based search filter ('qdr:h' for past hour, 'qdr:d' for past day, 'qdr:w' for past week, 'qdr:m' for past month, 'qdr:y' for past year)
        page: Page number of results to return (default: 1)
        autocorrect: Whether to autocorrect spelling in query

    Returns:
        Dictionary containing search results and metadata.

Input JSON schema: {'properties': {'q': {'title': 'Q', 'type': 'string'}, 'gl': {'default': 'us', 'title': 'Gl', 'type': 'string'}, 'hl': {'default': 'en', 'title': 'Hl', 'type': 'string'}, 'location': {'default': None, 'title': 'Location', 'type': 'string'}, 'num': {'default': None, 'title': 'Num', 'type': 'integer'}, 'tbs': {'default': None, 'title': 'Tbs', 'type': 'string'}, 'page': {'default': None, 'title': 'Page', 'type': 'integer'}, 'autocorrect': {'default': None, 'title': 'Autocorrect', 'type': 'boolean'}}, 'required': ['q'], 'title': 'google_searchArguments', 'type': 'object'}

## Server name: jina_scrape_llm_summary
### Tool name: scrape_and_extract_info
Description:
    Scrape content from a URL and extract specific types of information using LLM.

    Args:
        url (str): The URL to scrape content from
        info_to_extract (str): The specific types of information to extract (usually a question)
        custom_headers (Dict[str, str]): Additional headers to include in the scraping request

    Returns:
        Dict[str, Any]: A dictionary containing:
            - success (bool): Whether the operation was successful
            - url (str): The original URL
            - extracted_info (str): The extracted information
            - error (str): Error message if the operation failed
            - scrape_stats (Dict): Statistics about the scraped content
            - model_used (str): The model used for summarization
            - tokens_used (int): Number of tokens used (if available)

Input JSON schema: {'properties': {'url': {'title': 'Url', 'type': 'string'}, 'info_to_extract': {'title': 'Info To Extract', 'type': 'string'}, 'custom_headers': {'additionalProperties': {'type': 'string'}, 'default': None, 'title': 'Custom Headers', 'type': 'object'}}, 'required': ['url', 'info_to_extract'], 'title': 'scrape_and_extract_infoArguments', 'type': 'object'}

# General Objective

You accomplish a given task iteratively, breaking it down into clear steps and working through them methodically.
```

</details>

### Minimal Runnable Example

The following example shows how to run an MCP-style tool-calling workflow, including system prompt generation, model invocation, tool execution, and final response generation.

Before running the script, make sure to set the required environment variables:

```bash
export OPENAI_API_KEY="your-api-key-here"
export BASE_URL="https://your-model-endpoint.example.com/v1"
```

<details open>
  <summary>Click to expand python code example</summary>

```python
import json
import os
import inspect
import re
from openai import OpenAI
from json_repair import repair_json

def get_weather(location: str, unit: str = "celsius") -> str:
    """
    Get weather information for a specified location (simulated)
    
    Args:
        location: Location name
        unit: Temperature unit, either celsius or fahrenheit
    
    Returns:
        JSON string with weather information
    """
    weather_data = {
        "London": {"temperature": 15, "condition": "sunny", "humidity": 45},
        "New York": {"temperature": 20, "condition": "cloudy", "humidity": 60},
        "Tokyo": {"temperature": 25, "condition": "rainy", "humidity": 75},
    }
    weather = weather_data.get(location, {"temperature": 18, "condition": "unknown", "humidity": 50})
    if unit == "fahrenheit":
        weather["temperature"] = weather["temperature"] * 9/5 + 32
        weather["unit"] = "°F"
    else:
        weather["unit"] = "°C"
    return json.dumps(weather, ensure_ascii=False)

def calculate(expression: str) -> str:
    """
    Calculate a mathematical expression
    
    Args:
        expression: Mathematical expression, e.g., "2 + 3 * 4"
    
    Returns:
        Calculation result
    """
    try:
        result = eval(expression)
        return json.dumps({"result": result, "expression": expression}, ensure_ascii=False)
    except Exception as e:
        return json.dumps({"error": str(e)}, ensure_ascii=False)

tools = [
    {"type": "function", "function": {"name": "get_weather", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "Location name"}, "unit": {"type": "string", "enum": ["celsius", "fahrenheit"], "description": "Temperature unit, default is celsius"}}, "required": ["location"]}}},
    {"type": "function", "function": {"name": "calculate", "parameters": {"type": "object", "properties": {"expression": {"type": "string", "description": "Mathematical expression to calculate, e.g., '2 + 3 * 4'"}}, "required": ["expression"]}}}
]

available_functions = {"get_weather": get_weather, "calculate": calculate}

def parse_mcp_tool_call(response_text: str):
    """Parse MCP-style tool call from model response. Returns first tool call or None."""
    match = re.search(r'<use_mcp_tool>(.*?)</use_mcp_tool>', response_text, re.DOTALL)
    if not match:
        return None
    content = match.group(1)
    server_match = re.search(r'<server_name>(.*?)</server_name>', content, re.DOTALL)
    tool_match = re.search(r'<tool_name>(.*?)</tool_name>', content, re.DOTALL)
    args_match = re.search(r'<arguments>(.*?)</arguments>', content, re.DOTALL)
    server_name = server_match.group(1).strip() if server_match else None
    tool_name = tool_match.group(1).strip() if tool_match else None
    if args_match:
        try:
            arguments = json.loads(args_match.group(1).strip())
        except json.JSONDecodeError as e:
            print(f"⚠️  Warning: Failed to parse arguments JSON: {e}, attempting to repair...")
            try:
                repaired = repair_json(args_match.group(1).strip())
                arguments = json.loads(repaired)
                print(f"✅  Successfully repaired JSON")
            except Exception as repair_error:
                print(f"❌  Failed to repair JSON: {repair_error}")
                arguments = {}
    else:
        arguments = {}
    if server_name and tool_name:
        return {"server_name": server_name, "tool_name": tool_name, "arguments": arguments}
    return None

def generate_mcp_system_prompt(openai_tools: list, available_functions: dict = None, server_name: str = "default", date: str = "2025-11-27") -> str:
    """Generate MCP-style system prompt from OpenAI tools format."""
    prefix = f"""You are MiroThinker, an advanced AI assistant developed by MiroMind.

In this environment you have access to a set of tools you can use to answer the user's question.

You only have access to the tools provided below. You can only use one tool per message, and will receive the result of that tool in the user's next response. You use tools step-by-step to accomplish a given task, with each tool-use informed by the result of the previous tool-use. Today is: {date}

# Tool-Use Formatting Instructions

Tool-use is formatted using XML-style tags. The tool-use is enclosed in <use_mcp_tool></use_mcp_tool> and each parameter is similarly enclosed within its own set of tags.

The Model Context Protocol (MCP) connects to servers that provide additional tools and resources to extend your capabilities. You can use the server's tools via the `use_mcp_tool`.

Description:
Request to use a tool provided by a MCP server. Each MCP server can provide multiple tools with different capabilities. Tools have defined input schemas that specify required and optional parameters.

Parameters:
- server_name: (required) The name of the MCP server providing the tool
- tool_name: (required) The name of the tool to execute
- arguments: (required) A JSON object containing the tool's input parameters, following the tool's input schema, quotes within string must be properly escaped, ensure it's valid JSON

Usage:
<use_mcp_tool>
<server_name>server name here</server_name>
<tool_name>tool name here</tool_name>
<arguments>
{{
  "param1": "value1",
  "param2": "value2 \\"escaped string\\""
}}
</arguments>
</use_mcp_tool>

Important Notes:
- Tool-use must be placed **at the end** of your response, **top-level**, and not nested within other tags.
- Always adhere to this format for the tool use to ensure proper parsing and execution.

String and scalar parameters should be specified as is, while lists and objects should use JSON format. Note that spaces for string values are not stripped. The output is not expected to be valid XML and is parsed with regular expressions.
Here are the functions available in JSONSchema format:

## Server name: {server_name}
"""
    tools_section = []
    for i, tool in enumerate(openai_tools):
        if tool.get("type") == "function":
            func = tool["function"]
            tool_name = func["name"]
            func_obj = available_functions[tool_name]
            full_description = inspect.getdoc(func_obj) or func.get("description", "")
            if i > 0:
                tools_section.append("\n")
            tools_section.append(f"### Tool name: {tool_name}\nDescription: {full_description}\n\nInput JSON schema: {json.dumps(func['parameters'], ensure_ascii=False)}\n")
    suffix = "\n# General Objective\n\nYou accomplish a given task iteratively, breaking it down into clear steps and working through them methodically."
    return prefix + ''.join(tools_section) + suffix

def run_conversation(user_query: str, model: str = "MiroThinker"):
    """Run a complete conversation with tool calling"""
    system_prompt = generate_mcp_system_prompt(openai_tools=tools, available_functions=available_functions, server_name="My-Tools", date="2025-12-01")
    client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY", "your-api-key-here"), base_url=os.environ.get("BASE_URL", "your-base-url-here"))
    print(f"\n{'='*60}\nUser Query: {user_query}\n{'='*60}\n")
    messages = [{'role': 'system', 'content': system_prompt}, {"role": "user", "content": user_query}]
    print("📤 Sending request to model...")
    response = client.chat.completions.create(model=model, messages=messages)
    response_message = response.choices[0].message
    response_content = response_message.content
    tool_call = parse_mcp_tool_call(response_content)
    print(f"📝 Model response:\n{response_content}\n")
    messages.append(response_message)
    if tool_call:
        server_name = tool_call["server_name"]
        tool_name = tool_call["tool_name"]
        function_args = tool_call["arguments"]
        print(f"\n🔧 Model decided to call tool:\n  - Server: {server_name}\n    Tool: {tool_name}\n    Args: {json.dumps(function_args, ensure_ascii=False)}")
        function_response = available_functions[tool_name](**function_args)
        print(f"    Result: {function_response}\n")
        messages.append({"role": "user", "content": function_response})
        print("📤 Requesting model to generate final response based on tool results...\n")
        second_response = client.chat.completions.create(model=model, messages=messages)
        final_message = second_response.choices[0].message.content
        print(f"💬 Final Response:\n{final_message}\n")
        return final_message
    else:
        print(f"💬 Model Response (no tool calls):\n{response_message.content}\n")
        return response_message.content

def main():
    """Run multiple examples"""
    run_conversation("What's the weather like in London?")
    # run_conversation("Calculate (25 + 15) * 3 - 10")

if __name__ == "__main__":
    main()
```

</details>

## License

MiroThinker v1.0 is released under the MIT License.

## Citation

If you find this project useful in your research, please consider citing:

```
@article{miromind2025mirothinker,
  title={MiroThinker: Pushing the Performance Boundaries of Open-Source Research Agents via Model, Context, and Interactive Scaling},
  author={MiroMind Team and Bai, Song and Bing, Lidong and Chen, Carson and Chen, Guanzheng and Chen, Yuntao and Chen, Zhe and Chen, Ziyi and Dai, Jifeng and Dong, Xuan and others},
  journal={arXiv preprint arXiv:2511.11793},
  year={2025}
}
```

## Contact Us

MiroThinker is developed by the MiroMind AI Team.
If you would like to leave us a message, feel free to get in touch. 
In addition to [GitHub](https://github.com/MiroMindAI/), 
[Discord](https://discord.com/invite/GPqEnkzQZd), 
[WeChat](https://raw.githubusercontent.com/MiroMindAI/MiroThinker/refs/heads/main/assets/miromind_wechat.png), 
and [RedNote](https://www.xiaohongshu.com/user/profile/5e353bd80000000001000239), 
you can also reach us via email at service@miromind.ai.