SunSec committed on
Commit
13e56c8
·
verified ·
1 Parent(s): 348f168

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. .gitattributes +6 -0
  2. building_env/Miniforge3-Linux-x86_64.sh +3 -0
  3. deep_search/data_from_zhiyuan/data_for_rl/musique_tagged/musique_tagged_domain_keypoints_keywords_count.json +3 -0
  4. deep_search/data_from_zhiyuan/data_for_rl/tagged_domain_keypoints/final_selected_dataset.json +3 -0
  5. deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/final_dataset/domain_distribution_pie.png +3 -0
  6. deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/final_dataset/hop_histogram.png +3 -0
  7. deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/final_dataset/key_points_distribution_pie.png +3 -0
  8. deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/final_dataset/total_histogram.png +3 -0
  9. deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/split_1_tagged.json +3 -0
  10. deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/split_2_tagged.json +3 -0
  11. deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/split_5_tagged.json +3 -0
  12. deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/tagged_domain_keypoints/special_total_histogram.png +3 -0
  13. deep_search/search_o1/scripts/0_gen_google_plus_inst_summary_sft.py +621 -0
  14. deep_search/search_o1/scripts/SimpleDeepSearcher/README.md +145 -0
  15. deep_search/search_o1/scripts/SimpleDeepSearcher/data/eval/2wiki.json +0 -0
  16. deep_search/search_o1/scripts/SimpleDeepSearcher/data/eval/aime.json +0 -0
  17. deep_search/search_o1/scripts/SimpleDeepSearcher/data/eval/bamboogle.json +1002 -0
  18. deep_search/search_o1/scripts/SimpleDeepSearcher/data/eval/frames.json +0 -0
  19. deep_search/search_o1/scripts/SimpleDeepSearcher/data/eval/gaia.json +0 -0
  20. deep_search/search_o1/scripts/SimpleDeepSearcher/data/eval/musique.json +0 -0
  21. deep_search/search_o1/scripts/SimpleDeepSearcher/eval/gpt_eval_sft.py +216 -0
  22. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/__pycache__/add_eval.cpython-310.pyc +0 -0
  23. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/__pycache__/evaluate.cpython-310.pyc +0 -0
  24. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/__pycache__/google_search.cpython-310.pyc +0 -0
  25. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/__pycache__/prompts.cpython-310.pyc +0 -0
  26. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/add_eval.py +705 -0
  27. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/evaluate.py +452 -0
  28. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/google_search.py +417 -0
  29. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/inference.py +759 -0
  30. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/benchmarks/__init__.py +13 -0
  31. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/benchmarks/code_execution.py +67 -0
  32. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/benchmarks/code_generation.py +139 -0
  33. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/benchmarks/test_output_prediction.py +70 -0
  34. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__init__.py +6 -0
  35. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/__init__.cpython-310.pyc +0 -0
  36. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/__init__.cpython-311.pyc +0 -0
  37. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/__init__.cpython-39.pyc +0 -0
  38. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/compute_code_execution_metrics.cpython-310.pyc +0 -0
  39. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/compute_code_execution_metrics.cpython-311.pyc +0 -0
  40. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/compute_code_execution_metrics.cpython-39.pyc +0 -0
  41. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/compute_code_generation_metrics.cpython-310.pyc +0 -0
  42. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/compute_code_generation_metrics.cpython-311.pyc +0 -0
  43. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/compute_code_generation_metrics.cpython-39.pyc +0 -0
  44. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/compute_test_output_prediction_metrics.cpython-310.pyc +0 -0
  45. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/compute_test_output_prediction_metrics.cpython-311.pyc +0 -0
  46. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/compute_test_output_prediction_metrics.cpython-39.pyc +0 -0
  47. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/pass_k_utils.cpython-310.pyc +0 -0
  48. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/pass_k_utils.cpython-311.pyc +0 -0
  49. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/pass_k_utils.cpython-39.pyc +0 -0
  50. deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/testing_util.cpython-310.pyc +0 -0
.gitattributes CHANGED
@@ -69,3 +69,9 @@ deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/split_4_tagged.jso
  deep_search/data_from_zhiyuan/data_for_dpo/3k_question/17w_select_3k_for_dpo.json filter=lfs diff=lfs merge=lfs -text
  deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/split_6_tagged.json filter=lfs diff=lfs merge=lfs -text
  deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/split_7_tagged.json filter=lfs diff=lfs merge=lfs -text
+ deep_search/data_from_zhiyuan/data_for_rl/tagged_domain_keypoints/final_selected_dataset.json filter=lfs diff=lfs merge=lfs -text
+ building_env/Miniforge3-Linux-x86_64.sh filter=lfs diff=lfs merge=lfs -text
+ deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/split_5_tagged.json filter=lfs diff=lfs merge=lfs -text
+ deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/split_1_tagged.json filter=lfs diff=lfs merge=lfs -text
+ deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/split_2_tagged.json filter=lfs diff=lfs merge=lfs -text
+ deep_search/data_from_zhiyuan/data_for_rl/musique_tagged/musique_tagged_domain_keypoints_keywords_count.json filter=lfs diff=lfs merge=lfs -text
building_env/Miniforge3-Linux-x86_64.sh ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:65af53dad30b3fcbd1cb1d4ad62fd3a86221464754844544558aae3a28795189
+ size 90308175
deep_search/data_from_zhiyuan/data_for_rl/musique_tagged/musique_tagged_domain_keypoints_keywords_count.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:98754d32877af32e04b050fafef05bda60f7dd2e7513caf607e15a4fcc833a72
+ size 159283950
deep_search/data_from_zhiyuan/data_for_rl/tagged_domain_keypoints/final_selected_dataset.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:23cb3c0f16fbb734d9338644898227c47d9678da154825ee3786f2349e4cf53b
+ size 64168759
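The JSON and installer files above are stored as Git LFS pointer files: three text lines (`version`, `oid`, `size`) standing in for the real content. A minimal sketch, not part of this commit, of parsing such a pointer into its fields:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", e.g. "size 90308175"
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:65af53dad30b3fcbd1cb1d4ad62fd3a86221464754844544558aae3a28795189
size 90308175
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # byte size of the real file as a string: "90308175"
```

The `oid` field carries both the hash algorithm and the digest, separated by a colon.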
deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/final_dataset/domain_distribution_pie.png ADDED

Git LFS Details

  • SHA256: 58f2ad3965a13835d3310999e7c6941850753159853e34de5b9426762794d01f
  • Pointer size: 131 Bytes
  • Size of remote file: 122 kB
deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/final_dataset/hop_histogram.png ADDED

Git LFS Details

  • SHA256: 5f5face6f75b4a23b0f7c5ba899276dd2407084bcf057639a38f7566eb58a14a
  • Pointer size: 130 Bytes
  • Size of remote file: 17.2 kB
deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/final_dataset/key_points_distribution_pie.png ADDED

Git LFS Details

  • SHA256: 829ac1d7c2927612e6cc89620e4e9ddfb15c511127074825c0e05b9e38f397b5
  • Pointer size: 131 Bytes
  • Size of remote file: 698 kB
deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/final_dataset/total_histogram.png ADDED

Git LFS Details

  • SHA256: 37aa37a9f1f6cc4b9333b4f4a7aaec0f87a420b03eaf22f6e75d6dfca5c47d05
  • Pointer size: 130 Bytes
  • Size of remote file: 17.7 kB
deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/split_1_tagged.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0834d4f6529c8f33bcc498195fe2771b31df81b420c552bd755b8dfc06c25da5
+ size 28838337
deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/split_2_tagged.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e323cd5c9b0885883e9a09dcd3b61d334ad1f0450b2cc9706c1fdd094f9022b4
+ size 21198229
deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/split_5_tagged.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:58f334da8ae27fc0168eb4a01c8a3a2c5389b9fa1896cbab76a2a85a85167ccf
+ size 27511469
deep_search/data_from_zhiyuan/data_syn/data/mixed_data/splits/tagged_domain_keypoints/special_total_histogram.png ADDED

Git LFS Details

  • SHA256: 19015a6dd22ccc668d6208c6aff816c017b15d9e00bba91b2da5b19396b19871
  • Pointer size: 130 Bytes
  • Size of remote file: 22.9 kB
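The SHA256 shown in each "Git LFS Details" entry above is the plain SHA-256 digest of the stored file's bytes, which LFS uses as the object id. A sketch of how that field is derived (the example bytes are made up, standing in for a real PNG):

```python
import hashlib

def lfs_oid(data: bytes) -> str:
    # Git LFS object ids are the hex SHA-256 digest of the raw file contents
    return hashlib.sha256(data).hexdigest()

blob = b"example image bytes"  # hypothetical stand-in for e.g. domain_distribution_pie.png
print(f"SHA256: {lfs_oid(blob)}")
print(f"Size of file: {len(blob)} bytes")
```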
deep_search/search_o1/scripts/0_gen_google_plus_inst_summary_sft.py ADDED
@@ -0,0 +1,621 @@
+ import os
+ import sys
+ import re
+ import json
+ import copy
+ import time
+ import string
+ import random
+ import argparse
+ import http.client
+ import multiprocessing
+ import concurrent.futures
+ from time import sleep
+ from io import BytesIO
+ from collections import defaultdict
+ from concurrent.futures import ThreadPoolExecutor, as_completed
+ from typing import Optional, Tuple
+ from urllib.parse import unquote, urlparse
+
+ import requests
+ import pdfplumber
+ import wikipedia
+ # import wikipediaapi
+ import torch.distributed as dist
+ import torch.multiprocessing as mp
+ from bs4 import BeautifulSoup
+ from datasets import Dataset, load_dataset
+ from nltk.tokenize import sent_tokenize
+ from openai import OpenAI
+ from requests.exceptions import Timeout
+ from tqdm import tqdm
+ from transformers import AutoTokenizer
+ from vllm import LLM, SamplingParams
+ # CUDA_VISIBLE_DEVICES=5
+
+ headers = {
+     'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
+                   'AppleWebKit/537.36 (KHTML, like Gecko) '
+                   'Chrome/58.0.3029.110 Safari/537.36',
+     'Referer': 'https://www.google.com/',
+     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
+     'Accept-Language': 'en-US,en;q=0.5',
+     'Connection': 'keep-alive',
+     'Upgrade-Insecure-Requests': '1'
+ }
+
+ summary_prompt = """**Task Instruction:**
+
+ You are tasked with reading and analyzing web pages based on the following inputs: **Previous Reasoning Path**, **Current Search Query**, and **Searched Web Pages**.
+ Your objective is to extract relevant and helpful information for **Current Search Query** from the **Searched Web Pages** and seamlessly integrate this information into the **Previous Reasoning Path** to continue reasoning for the original question.
+
+ **Guidelines:**
+
+ 1. **Analyze the Searched Web Pages:**
+ - Carefully review the content of each searched web page.
+ - Identify factual information that is relevant to the **Current Search Query** and can aid in the reasoning process for the original question.
+
+ 2. **Extract Relevant Information:**
+ - Select the information from the Searched Web Pages that directly contributes to advancing the **Previous Reasoning Path**.
+ - Ensure that the extracted information is accurate and relevant.
+
+ 3. **Output Format:**
+ - Present the helpful information for the current search query, beginning with `**Final Information**` as shown below.
+ **Final Information**
+ [Helpful information]
+
+ Now you should analyze the web pages and find helpful information based on the current search query and previous reasoning path.
+
+ **Inputs:**
+ - **Previous Reasoning Path:**
+ {prev_reasoning}
+
+ - **Current Search Query:**
+ {search_query}
+
+ - **Searched Web Pages:**
+ {document}
+ """
+
+ summary_prompt_old = '''## Task Description:
+ Given the search query and the content of the searched webpage,
+ your task is to extract information from the webpage content that is relevant and helpful to the search query and return a summary paragraph.
+
+ ## **Guidelines**:
+ (1) The extracted content should be relevant and helpful to the query.
+ (2) The form of the extracted content **must be a summary paragraph** rather than a direct answer to the query.
+ (3) You **must extract content according to this webpage**. If the webpage content is unrelated to the query, no extraction is required.
+
+ ## Output Format:
+ [Extracted Content]: If the webpage content contains information related to the query, output the relevant summary paragraph (not a direct answer to the query); if not, output "None".
+
+ ## Inputs:
+ [Search Query]
+ {search_query}
+
+ [Webpage Content]
+ {document}
+
+ ## Output:
+ [Extracted Content]
+ '''
+ proxies = {
+     "http": "http://127.0.0.1:7895",
+     "https": "http://127.0.0.1:7895",
+ }
+ # Initialize a shared HTTP session with the browser headers and the local proxy
+ session = requests.Session()
+ session.headers.update(headers)
+ session.proxies.update(proxies)
+ wikipedia.set_lang('en')   # set this to the language you need
+ wikipedia._http = session  # replace with the custom session
+
+ os.environ["OPENAI_API_KEY"] = "sk-3alq0hqR4hFlXMuyOY07rvQqSF52UJ09CsHHXyrW72yPcw8l"
+ os.environ["OPENAI_API_BASE"] = "https://open.xiaojingai.com/v1"
+ client = OpenAI(
+     api_key=os.environ.get("OPENAI_API_KEY"),
+     base_url=os.environ.get("OPENAI_API_BASE")
+ )
+
+ def extract_text_from_url(url, use_jina=False, jina_api_key=None, snippet: Optional[str] = None):
+     try:
+         if "wikipedia.org" in url:
+             # Use the wikipedia library for Wikipedia pages; retry up to 5 times
+             for _ in range(5):
+                 try:
+                     print(url)
+                     page_title = unquote(url.split('/')[-1])
+                     page = wikipedia.page(page_title, auto_suggest=False)
+                     text = page.content.replace('\n\n', '\n')
+                     search_doc = text.split('== References ==')[0].split("== Notes ==")[0].strip()
+                     return search_doc
+                 except Exception:
+                     time.sleep(2)
+                     continue
+             print("Failed to fetch the page, url:", url)
+             return "None"
+         else:
+             # For other sites, fetch the raw HTML and strip it with BeautifulSoup; retry up to 3 times
+             for _ in range(3):
+                 try:
+                     response = session.get(url)
+                     if response.status_code == 200:
+                         soup = BeautifulSoup(response.text, 'lxml')
+                         text_before_summary = soup.get_text(separator='\n', strip=True)
+                         lines_before_summary = text_before_summary.split("\n")
+                         search_doc = " ".join(line for line in lines_before_summary if len(line.split()) >= 0)
+                         return search_doc
+                 except Exception:
+                     sleep(2)
+                     continue
+             print("Failed to fetch the page, url:", url)
+             return "None"
+     except Exception as e:
+         print(f"An error occurred: {e}")
+         return "None"
+
+
+ def bing_web_search(query, subscription_key, endpoint, market='en-US', language='en', timeout=20):
+     payload = json.dumps({
+         "q": query,               # the search query
+         "mkt": market,            # the market
+         "setLang": language,      # the response language
+         "textDecorations": True,  # enable text decorations
+         "textFormat": "HTML"      # the text format
+     })
+
+     headers = {
+         'X-API-KEY': subscription_key,
+         'Content-Type': 'application/json'
+     }
+
+     try:
+         response = requests.request("POST", endpoint, headers=headers, data=payload)
+         # raise_for_status() raises requests.exceptions.HTTPError on 4xx/5xx status codes
+         response.raise_for_status()
+         search_results = response.json()
+         return search_results
+     except Timeout:
+         print(f"Bing Web Search request timed out ({timeout} seconds) for query: {query}")
+         return {}  # or you can choose to raise an exception
+     except requests.exceptions.RequestException as e:
+         print(f"Error occurred during Bing Web Search request: {e}")
+         return {}
+
+
+ def extract_relevant_info(search_results):
+     useful_info = []
+     # 'organic' is a list containing one entry per search result page
+     if 'organic' in search_results:
+         for id, result in enumerate(search_results['organic']):
+             # if "wikipedia.org" not in result.get('link', ''):
+             #     continue
+             info = {
+                 'title': result.get('title', ''),  # title of the search result
+                 'url': result.get('link', ''),     # URL of the search result
+             }
+             useful_info.append(info)
+     return useful_info
+
+ def generate(messages, model_name):
+     response = client.chat.completions.create(
+         **{
+             "model": model_name,
+             "messages": messages,
+             "max_tokens": 2048,
+         }
+     )
+     response = response.choices[0].message.content
+     return response
+
+ def parse_args():
+     parser = argparse.ArgumentParser()
+     parser.add_argument("--subject", type=str, default="")
+     parser.add_argument("--start_sample", type=int, default=-1)
+     parser.add_argument("--end_sample", type=int, default=100000)
+     parser.add_argument("--max_samples", type=int, default=0)
+     parser.add_argument("--src_file", type=str, default="None")
+     parser.add_argument("--gpu_id", type=str, default="0")
+     parser.add_argument("--model_path", type=str, default="None")
+     parser.add_argument("--gpu_memory_rate", type=float, default=0.95)
+     parser.add_argument("--port", type=str, default="None")
+     parser.add_argument("--temp", type=float, default=0.0)
+     parser.add_argument("--prompt_type", type=str, default="None")
+     parser.add_argument("--tp_num", type=int, default=1)
+     parser.add_argument("--chunk_size", type=int, default=25)
+     return parser.parse_args()
+
+ def process_text(examples, tokenizer, type=None):
+     question = examples["Question"]
+
+     if type == "v3":
+         messages_chat = [
+             {"role": "system", "content": """You are a helpful assistant.
+ Given a question, you should answer it by first thinking about the reasoning process in the mind and then providing the final answer.
+ The output format of reasoning process and final answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., "<think> reasoning process here </think>\n\n<answer> final answer here </answer>".
+ During the thinking process, **you can perform searching for uncertain knowledge** if necessary with the format of "<|begin_search_query|> search query <|end_search_query|>". **A query must involve only a single triple**.
+ Then, the search system will provide you with the retrieval information with the format of "<|begin_search_result|> ...search results... <|end_search_result|>"."""},
+             {"role": "user", "content": question}
+         ]
+     elif type == "v0":
+         messages_chat = [
+             {"role": "system", "content": """You are a helpful assistant. Given a question, you should answer it by first thinking about the reasoning process in the mind and then providing the final answer. The output format of reasoning process and final answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., "<think> reasoning process here </think>\n\n<answer> final answer here </answer>". During the thinking process, **you can perform searching for uncertain knowledge** if necessary with the format of "<|begin_search_query|> search query (only keywords) here <|end_search_query|>". Then, the search system will provide you with the retrieval information with the format of "<|begin_search_result|> ...search results... <|end_search_result|>"."""},
+             {"role": "user", "content": question}
+         ]
+     elif type == "v2":
+         messages_chat = [
+             {"role": "system", "content": """You are a helpful assistant. Given a **Judgement question**, you should answer it by first thinking about the reasoning process in the mind and then providing the final answer. The output format of reasoning process and final answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., "<think> reasoning process here </think>\n\n<answer> final answer here (yes or no)</answer>". During the thinking process, **you can perform searching for uncertain knowledge** if necessary with the format of "<|begin_search_query|> search query (only keywords) here <|end_search_query|>". Then, the search system will provide you with the retrieval information with the format of "<|begin_search_result|> ...search results... <|end_search_result|>". The final answer **must be yes or no**."""},
+             {"role": "user", "content": question}
+         ]
+     elif type == "sft_v2":
+         messages_chat = [
+             {"role": "system", "content": """You are a reasoning assistant with the ability to perform web searches to help you answer the user's question accurately. You have special tools:
+
+ - To perform a search: write <|begin_search_query|> your query here <|end_search_query|> .
+ Then, the system will search and analyze relevant web pages, then provide you with helpful information in the format <|begin_search_result|> ...search results... <|end_search_result|>.
+
+ Whenever you encounter a topic, fact, or piece of information you are uncertain about or need further details on, please perform a search to gather more accurate, up-to-date, or specific information. You can repeat the search process multiple times if necessary.
+
+ Once you have all the information you need, continue your reasoning.
+
+ Remember:
+ - Use <|begin_search_query|> to request a web search and end with <|end_search_query|>.
+ - When done searching, continue your reasoning.
+ - Do not generate <|begin_search_result|> and <|end_search_result|> tags yourself.
+
+ Please answer the following question. You should provide your final answer in the format \\boxed{YOUR_ANSWER}.
+ """},
+             {"role": "user", "content": "Question:\n" + question}
+         ]
+     elif type == "sft_v3":
+         messages_chat = [
+             {"role": "user", "content": """You are a reasoning assistant with the ability to perform web searches to help you answer the user's question accurately. You have special tools:
+
+ - To perform a search: write <|begin_search_query|> your query here <|end_search_query|> .
+ Then, the system will search and analyze relevant web pages, then provide you with helpful information in the format <|begin_search_result|> ...search results... <|end_search_result|>.
+
+ Whenever you encounter a topic, fact, or piece of information you are uncertain about or need further details on, please perform a search to gather more accurate, up-to-date, or specific information. You can repeat the search process multiple times if necessary.
+
+ Once you have all the information you need, continue your reasoning.
+
+ Remember:
+ - Use <|begin_search_query|> to request a web search and end with <|end_search_query|>.
+ - When done searching, continue your reasoning.
+ - Do not generate <|begin_search_result|> and <|end_search_result|> tags yourself.
+
+ Please answer the following question. You should provide your final answer in the format \\boxed{YOUR_ANSWER}.\n\nQuestion:""" + question},
+         ]
+     else:
+         raise ValueError(f"Unknown prompt type: {type}")
+
+     chat_prompt = tokenizer.apply_chat_template(
+         messages_chat,
+         tokenize=False,
+         add_generation_prompt=True
+     )
+     examples["chat_prompt"] = chat_prompt + "<think>"
+     return examples
+
+ def process_output(output, continued_answer, k):
+     prompt = output.prompt
+     answer = continued_answer["answer"]
+     question = continued_answer["Question"]
+     gen_text_store = continued_answer["gen_text_store"]
+     stop_reason = output.outputs[0].stop_reason
+     generated_text = output.outputs[0].text
+
+     # Return 'finished' or 'continued' along with the corresponding data
+     if k == 9:  # too many retrieval rounds: stop here and mark the sample as unfinished
+         original_data = {
+             "Question": question,
+             "answer": answer,
+             "generated_text": generated_text,
+             "stop_reason_final": "many_retrieve",
+             "pred_ans": "I don't know."
+         }
+         return original_data, "finished"
+
+     if "boxed" in generated_text:
+         lines = generated_text.split("\n")
+         proposed_ans = "I don't know."
+         for _, line in enumerate(reversed(lines)):
+             if "boxed{" in line:
+                 proposed_ans_ori = line
+                 pattern = r"\\boxed\{(.+?)\}"
+                 match = re.search(pattern, proposed_ans_ori)
+                 if match:
+                     proposed_ans = match.group(1)  # extract the content inside the braces
+                 else:
+                     proposed_ans = "I don't know."
+         original_data = {
+             "Question": question,
+             "answer": answer,
+             "pred_ans": proposed_ans,
+             "stop_reason_final": "finished",
+             "gen_text_store": gen_text_store + generated_text,
+         }
+         return original_data, "finished"
+
+     elif "<|begin_search_query|>" in generated_text and stop_reason == "<|end_search_query|>":  # handle a retrieval request
+         query = generated_text.split("<|begin_search_query|>")[-1].split("<|end_search_query|>")[0]
+         query = query.replace('"', "").replace("'", "").replace("\t", " ").replace("...", "")
+
+         if query:
+             # Reuse search and info extraction logic
+             BING_SUBSCRIPTION_KEY = "cb0d28279a826d7e5cf22d71f683c77ffd4ba27d"
+             bing_endpoint = "https://google.serper.dev/search"
+             # search_results = bing_web_search(query + " site:en.wikipedia.org", BING_SUBSCRIPTION_KEY, bing_endpoint)
+             search_results = bing_web_search(query, BING_SUBSCRIPTION_KEY, bing_endpoint)
+
+             extracted_info_ori = extract_relevant_info(search_results)
+
+             # Drop results whose URLs point at media or binary files
+             extracted_info = [
+                 e for e in extracted_info_ori
+                 if not any(ext in e["url"].lower() for ext in [".jpg", ".jpeg", ".png", ".gif", ".bmp", ".mp4", ".mp3", ".wav", ".avi", ".mov", ".zip", ".pdf"])
+             ]
+             doc_content = "None"
+             combined_text = ""
+             for info in extracted_info[:10]:
+                 print("Begin Get Full Page")
+                 full_text = extract_text_from_url(info['url'])
+                 print("End Get Full Page")
+                 combined_text += full_text[:10000] + "\n\n"
+             query_summary = query
+             prev_reasoning = gen_text_store + generated_text.strip() + "<|end_search_query|>"
+
+             summary_for_gpt = summary_prompt.replace("{search_query}", query_summary).replace("{document}", combined_text).replace("{prev_reasoning}", prev_reasoning)
+             messages_summary = [{'role': 'user', 'content': summary_for_gpt}]
+             for y in range(10):
+                 try:
+                     model_output_summary = generate(messages_summary, 'gpt-4o-mini')
+                     break
+                 except Exception:
+                     continue
+             try:
+                 summary_doc = model_output_summary.split("**Final Information**")[-1]
+             except Exception as e:
+                 print(f"An error occurred: {e}")
+                 summary_doc = "None"
+
+             doc_content = summary_doc.lstrip(":").strip()
+
+             original_data = {
+                 "chat_prompt": prompt + generated_text.strip() + "<|end_search_query|>\n\n" + "<|begin_search_result|>\n" + doc_content + "\n<|end_search_result|>\n\n",
+                 "answer": answer,
+                 "Question": question,
+                 "stop_reason": stop_reason,
+                 "gen_text_store": gen_text_store + generated_text.strip() + "<|end_search_query|>\n\n" + "<|begin_search_result|>\n" + doc_content + "\n<|end_search_result|>\n\n",
+             }
+             return original_data, "continued"
+         else:
+             original_data = {
+                 "Question": question,
+                 "answer": answer,
+                 "gen_text_store": gen_text_store + generated_text.strip(),
+                 "generated_text": generated_text,
+                 "stop_reason_final": "query_inst_error",
+                 "pred_ans": "I don't know."
+             }
+             return original_data, "finished"
+
+     else:
+         original_data = {
+             "Question": question,
+             "answer": answer,
+             "stop_reason_final": "shot_down",
+             "pred_ans": "I don't know.",
+             "gen_text_store": gen_text_store + generated_text.strip(),
+         }
+         return original_data, "finished"
+
506
+ def main():
507
+ print("=Begin="*10)
508
+ args = parse_args()
509
+ gpu_id = args.gpu_id
510
+ os.environ["CUDA_VISIBLE_DEVICES"] = gpu_id
511
+ temp=args.temp
512
+ port=args.port
513
+ type=args.prompt_type
514
+ model_path=args.model_path
515
+ tp_num = args.tp_num
516
+ gpu_memory_rate=args.gpu_memory_rate
517
+ chunk_size_own= args.chunk_size
518
+
519
+ data_ori_all = []
520
+ with open(args.src_file, "r") as f:
521
+ data_ori_all = []
522
+ for i, line in enumerate(f):
523
+ if args.start_sample <= i < args.end_sample:
524
+ obj_ori=json.loads(line)
525
+ data_ori_all.append(obj_ori)
526
+ if i >= args.end_sample - 1:
527
+ break
528
+
529
+ print("All Data Length: ",len(data_ori_all))
530
+ chunk_size = chunk_size_own
531
+ chunk_num = len(data_ori_all) // chunk_size
532
+ if len(data_ori_all) % chunk_size != 0:
533
+ chunk_num += 1
534
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
535
+ llm = LLM(model=model_path, tensor_parallel_size=tp_num, gpu_memory_utilization=gpu_memory_rate, trust_remote_code=True)
536
+
537
+ for h in range(chunk_num):
538
+ print("=="*80)
539
+ print("Begin Chunk: ",h,"All: ",chunk_num)
540
+ data_ori = data_ori_all[h*chunk_size:(h+1)*chunk_size]
541
+ data=[]
542
+
543
+ for i in range(len(data_ori)):
544
+ for j in range(1):
545
+ data.append(data_ori[i])
546
+
547
+ data_keys = data[0].keys()
548
+ data_keys = ["Question" , "answer"]
549
+ ds = Dataset.from_dict({key: [d[key] for d in data] for key in data_keys})
550
+ print(len(ds))
551
+ ds = ds.map(
552
+ process_text,
553
+ num_proc=16,
554
+ fn_kwargs={"tokenizer": tokenizer,"type":type},
555
+ )
556
+ print(ds)
557
+
558
+ stop_tokens = ["<|im_end|>", "<|endoftext|>", "<|end_search_query|>", "</answer>"]
559
+ sampling_params = SamplingParams(temperature=temp, top_p=0.95, max_tokens=2048, stop=stop_tokens)
560
+
561
+ finished_all_list=[]
562
+
563
+ continued_answer = copy.deepcopy(data)
564
+
565
+ for k in range(10):
566
+
567
+ if len(ds) ==0:
568
+ print("请确定是不是真的ok了")
569
+ print(len(ds))
570
+ break
571
+
572
+ outputs = llm.generate(ds['chat_prompt'], sampling_params)
573
+
574
+ finished_texts = []
575
+ continued_texts = []
576
+ with ThreadPoolExecutor() as executor:
577
+ futures = []
578
+ for i, output in enumerate(outputs):
579
+ futures.append(executor.submit(process_output, output, continued_answer[i], k))
580
+
581
+ for future in as_completed(futures):
582
+ obj, label = future.result()
583
+ if label == "finished":
584
+ finished_texts.append(obj)
585
+ elif label == "continued":
586
+ continued_texts.append(obj)
587
+
588
+ finished_all_list.extend(finished_texts)
589
+
590
+ if len(continued_texts)==0:
591
+ if len(finished_texts)>0:
592
+ with open(args.src_file.replace(".jsonl","-"+model_path.split("/")[-2]+model_path.split("/")[-1]+f"_base_temp{args.temp}_type{type}_online.jsonl"), "a") as f:
593
+ for text in finished_texts:
594
+ f.write(json.dumps(text) + "\n")
595
+
596
+ break
597
+ else:
598
+ data_keys_again = continued_texts[0].keys()
599
+ ds = Dataset.from_dict({key: [d[key] for d in continued_texts] for key in data_keys_again})
600
+ continued_answer = copy.deepcopy(continued_texts)
601
+ print("=="*80)
602
+ print("Epoch: ",k,"New_Finished: ",len(finished_texts),"All_Finished ",len(finished_all_list),"Continued: ",len(continued_texts))
603
+
604
+ print("Begin Writing Epoch: ",k)
605
+
606
+ # print(continued_texts)
607
+ print("=="*80)
608
+ # print(finished_texts)
609
+ if len(finished_texts)>0:
610
+ with open(args.src_file.replace(".jsonl","-"+model_path.split("/")[-2]+model_path.split("/")[-1]+f"_base_temp{args.temp}_type{type}_online.jsonl"), "a") as f:
611
+ for text in finished_texts:
612
+ f.write(json.dumps(text) + "\n")
613
+
614
+ if dist.is_initialized():
615
+ dist.destroy_process_group()
616
+ if __name__ == "__main__":
617
+ # mp.set_start_method("spawn", force=True)
618
+ main()
619
+
620
+ # python /opt/aps/workdir/sht-RAG_RL/eval/gen_ckpt_solution_base.py --src_file /opt/aps/workdir/sht-RAG_RL/eval/datasets/hotpotqa.jsonl --model_path /opt/aps/workdir/sht-RAG_RL/results/ckpts/qwen2.5-7B-base-rm3-sft-data-2-grpo-dataset_hpqa-len_29000-tbs_64-rbs_16-sample_16-kl_0.0001-warmup_0.0-ep_10000-plr_2e-6-temp1.0-30k/ckpt --gpu_id 0
621
+
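The media-extension filter near the top of this hunk matches the extension as a substring of the lowercased URL, not just at its end, which is easy to get wrong when reusing it. A standalone sketch of the same check (the function name is ours, not from the script):

```python
MEDIA_EXTS = (".jpg", ".jpeg", ".png", ".gif", ".bmp", ".mp4",
              ".mp3", ".wav", ".avi", ".mov", ".zip", ".pdf")

def filter_media_urls(results):
    # Keep only search hits whose URL does not contain a media/binary
    # extension; mirrors the list comprehension in the script above
    # (substring match on the lowercased URL, same extension list).
    return [r for r in results
            if not any(ext in r["url"].lower() for ext in MEDIA_EXTS)]

hits = [{"url": "https://a.example/paper.PDF"},
        {"url": "https://b.example/article"}]
kept = filter_media_urls(hits)
print([h["url"] for h in kept])
```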
deep_search/search_o1/scripts/SimpleDeepSearcher/README.md ADDED
@@ -0,0 +1,145 @@
+ 
+ # SimpleDeepSearcher: Deep Information Seeking via Web-Powered Reasoning Trajectory Synthesis
+ 
+ ## 🚀 Data Synthesis in a Real Web Environment
+ 
+ ### 1. Launch the Summary Model
+ 
+ ```bash
+ export CUDA_VISIBLE_DEVICES=0,1
+ vllm serve "YOUR_SUMMARY_MODEL_PATH" \
+     --tensor-parallel-size=2 \
+     --gpu-memory-utilization 0.95 \
+     --port 8001 > inference/output/vllm_serve.log 2>&1 &
+ ```
+ 
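Once the server is up, the inference script reaches it through vLLM's OpenAI-compatible `/v1` API at the `--base_url` shown below. A minimal sketch of the request shape (model name and prompt are placeholders; this only builds the payload and does not contact a server):

```python
import json

def build_chat_request(model: str, user_prompt: str) -> dict:
    # Payload shape accepted by an OpenAI-compatible /v1/chat/completions
    # endpoint, e.g. the vLLM server launched above on localhost:8001.
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_prompt}],
        "temperature": 0.7,
        "max_tokens": 512,
    }

payload = build_chat_request("YOUR_SUMMARY_MODEL_PATH", "Summarize: ...")
print(json.dumps(payload)[:60])
```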
+ ---
+ 
+ ### 2. Generate Inference Search Trajectories
+ 
+ ```bash
+ export CUDA_VISIBLE_DEVICES=0,1
+ python -u inference/inference.py \
+     --dataset_name bamboogle \
+     --cache_dir_base cache \
+     --output_dir_base inference/output \
+     --max_search_limit 10 \
+     --max_turn 10 \
+     --top_k 10 \
+     --max_doc_len 5000 \
+     --model_path "REASON_MODEL_PATH" \
+     --summary_model_path "SUMMARY_MODEL_PATH" \
+     --base_url "http://localhost:8001/v1" \
+     --google_subscription_key "YOUR_KEY" \
+     --google_endpoint "https://google.serper.dev/search" > inference/output/output.log 2>&1
+ ```
+ 
+ ---
+ 
+ ## 🎯 Data Construction
+ 
+ ### 🔍 Query Sampling
+ 
+ #### 1. Annotate Domains and Keywords of Labeled Data
+ 
+ ```bash
+ python process_data/query_sampling/data_tag_domain_keypoints.py \
+     --input_file_path "/path/to/your/input.json" \
+     --cuda_visible_devices "0,1" \
+     --model_path "/path/to/your/tag_model"
+ ```
+ 
+ #### 2. Extract Domains and Keywords
+ 
+ ```bash
+ python process_data/query_sampling/extract_domain_keypoints.py \
+     --input_file_path "/path/to/your/input_tagged.json" \
+     --output_file_path "/path/to/your/output_extracted.json"
+ ```
+ 
+ #### 3. Count Number of Units
+ 
+ ```bash
+ python process_data/query_sampling/units_count.py \
+     --input_file "/path/to/your/output_extracted.json"
+ ```
+ 
+ #### 4. Sample Questions
+ 
+ ```bash
+ python process_data/query_sampling/query_sampling.py \
+     --input_file "/path/to/your/output_extracted.json" \
+     --total_samples 3000
+ ```
+ 
+ ---
+ 
+ ### 💬 Response Curation
+ 
+ #### 1. Format Data
+ 
+ ```bash
+ python process_data/repsonse_curation/format_data.py \
+     --input_file "/path/to/your/input_file.json"
+ ```
+ 
+ #### 2. Filter Responses
+ 
+ ```bash
+ python process_data/repsonse_curation/format_filter.py \
+     --input_file "/path/to/your/formatted_data.json"
+ ```
+ 
+ ---
+ 
+ ## 🧠 SFT Training
+ 
+ > Run the following script after replacing the corresponding variables:
+ 
+ ```bash
+ export OMP_NUM_THREADS=20
+ export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
+ 
+ # Define parameters
+ lr=1e-5
+ base="BACKBONE"              # path to base model
+ tokenizer="TOKENIZER"        # path to tokenizer
+ train_data="TRAINING_DATA"   # path to train data
+ bsz=2                        # per-device batch size
+ acc=4                        # gradient accumulation steps
+ save_path="YOUR_SAVE_PATH"
+ output_dir="YOUR_OUTPUT_DIR"
+ 
+ # Create output directory
+ mkdir -p "$output_dir"
+ echo ${output_dir}
+ 
+ # Execute deepspeed command
+ deepspeed \
+     --master_port=9944 \
+     sft/sft.py \
+     --deepspeed sft/ds_zero3_offload.json \
+     --model_name_or_path $base \
+     --tokenizer_name_or_path $tokenizer \
+     --do_train \
+     --save_safetensors true \
+     --data_path $train_data \
+     --lr_scheduler_type cosine \
+     --output_dir $output_dir \
+     --overwrite_output_dir \
+     --warmup_ratio 0.03 \
+     --gradient_checkpointing true \
+     --per_device_train_batch_size $bsz \
+     --gradient_accumulation_steps $acc \
+     --logging_steps 1 \
+     --learning_rate "$lr" \
+     --num_train_epochs 6 \
+     --save_strategy epoch \
+     --save_only_model true \
+     --model_max_length 30000 \
+     --save_total_limit 5 \
+     --bf16 || exit 1
+ ```
+ 
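With the settings above (per-device batch size 2, gradient accumulation 4, eight visible GPUs), the effective global batch size is the product of the three. A quick sanity check:

```python
per_device_bsz = 2   # bsz in the script above
grad_accum = 4       # acc
num_gpus = 8         # CUDA_VISIBLE_DEVICES=0..7

# Effective global batch size seen by the optimizer per update step.
effective_bsz = per_device_bsz * grad_accum * num_gpus
print(effective_bsz)
```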
deep_search/search_o1/scripts/SimpleDeepSearcher/data/eval/2wiki.json ADDED
The diff for this file is too large to render. See raw diff
 
deep_search/search_o1/scripts/SimpleDeepSearcher/data/eval/aime.json ADDED
The diff for this file is too large to render. See raw diff
 
deep_search/search_o1/scripts/SimpleDeepSearcher/data/eval/bamboogle.json ADDED
@@ -0,0 +1,1002 @@
+ [
+ {
+ "Question": "Who was president of the United States in the year that Citibank was founded?",
+ "source": "bamboogle",
+ "id": "bamboogle_1",
+ "answer": [
+ "james madison"
+ ]
+ },
+ {
+ "Question": "What rocket was the first spacecraft that ever approached Uranus launched on?",
+ "source": "bamboogle",
+ "id": "bamboogle_2",
+ "answer": [
+ "Titan IIIE"
+ ]
+ },
+ {
+ "Question": "In what year was the company that was founded as Sound of Music added to the S&P 500?",
+ "source": "bamboogle",
+ "id": "bamboogle_3",
+ "answer": [
+ "1999"
+ ]
+ },
+ {
+ "Question": "Who was the first African American mayor of the most populous city in the United States?",
+ "source": "bamboogle",
+ "id": "bamboogle_4",
+ "answer": [
+ "David Dinkins"
+ ]
+ },
+ {
+ "Question": "When did the last king from Britain's House of Hanover die?",
+ "source": "bamboogle",
+ "id": "bamboogle_5",
+ "answer": [
+ "20 June 1837"
+ ]
+ },
+ {
+ "Question": "When did the president who set the precedent of a two term limit enter office?",
+ "source": "bamboogle",
+ "id": "bamboogle_6",
+ "answer": [
+ "April 30, 1789"
+ ]
+ },
+ {
+ "Question": "When did the president who set the precedent of a two term limit leave office?",
+ "source": "bamboogle",
+ "id": "bamboogle_7",
+ "answer": [
+ "March 4, 1797"
+ ]
+ },
+ {
+ "Question": "How many people died in the second most powerful earthquake ever recorded?",
+ "source": "bamboogle",
+ "id": "bamboogle_8",
+ "answer": [
+ "131"
+ ]
+ },
+ {
+ "Question": "Can people who have celiac eat camel meat?",
+ "source": "bamboogle",
+ "id": "bamboogle_9",
+ "answer": [
+ "Yes"
+ ]
+ },
+ {
+ "Question": "What was the final book written by the author of On the Origin of Species?",
+ "source": "bamboogle",
+ "id": "bamboogle_10",
+ "answer": [
+ "The Formation of Vegetable Mould Through the Action of Worms"
+ ]
+ },
+ {
+ "Question": "When was the company that built the first steam locomotive to carry passengers on a public rail line founded?",
+ "source": "bamboogle",
+ "id": "bamboogle_11",
+ "answer": [
+ "1823"
+ ]
+ },
+ {
+ "Question": "Which Theranos whistleblower is not related to a senior American government official?",
+ "source": "bamboogle",
+ "id": "bamboogle_12",
+ "answer": [
+ "Erika Cheung"
+ ]
+ },
+ {
+ "Question": "What is the fastest air-breathing manned aircraft mostly made out of?",
+ "source": "bamboogle",
+ "id": "bamboogle_13",
+ "answer": [
+ "Titanium"
+ ]
+ },
+ {
+ "Question": "Who built the fastest air-breathing manned aircraft?",
+ "source": "bamboogle",
+ "id": "bamboogle_14",
+ "answer": [
+ "Lockheed Corporation"
+ ]
+ },
+ {
+ "Question": "When was the author of The Population Bomb born?",
+ "source": "bamboogle",
+ "id": "bamboogle_15",
+ "answer": [
+ "May 29, 1932"
+ ]
+ },
+ {
+ "Question": "When did the author of Annabel Lee enlist in the army?",
+ "source": "bamboogle",
+ "id": "bamboogle_16",
+ "answer": [
+ "1827"
+ ]
+ },
+ {
+ "Question": "What was the religion of the inventor of the Polio vaccine?",
+ "source": "bamboogle",
+ "id": "bamboogle_17",
+ "answer": [
+ "Jewish"
+ ]
+ },
+ {
+ "Question": "Who was the second wife of the founder of CNN?",
+ "source": "bamboogle",
+ "id": "bamboogle_18",
+ "answer": [
+ "Jane Shirley Smith"
+ ]
+ },
+ {
+ "Question": "When did the first prime minister of the Russian Empire come into office?",
+ "source": "bamboogle",
+ "id": "bamboogle_19",
+ "answer": [
+ "November 6, 1905"
+ ]
+ },
+ {
+ "Question": "What is the primary male hormone derived from?",
+ "source": "bamboogle",
+ "id": "bamboogle_20",
+ "answer": [
+ "cholesterol"
+ ]
+ },
+ {
+ "Question": "The Filipino statesman who established the government-in-exile during the outbreak of World War II was also the mayor of what city?",
+ "source": "bamboogle",
+ "id": "bamboogle_21",
+ "answer": [
+ "Quezon City"
+ ]
+ },
+ {
+ "Question": "Where was the person who shared the Nobel Prize in Physics in 1954 with Max Born born?",
+ "source": "bamboogle",
+ "id": "bamboogle_22",
+ "answer": [
+ "Oranienburg, Germany"
+ ]
+ },
+ {
+ "Question": "When was the person who shared the Nobel Prize in Physics in 1954 with Max Born born?",
+ "source": "bamboogle",
+ "id": "bamboogle_23",
+ "answer": [
+ "January 8, 1891"
+ ]
+ },
+ {
+ "Question": "What was the founding date of the university in which Plotonium was discovered?",
+ "source": "bamboogle",
+ "id": "bamboogle_24",
+ "answer": [
+ "March 23, 1868"
+ ]
+ },
+ {
+ "Question": "The material out of which the Great Sphinx of Giza is made of is mainly composed of what mineral?",
+ "source": "bamboogle",
+ "id": "bamboogle_25",
+ "answer": [
+ "calcite"
+ ]
+ },
+ {
+ "Question": "The husband of Lady Godiva was Earl of which Anglic kingdom?",
+ "source": "bamboogle",
+ "id": "bamboogle_26",
+ "answer": [
+ "Mercia"
+ ]
+ },
+ {
+ "Question": "The machine used to extract honey from honeycombs uses which physical force?",
+ "source": "bamboogle",
+ "id": "bamboogle_27",
+ "answer": [
+ "Centrifugal Force"
+ ]
+ },
+ {
+ "Question": "What is the third letter of the top level domain of the military?",
+ "source": "bamboogle",
+ "id": "bamboogle_28",
+ "answer": [
+ "l"
+ ]
+ },
+ {
+ "Question": "In what year was the government department where the internet originated at founded?",
+ "source": "bamboogle",
+ "id": "bamboogle_29",
+ "answer": [
+ "1947"
+ ]
+ },
+ {
+ "Question": "The main actor of Indiana Jones is a licensed what?",
+ "source": "bamboogle",
+ "id": "bamboogle_30",
+ "answer": [
+ "pilot"
+ ]
+ },
+ {
+ "Question": "When was the person after which the Hubble Space Telescope is named after born?",
+ "source": "bamboogle",
+ "id": "bamboogle_31",
+ "answer": [
+ "November 20, 1889"
+ ]
+ },
+ {
+ "Question": "When did the person who gave the Checkers speech die?",
+ "source": "bamboogle",
+ "id": "bamboogle_32",
+ "answer": [
+ "April 22, 1994"
+ ]
+ },
+ {
+ "Question": "When was the philosopher that formulated the hard problem of consciousness born?",
+ "source": "bamboogle",
+ "id": "bamboogle_33",
+ "answer": [
+ "April 20, 1966"
+ ]
+ },
+ {
+ "Question": "What is the capital of the second largest state in the US by area?",
+ "source": "bamboogle",
+ "id": "bamboogle_34",
+ "answer": [
+ "Austin"
+ ]
+ },
+ {
+ "Question": "What is the maximum airspeed (in km/h) of the third fastest bird?",
+ "source": "bamboogle",
+ "id": "bamboogle_35",
+ "answer": [
+ "320 km/h"
+ ]
+ },
+ {
+ "Question": "Who founded the city where the founder of geometry lived?",
+ "source": "bamboogle",
+ "id": "bamboogle_36",
+ "answer": [
+ "Alexander the Great"
+ ]
+ },
+ {
+ "Question": "What is the capital of the country where yoga originated?",
+ "source": "bamboogle",
+ "id": "bamboogle_37",
+ "answer": [
+ "New Delhi"
+ ]
+ },
+ {
+ "Question": "The fourth largest city in Germany was originally called what?",
+ "source": "bamboogle",
+ "id": "bamboogle_38",
+ "answer": [
+ "Colonia Claudia Ara Agrippinensium"
+ ]
+ },
+ {
+ "Question": "When did Nirvana's second most selling studio album come out?",
+ "source": "bamboogle",
+ "id": "bamboogle_39",
+ "answer": [
+ "September 13, 1993"
+ ]
+ },
+ {
+ "Question": "What was the job of the father of the founder of psychoanalysis?",
+ "source": "bamboogle",
+ "id": "bamboogle_40",
+ "answer": [
+ "wool merchant"
+ ]
+ },
+ {
+ "Question": "How much protein in four boiled egg yolks?",
+ "source": "bamboogle",
+ "id": "bamboogle_41",
+ "answer": [
+ "10.8"
+ ]
+ },
+ {
+ "Question": "What is the political party of the American president who entered into the Paris agreement?",
+ "source": "bamboogle",
+ "id": "bamboogle_42",
+ "answer": [
+ "Democratic Party"
+ ]
+ },
+ {
+ "Question": "The most populous city in Punjab is how large (area wise)?",
+ "source": "bamboogle",
+ "id": "bamboogle_43",
+ "answer": [
+ "310 square kilometers"
+ ]
+ },
+ {
+ "Question": "What was the death toll of the second largest volcanic eruption in the 20th century?",
+ "source": "bamboogle",
+ "id": "bamboogle_44",
+ "answer": [
+ "847"
+ ]
+ },
+ {
+ "Question": "What was the death toll of the most intense Atlantic hurricane?",
+ "source": "bamboogle",
+ "id": "bamboogle_45",
+ "answer": [
+ "52"
+ ]
+ },
+ {
+ "Question": "Who was the head of NASA during Apollo 11?",
+ "source": "bamboogle",
+ "id": "bamboogle_46",
+ "answer": [
+ "Thomas O. Paine"
+ ]
+ },
+ {
+ "Question": "Who is the father of the father of George Washington?",
+ "source": "bamboogle",
+ "id": "bamboogle_47",
+ "answer": [
+ "Lawrence Washington"
+ ]
+ },
+ {
+ "Question": "Who is the mother of the father of George Washington?",
+ "source": "bamboogle",
+ "id": "bamboogle_48",
+ "answer": [
+ "Mildred Warner"
+ ]
+ },
+ {
+ "Question": "Who is the father of the father of Barack Obama?",
+ "source": "bamboogle",
+ "id": "bamboogle_49",
+ "answer": [
+ "Hussein Onyango Obama"
+ ]
+ },
+ {
+ "Question": "Who is the mother of the father of Barack Obama?",
+ "source": "bamboogle",
+ "id": "bamboogle_50",
+ "answer": [
+ "Habiba Akumu Nyanjango"
+ ]
+ },
+ {
+ "Question": "Who was mayor of New York City when Fiorello H. La Guardia was born?",
+ "source": "bamboogle",
+ "id": "bamboogle_51",
+ "answer": [
+ "William Russell Grace"
+ ]
+ },
+ {
+ "Question": "Who was president of the U.S. when superconductivity was discovered?",
+ "source": "bamboogle",
+ "id": "bamboogle_52",
+ "answer": [
+ "William Howard Taft"
+ ]
+ },
+ {
+ "Question": "When was the person Russ Hanneman is based on born?",
+ "source": "bamboogle",
+ "id": "bamboogle_53",
+ "answer": [
+ "July 31, 1958"
+ ]
+ },
+ {
+ "Question": "When was the first location of the world's largest coffeehouse chain opened?",
+ "source": "bamboogle",
+ "id": "bamboogle_54",
+ "answer": [
+ "March 30, 1971"
+ ]
+ },
+ {
+ "Question": "Who directed the highest grossing film?",
+ "source": "bamboogle",
+ "id": "bamboogle_55",
+ "answer": [
+ "James Cameroon"
+ ]
+ },
+ {
+ "Question": "When was the longest bridge in the world opened?",
+ "source": "bamboogle",
+ "id": "bamboogle_56",
+ "answer": [
+ "30 June 2011"
+ ]
+ },
+ {
+ "Question": "Which company was responsible for the largest pharmaceutical settlement?",
+ "source": "bamboogle",
+ "id": "bamboogle_57",
+ "answer": [
+ "GlaxoSmithKline"
+ ]
+ },
+ {
+ "Question": "In what year was the tallest self-supporting tower completed?",
+ "source": "bamboogle",
+ "id": "bamboogle_58",
+ "answer": [
+ "2012"
+ ]
+ },
+ {
+ "Question": "In what year was the tallest fixed steel structure completed?",
+ "source": "bamboogle",
+ "id": "bamboogle_59",
+ "answer": [
+ "1988"
+ ]
+ },
+ {
+ "Question": "In what year was the tallest lattice tower completed?",
+ "source": "bamboogle",
+ "id": "bamboogle_60",
+ "answer": [
+ "2012"
+ ]
+ },
+ {
+ "Question": "In what year was the current tallest wooden lattice tower completed?",
+ "source": "bamboogle",
+ "id": "bamboogle_61",
+ "answer": [
+ "1935"
+ ]
+ },
+ {
+ "Question": "In what country is the second tallest statue in the world?",
+ "source": "bamboogle",
+ "id": "bamboogle_62",
+ "answer": [
+ "China"
+ ]
+ },
+ {
+ "Question": "When was the tallest ferris wheel in the world completed?",
+ "source": "bamboogle",
+ "id": "bamboogle_63",
+ "answer": [
+ "2021"
+ ]
+ },
+ {
+ "Question": "In what year was the tallest lighthouse completed?",
+ "source": "bamboogle",
+ "id": "bamboogle_64",
+ "answer": [
+ "1902"
+ ]
+ },
+ {
+ "Question": "In what country is the world largest desalination plant?",
+ "source": "bamboogle",
+ "id": "bamboogle_65",
+ "answer": [
+ "Saudi Arabia"
+ ]
+ },
+ {
+ "Question": "The most populous national capital city was established in what year?",
+ "source": "bamboogle",
+ "id": "bamboogle_66",
+ "answer": [
+ "1045 BC"
+ ]
+ },
+ {
+ "Question": "The third largest river (by discharge) in the world is in what countries?",
+ "source": "bamboogle",
+ "id": "bamboogle_67",
+ "answer": [
+ "India and Bangladesh"
+ ]
+ },
+ {
+ "Question": "What is the highest elevation (in meters) of the second largest island in the world?",
+ "source": "bamboogle",
+ "id": "bamboogle_68",
+ "answer": [
+ "4,884 m"
+ ]
+ },
+ {
+ "Question": "What is the length of the second deepest river in the world?",
+ "source": "bamboogle",
+ "id": "bamboogle_69",
+ "answer": [
+ "6,300 km"
+ ]
+ },
+ {
+ "Question": "In what country is the third largest stadium in the world?",
+ "source": "bamboogle",
+ "id": "bamboogle_70",
+ "answer": [
+ "United States"
+ ]
+ },
+ {
+ "Question": "Who is the largest aircraft carrier in the world is named after?",
+ "source": "bamboogle",
+ "id": "bamboogle_71",
+ "answer": [
+ "Gerald R. Ford"
+ ]
+ },
+ {
+ "Question": "In what year did the oldest cat ever recorded with the Cat of the Year award?",
+ "source": "bamboogle",
+ "id": "bamboogle_72",
+ "answer": [
+ "1999"
+ ]
+ },
+ {
+ "Question": "In what year was the country that is the third largest exporter of coffee founded?",
+ "source": "bamboogle",
+ "id": "bamboogle_73",
+ "answer": [
+ "1810"
+ ]
+ },
+ {
+ "Question": "Who was the commander for the space mission that had the first spacewalk?",
+ "source": "bamboogle",
+ "id": "bamboogle_74",
+ "answer": [
+ "Pavel Belyayev"
+ ]
+ },
+ {
+ "Question": "Who is the predecessor of the longest-reigning British monarch?",
+ "source": "bamboogle",
+ "id": "bamboogle_75",
+ "answer": [
+ "George VI\n"
+ ]
+ },
+ {
+ "Question": "In 2016, who was the host of the longest running talk show?",
+ "source": "bamboogle",
+ "id": "bamboogle_76",
+ "answer": [
+ "Jimmy Fallon"
+ ]
+ },
+ {
+ "Question": "In 2016, who was the host of the longest running American game show?",
+ "source": "bamboogle",
+ "id": "bamboogle_77",
+ "answer": [
+ "Drew Carey"
+ ]
+ },
+ {
+ "Question": "Who wrote the novel on which the longest running show in Broadway history is based on?",
+ "source": "bamboogle",
+ "id": "bamboogle_78",
+ "answer": [
+ "Gaston Leroux"
+ ]
+ },
+ {
+ "Question": "In what country was the only cruise line that flies the American flag incorporated in?",
+ "source": "bamboogle",
+ "id": "bamboogle_79",
+ "answer": [
+ "Bermuda"
+ ]
+ },
+ {
+ "Question": "In what year did work begin on the second longest road tunnel in the world?",
+ "source": "bamboogle",
+ "id": "bamboogle_80",
+ "answer": [
+ "1992"
+ ]
+ },
+ {
+ "Question": "What is the official color of the third oldest surviving university?",
+ "source": "bamboogle",
+ "id": "bamboogle_81",
+ "answer": [
+ "Cambridge Blue"
+ ]
+ },
+ {
+ "Question": "Who succeeded the longest reigning Roman emperor?",
+ "source": "bamboogle",
+ "id": "bamboogle_82",
+ "answer": [
+ "Tiberius"
+ ]
+ },
+ {
+ "Question": "Who preceded the Roman emperor that declared war on the sea?",
+ "source": "bamboogle",
+ "id": "bamboogle_83",
+ "answer": [
+ "Tiberius"
+ ]
+ },
+ {
+ "Question": "Who produced the longest running video game franchise?",
+ "source": "bamboogle",
+ "id": "bamboogle_84",
+ "answer": [
+ "MECC"
+ ]
+ },
+ {
+ "Question": "Who was the father of the father of psychoanalysis?",
+ "source": "bamboogle",
+ "id": "bamboogle_85",
+ "answer": [
+ "Jacob Freud"
+ ]
+ },
+ {
+ "Question": "Who was the father of the father of empiricism?",
+ "source": "bamboogle",
+ "id": "bamboogle_86",
+ "answer": [
+ "Sir Nicholas Bacon"
+ ]
+ },
+ {
+ "Question": "Who is the father of the father of observational astronomy?",
+ "source": "bamboogle",
+ "id": "bamboogle_87",
+ "answer": [
+ "Vincenzo Galilei"
+ ]
+ },
+ {
+ "Question": "Who is the father of the father of modern Hebrew?",
+ "source": "bamboogle",
+ "id": "bamboogle_88",
+ "answer": [
+ "Yehuda Leib"
+ ]
+ },
+ {
+ "Question": "Who is the father of the father of modern experimental psychology?",
+ "source": "bamboogle",
+ "id": "bamboogle_89",
+ "answer": [
+ "Maximilian Wundt"
+ ]
+ },
+ {
+ "Question": "Who is the father of the originator of cybernetics?",
+ "source": "bamboogle",
+ "id": "bamboogle_90",
+ "answer": [
+ "Leo Wiener"
+ ]
+ },
+ {
+ "Question": "Who is the father of the father of the hydrogen bomb?",
+ "source": "bamboogle",
+ "id": "bamboogle_91",
+ "answer": [
+ "Max Teller"
+ ]
+ },
+ {
+ "Question": "Who was the father of the father of computer science?",
+ "source": "bamboogle",
+ "id": "bamboogle_92",
+ "answer": [
+ "Julius Mathison Turing"
+ ]
+ },
+ {
+ "Question": "Who was the father of the father of behaviorism?",
+ "source": "bamboogle",
+ "id": "bamboogle_93",
+ "answer": [
+ "Pickens Butler Watson"
+ ]
+ },
+ {
+ "Question": "Who was the father of the founder of modern human anatomy?",
+ "source": "bamboogle",
+ "id": "bamboogle_94",
+ "answer": [
+ "Anders van Wesel"
+ ]
+ },
+ {
+ "Question": "What was the father of the last surviving Canadian father of Confederation?",
+ "source": "bamboogle",
+ "id": "bamboogle_95",
+ "answer": [
+ "Charles Tupper Sr."
+ ]
+ },
+ {
+ "Question": "When was the person who said “Now, I am become Death, the destroyer of worlds.” born?",
+ "source": "bamboogle",
+ "id": "bamboogle_96",
+ "answer": [
+ "April 22, 1904"
+ ]
+ },
+ {
+ "Question": "Who was the father of the father of information theory?",
+ "source": "bamboogle",
+ "id": "bamboogle_97",
+ "answer": [
+ "Claude Sr."
+ ]
+ },
+ {
+ "Question": "When was the person who delivered the \"Quit India\" speech born?",
+ "source": "bamboogle",
+ "id": "bamboogle_98",
+ "answer": [
+ "October 2, 1869"
+ ]
+ },
+ {
+ "Question": "When did the president who warned about the military industrial complex die?",
+ "source": "bamboogle",
+ "id": "bamboogle_99",
+ "answer": [
+ "March 28, 1969"
+ ]
+ },
+ {
+ "Question": "When did the president who said Tear Down This Wall die?",
+ "source": "bamboogle",
+ "id": "bamboogle_100",
+ "answer": [
+ "June 5, 2004"
+ ]
+ },
+ {
+ "Question": "What is the lowest elevation of the longest railway tunnel?",
+ "source": "bamboogle",
+ "id": "bamboogle_101",
+ "answer": [
+ "312 m"
+ ]
+ },
+ {
+ "Question": "When did the person who said \"Cogito, ergo sum.\" die?",
+ "source": "bamboogle",
+ "id": "bamboogle_102",
+ "answer": [
+ "February 11, 1650"
+ ]
+ },
+ {
+ "Question": "When did the person who delivered the Gettysburg Address die?",
+ "source": "bamboogle",
+ "id": "bamboogle_103",
+ "answer": [
+ "April 15, 1865"
+ ]
+ },
+ {
+ "Question": "Who was governor of Florida during Hurricane Irma?",
+ "source": "bamboogle",
+ "id": "bamboogle_104",
+ "answer": [
+ "Rick Scott"
+ ]
+ },
+ {
+ "Question": "For which club did the winner of the 2007 Ballon d'Or play for in 2012?",
+ "source": "bamboogle",
+ "id": "bamboogle_105",
+ "answer": [
+ "Real Madrid"
+ ]
+ },
+ {
+ "Question": "What's the capital city of the country that was the champion of the 2010 World Cup?",
+ "source": "bamboogle",
+ "id": "bamboogle_106",
+ "answer": [
+ "Madrid"
+ ]
+ },
+ {
+ "Question": "When was the anime studio that made Sword Art Online founded?",
+ "source": "bamboogle",
+ "id": "bamboogle_107",
+ "answer": [
+ "May 9, 2005"
+ ]
+ },
+ {
+ "Question": "Who was the first king of the longest Chinese dynasty?",
+ "source": "bamboogle",
+ "id": "bamboogle_108",
+ "answer": [
+ "King Wu of Zhou"
+ ]
+ },
+ {
+ "Question": "Who was the last emperor of the dynasty that succeeded the Song dynasty?",
868
+ "source": "bamboogle",
869
+ "id": "bamboogle_109",
870
+ "answer": [
871
+ "Toghon Temür"
872
+ ]
873
+ },
874
+ {
875
+ "Question": "What's the motto of the oldest California State university?",
876
+ "source": "bamboogle",
877
+ "id": "bamboogle_110",
878
+ "answer": [
879
+ "Powering Silicon Valley"
880
+ ]
881
+ },
882
+ {
883
+ "Question": "What's the capital of the state that the College of William & Mary is in?",
884
+ "source": "bamboogle",
885
+ "id": "bamboogle_111",
886
+ "answer": [
887
+ "Richmond"
888
+ ]
889
+ },
890
+ {
891
+ "Question": "What's the capital of the state that Washington University in St. Louis is in?",
892
+ "source": "bamboogle",
893
+ "id": "bamboogle_112",
894
+ "answer": [
895
+ "Jefferson City"
896
+ ]
897
+ },
898
+ {
899
+ "Question": "What's the capital of the state that Harvard University is in?",
900
+ "source": "bamboogle",
901
+ "id": "bamboogle_113",
902
+ "answer": [
903
+ "Boston"
904
+ ]
905
+ },
906
+ {
907
+ "Question": "What's the capital of the state that the Space Needle is at?",
908
+ "source": "bamboogle",
909
+ "id": "bamboogle_114",
910
+ "answer": [
911
+ "Olympia"
912
+ ]
913
+ },
914
+ {
915
+ "Question": "Which team won in women's volleyball in the most recent Summer Olympics that was held in London?",
916
+ "source": "bamboogle",
917
+ "id": "bamboogle_115",
918
+ "answer": [
919
+ "Brazil"
920
+ ]
921
+ },
922
+ {
923
+ "Question": "What is the nickname of the easternmost U.S. state?",
924
+ "source": "bamboogle",
925
+ "id": "bamboogle_116",
926
+ "answer": [
927
+ "Pine Tree State"
928
+ ]
929
+ },
930
+ {
931
+ "Question": "What is the nickname for the state that is the home to the “Avocado Capital of the World\"?",
932
+ "source": "bamboogle",
933
+ "id": "bamboogle_117",
934
+ "answer": [
935
+ "Golden State"
936
+ ]
937
+ },
938
+ {
939
+ "Question": "What rocket was used for the mission that landed the first humans on the moon?",
940
+ "source": "bamboogle",
941
+ "id": "bamboogle_118",
942
+ "answer": [
943
+ "Saturn V"
944
+ ]
945
+ },
946
+ {
947
+ "Question": "When did the war that Neil Armstrong served in end?",
948
+ "source": "bamboogle",
949
+ "id": "bamboogle_119",
950
+ "answer": [
951
+ "July 27, 1953"
952
+ ]
953
+ },
954
+ {
955
+ "Question": "What is the nickname for the state that Mount Rainier is located in?",
956
+ "source": "bamboogle",
957
+ "id": "bamboogle_120",
958
+ "answer": [
959
+ "Evergreen State"
960
+ ]
961
+ },
962
+ {
963
+ "Question": "When was the composer of Carol of the Bells born?",
964
+ "source": "bamboogle",
965
+ "id": "bamboogle_121",
966
+ "answer": [
967
+ "December 13, 1877"
968
+ ]
969
+ },
970
+ {
971
+ "Question": "Who is the father of the scientist at MIT that won the Queen Elizabeth Prize for Engineering in 2013?",
972
+ "source": "bamboogle",
973
+ "id": "bamboogle_122",
974
+ "answer": [
975
+ "Conway Berners-Lee"
976
+ ]
977
+ },
978
+ {
979
+ "Question": "Who was the mother of the emperor of Japan during World War I?",
980
+ "source": "bamboogle",
981
+ "id": "bamboogle_123",
982
+ "answer": [
983
+ "Yanagiwara Naruko"
984
+ ]
985
+ },
986
+ {
987
+ "Question": "Which element has an atomic number that is double that of hydrogen?",
988
+ "source": "bamboogle",
989
+ "id": "bamboogle_124",
990
+ "answer": [
991
+ "Helium"
992
+ ]
993
+ },
994
+ {
995
+ "Question": "What was the motto of the Olympics that had Fuwa as the mascots?",
996
+ "source": "bamboogle",
997
+ "id": "bamboogle_125",
998
+ "answer": [
999
+ "One World, One Dream"
1000
+ ]
1001
+ }
1002
+ ]
deep_search/search_o1/scripts/SimpleDeepSearcher/data/eval/frames.json ADDED
The diff for this file is too large to render. See raw diff
 
deep_search/search_o1/scripts/SimpleDeepSearcher/data/eval/gaia.json ADDED
The diff for this file is too large to render. See raw diff
 
deep_search/search_o1/scripts/SimpleDeepSearcher/data/eval/musique.json ADDED
The diff for this file is too large to render. See raw diff
 
deep_search/search_o1/scripts/SimpleDeepSearcher/eval/gpt_eval_sft.py ADDED
@@ -0,0 +1,216 @@
+ from openai import OpenAI
+ import os
+ import json
+ import re
+ import time
+ import multiprocessing
+ from tqdm import tqdm
+
+
+ os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
+ os.environ["OPENAI_API_BASE"] = "YOUR_API_BASE"
+
+ client = OpenAI(
+     api_key=os.environ.get("OPENAI_API_KEY"),
+     base_url=os.environ.get("OPENAI_API_BASE")
+ )
+
+
+ def generate(messages, model_name):
+     retry_cnt = 0
+     while True:  # retry until the API call succeeds
+         try:
+             if retry_cnt:
+                 print(f"Retry: {retry_cnt}")
+             response = client.chat.completions.create(
+                 model=model_name,
+                 messages=messages,
+                 max_tokens=1024,
+                 temperature=0,
+             )
+             return response.choices[0].message.content
+         except Exception as e:
+             retry_cnt += 1
+             print(f"Error: {e}")
+             time.sleep(0.5)
+
+
+ def process_one_sample(obj):
+     prompt = '''Given a Question and its Golden Answer, verify whether the Predicted Answer is correct. The prediction is correct if it fully aligns with the meaning and key information of the Golden Answer. Respond with True if the prediction is correct and False otherwise.
+ Golden Answer may have multiple options, and matching any one of them is considered correct.
+
+ Question: {question}
+ Golden Answer: {reference}
+ Predicted Answer: {prediction}
+ '''
+
+     question = obj["item"]["Question"]
+     reference_ans_ori = obj["item"]["answer"]
+
+     # Normalize the golden answer into a single string.
+     if isinstance(reference_ans_ori, bool):
+         reference_ans = "yes" if reference_ans_ori else "no"
+     elif isinstance(reference_ans_ori, str):
+         reference_ans = reference_ans_ori
+     elif isinstance(reference_ans_ori, list):
+         reference_ans = "; ".join(str(a) for a in reference_ans_ori)
+     else:
+         raise TypeError(f"Unsupported answer type: {type(reference_ans_ori)}")
+
+     solution = obj["output"]
+
+     # Use the last \boxed{...} span in the model output as the predicted answer.
+     matches = re.findall(r'\\boxed\{(.*)\}', solution)
+     proposed_ans = matches[-1] if matches else "No answer"
+
+     gpt4o_input = prompt.format(question=question, reference=reference_ans, prediction=proposed_ans)
+     messages = [{'role': 'user', 'content': gpt4o_input}]
+     obj["gpt4o_output"] = generate(messages, 'gpt-4o-mini')
+
+     obj_new = {
+         "question": question,
+         "reference_ans": reference_ans,
+         "predicted_ans": proposed_ans,
+         "gpt4o_output": obj["gpt4o_output"],
+         "source": obj["item"]["source"],
+     }
+
+     return obj_new
+
+
+ def cal_metrics(results):
+     metrics = {
+         "is_correct": 0,
+         "is_incorrect": 0,
+         "invalid_judge": 0,
+         "num": 0
+     }
+
+     error_cnt = 0
+     source_metrics = {}
+
+     for sample in results:
+         source = sample["source"]
+         if source not in source_metrics:
+             source_metrics[source] = {
+                 "is_correct": 0,
+                 "is_incorrect": 0,
+                 "invalid_judge": 0,
+                 "num": 0
+             }
+
+         # Count totals
+         metrics["num"] += 1
+         source_metrics[source]["num"] += 1
+
+         # Tally the judge's verdict
+         if sample["gpt4o_output"] == "True":
+             metrics["is_correct"] += 1
+             source_metrics[source]["is_correct"] += 1
+         elif sample["gpt4o_output"] == "False":
+             metrics["is_incorrect"] += 1
+             source_metrics[source]["is_incorrect"] += 1
+         else:
+             metrics["invalid_judge"] += 1
+             source_metrics[source]["invalid_judge"] += 1
+             error_cnt += 1
+
+     print("Total:", metrics["num"])
+     print(f"error_cnt: {error_cnt}")
+
+     # Convert overall counts to ratios
+     for key in metrics:
+         if key == "num":
+             continue
+         metrics[key] = metrics[key] / metrics["num"] if metrics["num"] > 0 else 0
+
+     # Convert per-source counts to ratios
+     for src in source_metrics:
+         for key in source_metrics[src]:
+             if key == "num":
+                 continue
+             source_metrics[src][key] = source_metrics[src][key] / source_metrics[src]["num"] if source_metrics[src]["num"] > 0 else 0
+
+     final_metrics = {'overall': metrics, 'per_source': source_metrics}
+
+     return final_metrics
+
+
+ if __name__ == '__main__':
+     input_files = [
+
+     ]
+
+     for input_file in input_files:
+         print(f"Begin: {input_file}")
+
+         output_file = input_file.replace(".json", "_judge_temp0.json")
+         chunk_size = 200
+         with open(input_file, "r") as fin:
+             all_demons = json.load(fin)  # load the whole JSON file (a list of samples)
+
+         for item in all_demons:
+             if 'source' not in item["item"]:
+                 item["item"]["source"] = 'unknown'
+
+         print("All Data Num:", len(all_demons))
+         chunk_num = len(all_demons) // chunk_size
+         if len(all_demons) % chunk_size != 0:
+             chunk_num += 1
+
+         all_results = []
+         for chunk_i in range(chunk_num):
+             print("Chunk:", chunk_i, "/", chunk_num)
+             all_demons_subset = all_demons[chunk_i * chunk_size: (chunk_i + 1) * chunk_size]
+             print(len(all_demons_subset))
+             with multiprocessing.Pool(processes=200) as pool:
+                 results = list(tqdm(pool.imap(process_one_sample, all_demons_subset), total=len(all_demons_subset)))
+
+             all_results.extend(results)
+
+         final_metrics = cal_metrics(all_results)
+
+         # Save the judged samples and the aggregated metrics as JSON.
+         with open(output_file, 'w') as fout:
+             json.dump(all_results, fout, ensure_ascii=False, indent=4)
+
+         with open(output_file.replace(".json", "_metrics_temp0.json"), 'w') as fout:
+             json.dump(final_metrics, fout, ensure_ascii=False, indent=4)
+
+         print(f"Processed data has been written to {output_file}")
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/__pycache__/add_eval.cpython-310.pyc ADDED
Binary file (21.2 kB). View file
 
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/__pycache__/evaluate.cpython-310.pyc ADDED
Binary file (9.96 kB). View file
 
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/__pycache__/google_search.cpython-310.pyc ADDED
Binary file (10.5 kB). View file
 
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/__pycache__/prompts.cpython-310.pyc ADDED
Binary file (3.52 kB). View file
 
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/add_eval.py ADDED
@@ -0,0 +1,705 @@
+ import argparse
+ import collections
+ import copy
+ import json
+ import logging
+ import os
+ import re
+ import string
+ import unicodedata
+ from typing import List
+
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import regex
+ import seaborn as sns
+ import torch
+ from tqdm import tqdm
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ # from utils import has_answer, EM_compute, F1_compute, AC_compute
+
+ # Mapping between spelled-out English numbers and digits, in both directions,
+ # used when expanding candidate answers.
+ num2alpha = {
+     'zero': '0', 'one': '1', 'two': '2', 'three': '3', 'four': '4', 'five': '5', 'six': '6', 'seven': '7', 'eight': '8', 'nine': '9', 'ten': '10', 'eleven': '11', 'twelve': '12', 'thirteen': '13', 'fourteen': '14', 'fifteen': '15', 'sixteen': '16', 'seventeen': '17', 'eighteen': '18', 'nineteen': '19', 'twenty': '20', 'thirty': '30', 'forty': '40', 'fifty': '50', 'sixty': '60', 'seventy': '70', 'eighty': '80', 'ninety': '90', 'hundred': '100',
+     '0': 'zero', '1': 'one', '2': 'two', '3': 'three', '4': 'four', '5': 'five', '6': 'six', '7': 'seven', '8': 'eight', '9': 'nine', '10': 'ten', '11': 'eleven', '12': 'twelve', '13': 'thirteen', '14': 'fourteen', '15': 'fifteen', '16': 'sixteen', '17': 'seventeen', '18': 'eighteen', '19': 'nineteen', '20': 'twenty', '30': 'thirty', '40': 'forty', '50': 'fifty', '60': 'sixty', '70': 'seventy', '80': 'eighty', '90': 'ninety', '100': 'hundred',
+ }
+
+
+ logger = logging.getLogger()
+
+
+ class Tokens(object):
+     """A class to represent a list of tokenized text."""
+     TEXT = 0
+     TEXT_WS = 1
+     SPAN = 2
+     POS = 3
+     LEMMA = 4
+     NER = 5
+
+     def __init__(self, data, annotators, opts=None):
+         self.data = data
+         self.annotators = annotators
+         self.opts = opts or {}
+
+     def __len__(self):
+         """The number of tokens."""
+         return len(self.data)
+
+     def slice(self, i=None, j=None):
+         """Return a view of the list of tokens from [i, j)."""
+         new_tokens = copy.copy(self)
+         new_tokens.data = self.data[i: j]
+         return new_tokens
+
+     def untokenize(self):
+         """Returns the original text (with whitespace reinserted)."""
+         return ''.join([t[self.TEXT_WS] for t in self.data]).strip()
+
+     def words(self, uncased=False):
+         """Returns a list of the text of each token
+         Args:
+             uncased: lower cases text
+         """
+         if uncased:
+             return [t[self.TEXT].lower() for t in self.data]
+         else:
+             return [t[self.TEXT] for t in self.data]
+
+     def offsets(self):
+         """Returns a list of [start, end) character offsets of each token."""
+         return [t[self.SPAN] for t in self.data]
+
+     def pos(self):
+         """Returns a list of part-of-speech tags of each token.
+         Returns None if this annotation was not included.
+         """
+         if 'pos' not in self.annotators:
+             return None
+         return [t[self.POS] for t in self.data]
+
+     def lemmas(self):
+         """Returns a list of the lemmatized text of each token.
+         Returns None if this annotation was not included.
+         """
+         if 'lemma' not in self.annotators:
+             return None
+         return [t[self.LEMMA] for t in self.data]
+
+     def entities(self):
+         """Returns a list of named-entity-recognition tags of each token.
+         Returns None if this annotation was not included.
+         """
+         if 'ner' not in self.annotators:
+             return None
+         return [t[self.NER] for t in self.data]
+
+     def ngrams(self, n=1, uncased=False, filter_fn=None, as_strings=True):
+         """Returns a list of all ngrams from length 1 to n.
+         Args:
+             n: upper limit of ngram length
+             uncased: lower cases text
+             filter_fn: user function that takes in an ngram list and returns
+               True or False to keep or not keep the ngram
+             as_strings: return the ngram as a string vs list
+         """
+
+         def _skip(gram):
+             if not filter_fn:
+                 return False
+             return filter_fn(gram)
+
+         words = self.words(uncased)
+         ngrams = [(s, e + 1)
+                   for s in range(len(words))
+                   for e in range(s, min(s + n, len(words)))
+                   if not _skip(words[s:e + 1])]
+
+         # Concatenate into strings
+         if as_strings:
+             ngrams = ['{}'.format(' '.join(words[s:e])) for (s, e) in ngrams]
+
+         return ngrams
+
+     def entity_groups(self):
+         """Group consecutive entity tokens with the same NER tag."""
+         entities = self.entities()
+         if not entities:
+             return None
+         non_ent = self.opts.get('non_ent', 'O')
+         groups = []
+         idx = 0
+         while idx < len(entities):
+             ner_tag = entities[idx]
+             # Check for entity tag
+             if ner_tag != non_ent:
+                 # Chomp the sequence
+                 start = idx
+                 while (idx < len(entities) and entities[idx] == ner_tag):
+                     idx += 1
+                 groups.append((self.slice(start, idx).untokenize(), ner_tag))
+             else:
+                 idx += 1
+         return groups
+
+
+ class Tokenizer(object):
+     """Base tokenizer class.
+     Tokenizers implement tokenize, which should return a Tokens class.
+     """
+
+     def tokenize(self, text):
+         raise NotImplementedError
+
+     def shutdown(self):
+         pass
+
+     def __del__(self):
+         self.shutdown()
+
+
+ class SimpleTokenizer(Tokenizer):
+     ALPHA_NUM = r'[\p{L}\p{N}\p{M}]+'
+     NON_WS = r'[^\p{Z}\p{C}]'
+
+     def __init__(self, **kwargs):
+         """
+         Args:
+             annotators: None or empty set (only tokenizes).
+         """
+         self._regexp = regex.compile(
+             '(%s)|(%s)' % (self.ALPHA_NUM, self.NON_WS),
+             flags=regex.IGNORECASE + regex.UNICODE + regex.MULTILINE
+         )
+         if len(kwargs.get('annotators', {})) > 0:
+             logger.warning('%s only tokenizes! Skipping annotators: %s' %
+                            (type(self).__name__, kwargs.get('annotators')))
+         self.annotators = set()
+
+     def tokenize(self, text):
+         data = []
+         matches = [m for m in self._regexp.finditer(text)]
+         for i in range(len(matches)):
+             # Get text
+             token = matches[i].group()
+
+             # Get whitespace
+             span = matches[i].span()
+             start_ws = span[0]
+             if i + 1 < len(matches):
+                 end_ws = matches[i + 1].span()[0]
+             else:
+                 end_ws = span[1]
+
+             # Format data
+             data.append((
+                 token,
+                 text[start_ws: end_ws],
+                 span,
+             ))
+         return Tokens(data, self.annotators)
+
+
+ tokenizer = SimpleTokenizer()
+
+
+ def normalize_span(text):
+     text = unicodedata.normalize('NFD', text)
+     text = tokenizer.tokenize(text).words(uncased=False)
+     return ' '.join(text), len(text)
+
+
+ def has_answer(answers, text, match_type="string"):
+     # If text is a list of passages, join it into one string first.
+     if isinstance(text, list):
+         text = ' '.join(text)
+
+     text = unicodedata.normalize('NFD', text)
+     if match_type == 'string':
+         text = tokenizer.tokenize(text).words(uncased=True)
+         for single_answer in answers:
+             single_answer = unicodedata.normalize('NFD', single_answer)
+             single_answer = tokenizer.tokenize(single_answer)
+             single_answer = single_answer.words(uncased=True)
+             for i in range(0, len(text) - len(single_answer) + 1):
+                 if single_answer == text[i: i + len(single_answer)]:
+                     return 1
+     return 0
+
+
+ def fake_answer(answers, text, fake_ans, match_type="string"):
+     answers = might_right_answers(answers) + expand_answers(answers)
+     # Normalize the input text
+     text = unicodedata.normalize('NFD', text)
+     if match_type == 'string':
+         otext = tokenizer.tokenize(text).words(uncased=False)
+         oo = ' '.join(otext)
+         text = tokenizer.tokenize(text).words(uncased=True)
+         for single_answer in answers:
+             single_answer = unicodedata.normalize('NFD', single_answer)
+             single_answer = tokenizer.tokenize(single_answer)
+             single_answer = single_answer.words(uncased=True)
+             for i in range(0, len(text) - len(single_answer) + 1):
+                 if single_answer == text[i: i + len(single_answer)]:
+                     ss = ' '.join(otext[i: i + len(single_answer)])
+                     oo = oo.replace(ss, fake_ans)
+     return clean_text(oo)
+
+
+ def clean_text(text):
+     # Regex patterns for normalising spacing around common punctuation marks.
+     # Extra spaces after punctuation:
+     pattern_remove_trailing_spaces = r'([,.!?;:\(\)\[\]\{\}—–—])\s+'
+
+     # Extra spaces before punctuation:
+     pattern_remove_leading_spaces = r'\s+([,.!?;:\(\)\[\]\{\}—–—])'
+
+     # Exactly one space on each side of punctuation:
+     pattern_preserve_single_space = r'(\s*)([,.!?;:\(\)\[\]\{\}—–—])(\s*)'
+
+     # Remove extra spaces after punctuation
+     cleaned_text = re.sub(pattern_remove_trailing_spaces, r'\1 ', text)
+
+     # Remove extra spaces before punctuation
+     cleaned_text = re.sub(pattern_remove_leading_spaces, r' \1', cleaned_text)
+
+     # Keep a single space on each side of punctuation
+     cleaned_text = re.sub(pattern_preserve_single_space, r' \2 ', cleaned_text)
+
+     # Strip leading and trailing whitespace
+     cleaned_text = cleaned_text.strip()
+
+     # Finally collapse runs of consecutive spaces
+     cleaned_text = re.sub(r'\s+', ' ', cleaned_text)
+
+     return cleaned_text
+
+
281
+ def expand_answers(answers: List[str]):
+     copy_answers = answers.copy()
+     res = set(answers)
+     for single_answer in answers:
+         if normalize_answer(single_answer) != "":
+             res.add(normalize_answer(single_answer))
+         original_answer = single_answer
+         single_answer = unicodedata.normalize('NFD', single_answer)
+         single_answer = tokenizer.tokenize(single_answer)
+         single_answer = single_answer.words(uncased=True)
+         for idx, word in enumerate(single_answer):
+             if word in num2alpha.keys():
+                 cnt = 0
+                 for word_before in single_answer[:idx]:
+                     if word in word_before:
+                         cnt += 1
+                 pos = 0
+                 while pos < len(original_answer) - len(word) + 1:
+                     if original_answer[pos:].startswith(word):
+                         if cnt == 0:
+                             res.add(original_answer[:pos] + num2alpha[word] + original_answer[pos+len(word):])
+                             break
+                         pos += len(word)
+                         cnt -= 1
+                     else:
+                         pos += 1
+     for i in res:
+         if i.lower() not in [c.lower() for c in copy_answers] and i != "":
+             copy_answers.append(i)
+     return copy_answers
+
+
+ def might_right_answers(answers):
+     ans = set(answers)
+     res = set()
+     for single_answer in answers:
+         original_answer = single_answer
+         single_answer = unicodedata.normalize('NFD', single_answer)
+         single_answer = tokenizer.tokenize(single_answer)
+         single_answer = single_answer.words(uncased=True)
+         for idx, word in enumerate(single_answer):
+             for spand_len in range(1, len(single_answer)):
+                 cand_fake_ans = " ".join(single_answer[:idx] + single_answer[idx + spand_len:])
+                 if _remove_proj(normalize_answer(cand_fake_ans)).replace(" ", "") != "":
+                     res.add(cand_fake_ans)
+     return list(res - ans)
+
+
+ def _remove_proj(text):
+     text = re.sub(r"\b(in|on|at|by|with|for|of|to)\b", " ", text)
+     return text
+
+
+ def normalize_answer(s):
+     def remove_articles(text):
+         return re.sub(r"\b(a|an|the)\b", " ", text)
+
+     def white_space_fix(text):
+         return " ".join(text.split())
+
+     def remove_punc(text):
+         exclude = set(string.punctuation)
+         return "".join(ch for ch in text if ch not in exclude)
+
+     def lower(text):
+         return text.lower()
+
+     return white_space_fix(remove_articles(remove_punc(lower(s))))
+
+
+ def EM_compute(answer_list, prediction):
+     return max([int(normalize_answer(prediction) == normalize_answer(ground_truth)) for ground_truth in answer_list])
+
+
+ def AC_compute(answer_list, prediction):
+     pred = normalize_answer(prediction)
+     for answer in answer_list:
+         if normalize_answer(answer) in pred:
+             return 1
+     return 0
+
+
+ def F1_compute(answers, pred):
+     def get_tokens(s):
+         if not s:
+             return []
+         return normalize_answer(s).split()
+
+     def compute_f1(a_gold, a_pred):
+         gold_toks = get_tokens(a_gold)
+         pred_toks = get_tokens(a_pred)
+         common = collections.Counter(gold_toks) & collections.Counter(pred_toks)
+         num_same = sum(common.values())
+         if len(gold_toks) == 0 or len(pred_toks) == 0:
+             # If either is no-answer, then F1 is 1 if they agree, 0 otherwise
+             return int(gold_toks == pred_toks)
+         if num_same == 0:
+             return 0
+         precision = 1.0 * num_same / len(pred_toks)
+         recall = 1.0 * num_same / len(gold_toks)
+         f1 = (2 * precision * recall) / (precision + recall)
+         return f1
+
+     return max([compute_f1(x, pred) for x in answers])
+
+
+ def deal_judge(pred):
+     if pred is None:
+         return True
+     if has_answer(["unknown", "no specific answer", "not provide", "cannot answer", "no information provided", "no answer", "not contain", "no definitive answer"], pred):
+         return True
+     return False
+
+
+ def deal_answer(pred, answers):
+     if pred is None:
+         return 0, 0
+     if pred.lower().startswith("answer:"):
+         pred = pred[7:]
+     return EM_compute(answers, pred), F1_compute(answers, pred)
+
+
+ def deal_post(pred):
+     giveup, istrue = True, None
+     if pred is None:
+         return giveup, istrue
+     if has_answer(["unclear", "not clear", "unknown", "partially correct", "partially incorrect", "not correct", "cannot determine", "cannot answer", "not incorrect", "incomplete"], pred):
+         giveup = True
+     elif has_answer(["correct", "true"], pred):
+         giveup, istrue = False, True
+     elif has_answer(["incorrect", "false"], pred):
+         giveup, istrue = False, False
+     else:
+         giveup = True
+     return giveup, istrue
+
+
+ def str2paras(s):
+     if s is None:
+         return None
+     paras = []
+     for text in s.split('\n'):
+         if text.strip() != '':
+             paras.append(": " + text)
+     return paras
+
+
+ # if __name__ == "__main__":
+ #     file_list = os.listdir('d:/pycharmfiles/chat')
+ #     for file in file_list:
+ #         if not file.endswith('post'):
+ #             continue
+ #         print(file)
+ #         indir = os.path.join('d:/pycharmfiles/chat', file)
+ #         outdir = os.path.join('d:/pycharmfiles/llm_re/nq/data', file)
+ #         outstr = ""
+ #         infile = open(indir, 'r', encoding='utf-8')
+ #         for line in tqdm(infile.readlines()):
+ #             d = json.loads(line)
+ #             if 'Prediction' in d.keys():
+ #                 d['Giveup'], d['EM'], d['F1'] = deal_answer(d['Prediction'], d['reference'])
+ #             if 'Post' in d.keys():
+ #                 d['Post_Giveup'], d['Post_True'] = deal_post(d['Post'])
+ #             outstr += json.dumps(d) + '\n'
+ #         infile.close()
+ #         outfile = open(outdir, 'w', encoding='utf-8')
+ #         outfile.write(outstr)
+ #         outfile.close()
+
+ def load_source(file):
446
+ data = []
447
+ f = open(file, 'r', encoding='utf-8')
448
+ for line in f.readlines():
449
+ data.append(json.loads(line))
450
+ f.close()
451
+ return data
452
+
453
+
454
+ def remove_punctuation(s):
455
+ punctuation_pattern = r"^[^\w\s]+|[^\w\s]+$"
456
+ return re.sub(punctuation_pattern, '', s)
457
+
458
+
459
+ def save_file(args, results, add='res'):
460
+ save_dir = os.path.dirname(args.data)
461
+ model_base_file = os.path.basename(args.model) + \
462
+ "." + os.path.basename(args.data)[:-len(".json")]
463
+ if args.splits:
464
+ model_base_file += f".{args.worker}-{args.splits}"
465
+ with open(os.path.join(save_dir, f"{model_base_file}.{add}.json"), 'w') as f:
466
+ json.dump(results, f, indent=4)
467
+
468
+
469
+
470
+ def calculate_statistics(data):
471
+ if len(data) == 0:
472
+ return {
473
+ 'mean': 0,
474
+ 'std': 0,
475
+ 'median': 0,
476
+ 'min': 0,
477
+ 'max': 0,
478
+ '25th_percentile': 0,
479
+ '75th_percentile': 0,
480
+ }
481
+
482
+ return {
483
+ 'mean': np.mean(data),
484
+ 'std': np.std(data),
485
+ 'median': np.median(data),
486
+ 'min': np.min(data),
487
+ 'max': np.max(data),
488
+ '25th_percentile': np.percentile(data, 25),
489
+ '75th_percentile': np.percentile(data, 75),
490
+ }
491
+
+
+ def analyse_len(all_outputs_len, retrieval_outputs_len, no_retrieval_outputs_len, output_dir, output_stats_file):
+     all_outputs_len_stats = calculate_statistics(all_outputs_len)
+     retrieval_outputs_len_stats = calculate_statistics(retrieval_outputs_len)
+     no_retrieval_outputs_len_stats = calculate_statistics(no_retrieval_outputs_len)
+
+     # # Print the statistics
+     # print("All outputs length statistics:", all_outputs_len_stats)
+     # print("Retrieval outputs length statistics:", retrieval_outputs_len_stats)
+     # print("No retrieval outputs length statistics:", no_retrieval_outputs_len_stats)
+
+     with open(output_stats_file, "a") as f:  # open in "a" mode to append
+         f.write("All outputs length statistics:\n")
+         for key, value in all_outputs_len_stats.items():
+             f.write(f"{key}: {value}\n")
+         f.write("\n")
+
+         f.write("Retrieval outputs length statistics:\n")
+         for key, value in retrieval_outputs_len_stats.items():
+             f.write(f"{key}: {value}\n")
+         f.write("\n")
+
+         f.write("No retrieval outputs length statistics:\n")
+         for key, value in no_retrieval_outputs_len_stats.items():
+             f.write(f"{key}: {value}\n")
+         f.write("\n")
+     # # Create the directory for saving results
+     # if not os.path.exists(output_dir):
+     #     os.makedirs(output_dir)
+
+     # Plot histograms and save the figure
+     plt.figure(figsize=(12, 8))
+
+     # Histogram of all output lengths
+     plt.subplot(2, 2, 1)
+     sns.histplot(all_outputs_len, kde=True, bins=30, color='blue', label='All Outputs', stat='density')
+     plt.title('Distribution of All Outputs Length')
+     plt.xlabel('Length')
+     plt.ylabel('Density')
+     # plt.savefig(os.path.join(output_dir, 'all_outputs_length_distribution.png'))
+
+     # Histogram of output lengths for samples with retrieval
+     plt.subplot(2, 2, 2)
+     sns.histplot(retrieval_outputs_len, kde=True, bins=30, color='green', label='Retrieval Outputs', stat='density')
+     plt.title('Distribution of Retrieval Outputs Length')
+     plt.xlabel('Length')
+     plt.ylabel('Density')
+     # plt.savefig(os.path.join(output_dir, 'retrieval_outputs_length_distribution.png'))
+
+     # Histogram of output lengths for samples without retrieval
+     plt.subplot(2, 2, 3)
+     sns.histplot(no_retrieval_outputs_len, kde=True, bins=30, color='red', label='No Retrieval Outputs', stat='density')
+     plt.title('Distribution of No Retrieval Outputs Length')
+     plt.xlabel('Length')
+     plt.ylabel('Density')
+     # plt.savefig(os.path.join(output_dir, 'no_retrieval_outputs_length_distribution.png'))
+
+     # Overall output length distribution
+     plt.subplot(2, 2, 4)
+     sns.histplot(all_outputs_len, kde=True, bins=30, color='blue', label='All Outputs', stat='density', alpha=0.5)
+     sns.histplot(retrieval_outputs_len, kde=True, bins=30, color='green', label='Retrieval Outputs', stat='density', alpha=0.5)
+     sns.histplot(no_retrieval_outputs_len, kde=True, bins=30, color='red', label='No Retrieval Outputs', stat='density', alpha=0.5)
+     plt.title('Overall Distribution of Outputs Length')
+     plt.xlabel('Length')
+     plt.ylabel('Density')
+     plt.legend()
+     # plt.savefig(os.path.join(output_dir, 'overall_output_length_distribution.png'))
+
+     # Save the combined figure
+     plt.tight_layout()
+     plt.savefig(os.path.join(output_dir, 'combined_output_length_distribution.png'))
+
+     plt.show()
+
+ def has_run_retrieve(sample):
+     return bool(sample["search_count"])
+
+ def cal_has_answer(sample):
+     reason_has, search_has, analyses_has = 0, 0, 0
+     for info in sample["all_info"]:
+         for k, v in info.items():
+             if "reason" in k:
+                 reason_has = max(reason_has, has_answer(sample['answer'], v))
+             elif "search" in k:
+                 search_has = max(search_has, has_answer(sample['answer'], v))
+             elif "analyses" in k:
+                 analyses_has = max(analyses_has, has_answer(sample['answer'], v))
+     return {'reason': reason_has, 'search': search_has, 'analyse': analyses_has}
+
+ def extract_answer(sample):
+     output = sample.get('output', '')
+     match = re.search(r'\\boxed\{(.*?)\}', output)
+     if match:
+         return match.group(1)
+     return output.rsplit('\n', 1)[-1]
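`extract_answer` prefers a `\boxed{...}` payload and otherwise falls back to the last output line. A self-contained sketch of that behaviour (`extract_boxed` is an illustrative name, not part of the repo):

```python
import re

def extract_boxed(output):
    # Mirrors extract_answer above: take the first \boxed{...} payload,
    # otherwise fall back to the final line of the output.
    match = re.search(r'\\boxed\{(.*?)\}', output)
    if match:
        return match.group(1)
    return output.rsplit('\n', 1)[-1]

boxed = extract_boxed("So the answer is \\boxed{Paris}.")
fallback = extract_boxed("step one\nfinal answer")
```

Note the non-greedy `(.*?)` stops at the first closing brace, so nested braces inside the box would be truncated.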
+
+ def cal_metrics(sample):
+     res = {}
+     pred = extract_answer(sample)
+     for m, func in {
+         'em': EM_compute,
+         'ac': AC_compute,
+         'f1': F1_compute,
+     }.items():
+         res[m] = func(sample['answer'], pred)
+     res.update(cal_has_answer(sample))
+     res['search_count'] = sample['search_count']
+     return res
+
+ def add_eval(model_path, data_path):
+     output_dir = os.path.dirname(data_path)
+     output_stats_file = os.path.join(output_dir, "output_stats.txt")
+     # model = AutoModelForCausalLM.from_pretrained(model_path).to(torch.bfloat16).to("cuda")
+     tokenizer = AutoTokenizer.from_pretrained(model_path)
+
+     with open(data_path, encoding="utf-8") as f:
+         results = json.load(f)
+
+     # Initialize accumulators
+     total_metrics = {}
+     retrieval_true_metrics = {}
+     retrieval_false_metrics = {}
+     count_total = 0
+     count_retrieval_true = 0
+     count_retrieval_false = 0
+
+     # Track output lengths for averaging
+     all_outputs_len = []
+     retrieval_outputs_len = []
+     no_retrieval_outputs_len = []
+
+     # Iterate over samples and compute metrics
+     for sample in results:
+         sample.update(sample["item"])
+         metrics = cal_metrics(sample)
+
+         output_ids = tokenizer(sample["output"], add_special_tokens=False)["input_ids"]
+         all_outputs_len.append(len(output_ids))
+
+         # Accumulate overall metrics
+         for key, value in metrics.items():
+             total_metrics[key] = total_metrics.get(key, 0) + value
+
+         # Accumulate separately depending on whether retrieval was run
+         if has_run_retrieve(sample):
+             retrieval_outputs_len.append(len(output_ids))
+             for key, value in metrics.items():
+                 retrieval_true_metrics[key] = retrieval_true_metrics.get(key, 0) + value
+             count_retrieval_true += 1
+         else:
+             no_retrieval_outputs_len.append(len(output_ids))
+             for key, value in metrics.items():
+                 retrieval_false_metrics[key] = retrieval_false_metrics.get(key, 0) + value
+             count_retrieval_false += 1
+
+         count_total += 1
+
+     # Compute means
+     mean_metrics = {key: value / count_total for key, value in total_metrics.items()}
+     mean_retrieval_true_metrics = {key: value / count_retrieval_true for key, value in retrieval_true_metrics.items()}
+     mean_retrieval_false_metrics = {key: value / count_retrieval_false for key, value in retrieval_false_metrics.items()}
+
+     mean_all_output_len = sum(all_outputs_len) / len(all_outputs_len) if len(all_outputs_len) != 0 else 0
+     mean_retrieval_outputs_len = sum(retrieval_outputs_len) / len(retrieval_outputs_len) if len(retrieval_outputs_len) != 0 else 0
+     mean_no_retrieval_outputs_len = sum(no_retrieval_outputs_len) / len(no_retrieval_outputs_len) if len(no_retrieval_outputs_len) != 0 else 0
+
+     analyse_len(all_outputs_len, retrieval_outputs_len, no_retrieval_outputs_len, output_dir, output_stats_file)
+
+     # print(count_retrieval_false/count_total)
+     # print(count_retrieval_true/count_total)
+
+     # # Print the results
+     # print(f"model_path: {model_path}")
+     # print(f"data_path: {data_path}")
+     # print("Overall Mean Metrics:")
+     # for key, value in mean_metrics.items():
+     #     print(f"{key}: {value}")
+     # print(f"output_len: {mean_all_output_len}")
+
+     # print("\nMean Metrics for Samples with Retrieval:")
+     # for key, value in mean_retrieval_true_metrics.items():
+     #     print(f"{key}: {value}")
+     # print(f"output_len: {mean_retrieval_outputs_len}")
+
+     # print("\nMean Metrics for Samples without Retrieval:")
+     # for key, value in mean_retrieval_false_metrics.items():
+     #     print(f"{key}: {value}")
+     # print(f"output_len: {mean_no_retrieval_outputs_len}")
+     with open(output_stats_file, "a") as f:
+         f.write(f"\nProportion of samples without retrieval: {count_retrieval_false / count_total}\n")
+         f.write(f"Proportion of samples with retrieval: {count_retrieval_true / count_total}\n")
+
+         f.write(f"model_path: {model_path}\n")
+         f.write(f"data_path: {data_path}\n")
+         f.write("Overall Mean Metrics:\n")
+         for key, value in mean_metrics.items():
+             f.write(f"{key}: {value}\n")
+         f.write(f"output_len: {mean_all_output_len}\n\n")
+
+         f.write("Mean Metrics for Samples with Retrieval:\n")
+         for key, value in mean_retrieval_true_metrics.items():
+             f.write(f"{key}: {value}\n")
+         f.write(f"output_len: {mean_retrieval_outputs_len}\n\n")
+
+         f.write("Mean Metrics for Samples without Retrieval:\n")
+         for key, value in mean_retrieval_false_metrics.items():
+             f.write(f"{key}: {value}\n")
+         f.write(f"output_len: {mean_no_retrieval_outputs_len}\n")
+
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/evaluate.py ADDED
@@ -0,0 +1,452 @@
+ import sys
+ sys.path.append("..")
+
+ import re
+ import json
+ import numpy as np
+ from collections import Counter
+ import string
+ import os, time
+ from collections import defaultdict
+ from lcb_runner.evaluation import codegen_metrics
+ from utils.math_equivalence import is_equiv
+
+
+ def extract_answer(output, mode='gen'):
+     extracted_text = ''
+     if output is None:
+         output = "None"
+     if mode == 'codegen':
+         # Extract the code between ```python and ```
+         pattern = r'```python\s*(.*?)\s*```'
+         matches = re.findall(pattern, output, re.DOTALL | re.IGNORECASE)
+         if matches:
+             extracted_text = matches[-1].strip()  # Take the last match
+     elif mode == 'infogen':  # Extract the reasoning the model generated from webpage content
+         # Extract content after **Final Information** or **Modified Reasoning Steps**
+         # pattern_info = "\n**Final Information**"
+         # pattern_step = "\n**Modified Reasoning Steps**"
+         pattern_info = "**Final Information**"
+         pattern_step = "**Modified Reasoning Steps**"
+         if pattern_info in output:
+             extracted_text = output.split(pattern_info)[-1].replace("\n","").strip("```").strip()
+         elif pattern_step in output:
+             extracted_text = output.split(pattern_step)[-1].strip("```").strip()
+         else:
+             # extracted_text = "No helpful information found."
+             extracted_text = output
+     else:
+         # Existing extraction logic for 'gen' and 'choose' modes
+         pattern = r'\\boxed\{(.*)\}'
+         matches = re.findall(pattern, output)
+         if matches:
+             extracted_text = matches[-1]  # Take the last match
+             if mode in ['choose', 'qa']:
+                 # Handle 'choose' mode
+                 inner_pattern = r'\\text\{(.*)\}'
+                 inner_matches = re.findall(inner_pattern, extracted_text)
+                 if inner_matches:
+                     extracted_text = inner_matches[-1]  # Take the last match
+                 extracted_text = extracted_text.strip("()")
+     return extracted_text
+
+
+ def normalize_answer(text):
+     text = text.lower()
+     text = " ".join(text.strip().split())
+     return text
+
+ def normalize_answer_qa(s):
+     def remove_articles(text):
+         return re.sub(r"\b(a|an|the)\b", " ", text)
+     def white_space_fix(text):
+         return " ".join(text.strip().split())
+     def remove_punc(text):
+         exclude = set(string.punctuation)
+         return "".join(ch for ch in text if ch not in exclude)
+     def lower(text):
+         return text.lower()
+     return white_space_fix(remove_articles(remove_punc(lower(s))))
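The QA normalization pipeline above lowercases, strips punctuation, drops English articles, then collapses whitespace, in that order. A flattened sketch of the same pipeline:

```python
import re
import string

def normalize_answer_qa(s):
    # Same order as the nested helpers above:
    # lower -> remove_punc -> remove_articles -> white_space_fix
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.strip().split())

norm = normalize_answer_qa("The Quick, Brown Fox!")
```

Order matters: articles are removed only after punctuation, so "The," still normalizes away.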
+
+
+ def evaluate_predictions(output, labeled_answer, mode='gen'):
+     final_metric = {"is_valid_answer": False, "acc": 0, "em": 0, "f1": 0, 'math_equal': 0}
+     # is_valid_answer: whether a valid predicted answer exists.
+     # acc: accuracy -- whether the labeled answer appears in the predicted answer.
+     # em: exact match -- whether the predicted answer is identical to the labeled answer.
+     # f1: F1 score.
+     # math_equal: mathematical equivalence (typically used to judge whether numeric values are equal).
+     pred_answer = extract_answer(output, mode=mode)
+     if pred_answer != '':  # the model produced a valid predicted answer
+         final_metric["is_valid_answer"] = True
+
+     if mode == 'qa':
+         normalized_pred_answer = normalize_answer_qa(pred_answer)
+         # print(f"normalized_pred_answer: {normalized_pred_answer}")
+         for answer in labeled_answer:
+             normalized_ground_truth = normalize_answer_qa(answer)
+             # print(f"normalized_ground_truth: {normalized_ground_truth}--")
+             em = int(normalized_pred_answer == normalized_ground_truth)
+             acc = int(normalized_ground_truth in normalized_pred_answer)
+
+             # Split the predicted and labeled answers into tokens and compute their overlap.
+             # Counter is a dict-like object that counts token frequencies; the & operator
+             # takes the intersection of two Counter objects.
+             prediction_tokens = normalized_pred_answer.split()
+             ground_truth_tokens = normalized_ground_truth.split()
+             common = Counter(prediction_tokens) & Counter(ground_truth_tokens)
+             num_same = sum(common.values())
+             if num_same == 0:
+                 continue
+             precision = 1.0 * num_same / len(prediction_tokens)
+             recall = 1.0 * num_same / len(ground_truth_tokens)
+             f1 = (2 * precision * recall) / (precision + recall)
+             for k, v in {"em": em, "acc": acc, "f1": f1}.items():
+                 final_metric[k] = max(v, final_metric[k])
+
+     else:
+         normalized_pred_answer = normalize_answer(pred_answer)
+         normalized_ground_truth = normalize_answer(labeled_answer)
+
+         em = int(normalized_pred_answer == normalized_ground_truth)
+         acc = int(normalized_ground_truth in normalized_pred_answer)
+
+         prediction_tokens = normalized_pred_answer.split()
+         ground_truth_tokens = normalized_ground_truth.split()
+         common = Counter(prediction_tokens) & Counter(ground_truth_tokens)
+         num_same = sum(common.values())
+         if num_same == 0:
+             f1 = 0
+         else:
+             precision = 1.0 * num_same / len(prediction_tokens) if len(prediction_tokens) > 0 else 0
+             recall = 1.0 * num_same / len(ground_truth_tokens) if len(ground_truth_tokens) > 0 else 0
+             if (precision + recall) == 0:
+                 f1 = 0
+             else:
+                 f1 = (2 * precision * recall) / (precision + recall)
+
+         final_metric["em"] = em
+         final_metric["acc"] = acc
+         final_metric["f1"] = f1
+
+     final_metric["math_equal"] = is_equiv(normalized_pred_answer, normalized_ground_truth)
+
+     # print(em, acc, f1, normalized_pred_answer, '|', normalized_ground_truth)
+     return final_metric, pred_answer
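The token-level F1 inside `evaluate_predictions` is the standard SQuAD-style overlap score. A self-contained sketch (`token_f1` is an illustrative name, not part of the repo):

```python
from collections import Counter

def token_f1(pred, gold):
    # Token-level F1 as in evaluate_predictions: overlap counted via
    # the multiset intersection of two Counters.
    pred_tokens = pred.split()
    gold_tokens = gold.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

f1 = token_f1("paris france", "paris")
```

With one shared token out of two predicted and one gold token, precision is 0.5, recall is 1.0, and F1 is 2/3.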
+
+
+
+ def run_evaluation(filtered_data, input_list, output_list, dataset_name, output_dir, total_time, split, apply_backoff=False):
+     if dataset_name == 'livecode':
+         # Prepare samples and generations for codegen_metrics
+         samples_list = []
+         generations_list = []
+
+         # Collect difficulty levels for per-domain metrics
+         difficulties = []
+         per_difficulty_count = {}
+         num_valid_answer = 0
+
+         for item, input_prompt, result in zip(filtered_data, input_list, output_list):
+             if type(result) == str:
+                 item['Output'] = result
+             else:
+                 item['Output'] = result.outputs[0].text
+             difficulty = item.get("difficulty", "Unknown")
+             difficulties.append(difficulty)
+             # Track metrics per domain
+             if difficulty not in per_difficulty_count.keys():
+                 per_difficulty_count[difficulty] = 0
+
+             pred_code = extract_answer(item['Output'], mode='codegen')
+             if pred_code != '':
+                 num_valid_answer += 1
+                 per_difficulty_count[difficulty] += 1
+             # Assuming each item has 'input_output' with 'inputs' and 'outputs'
+             public_test_cases = json.loads(item.get("public_test_cases", "{}"))
+
+             inputs, outputs = [], []
+             for case in public_test_cases:
+                 inputs.append(case["input"])
+                 outputs.append(case["output"])
+
+             sample = {
+                 "input_output": json.dumps({
+                     "inputs": inputs,
+                     "outputs": outputs
+                 }),
+             }
+
+             samples_list.append(sample)
+             generations_list.append([pred_code])
+             item['Pred_Answer'] = pred_code
+             item['Question'] = input_prompt
+
+         # Call codegen_metrics with pass@1
+         metrics, results, final_metadata = codegen_metrics(
+             samples_list,
+             generations_list,
+             k_list=[1],  # Evaluate the top 1 generated result
+             num_process_evaluate=2,  # Parallel evaluation
+             timeout=10,  # Set timeout to 10 seconds
+             debug=False,  # Disable debug mode
+         )
+         # print('samples_list', samples_list)
+         # print('generations_list', generations_list)
+         # print('metrics', metrics)
+
+         # Extract pass@1
+         pass_at_1 = metrics.get('pass@1', 0.0)
+         detail_pass_at_1 = metrics['detail']['pass@1']
+
+         for item, pass1, res, meta in zip(filtered_data, detail_pass_at_1.values(), results.values(), final_metadata):
+             item['Metrics'] = {'pass@1': pass1}
+             item['Results'] = res
+             item['Final_metadata'] = meta
+
+         # Initialize per-difficulty metrics
+         difficulty_metrics = defaultdict(list)
+         for idx, difficulty in enumerate(difficulties):
+             pass1 = detail_pass_at_1[idx]
+             difficulty_metrics[difficulty].append(pass1)
+
+         # Compute overall pass@1
+         overall_metrics = {
+             'pass@1': pass_at_1,  # / num_valid_answer * len(input_list),
+             'num_valid_answer': f'{num_valid_answer} of {len(input_list)}',
+             'query_latency': f'{(total_time / len(input_list) * 1000):.0f} ms',
+         }
+
+         # Compute per-difficulty pass@1
+         per_difficulty_metrics = {}
+         for difficulty, passes in difficulty_metrics.items():
+             avg_pass = np.mean(passes) if len(passes) > 0 else 0.0
+             num_valid_answer = per_difficulty_count[difficulty]
+             per_difficulty_metrics[difficulty] = {
+                 'pass@1': avg_pass,
+                 'num_valid_answer': f'{num_valid_answer} of {len(passes)}'
+             }
+
+         # Save the metrics
+         final_metrics = {
+             'overall': overall_metrics,
+             'per_domain': per_difficulty_metrics
+         }
+
+     else:
+         # Existing evaluation for other datasets
+         avg_em, avg_acc, avg_f1, avg_math = [], [], [], []
+         num_valid_answer = 0
+
+         # If the dataset is GPQA, track metrics per domain
+         domain_metrics = {}
+
+         for item, input_prompt, result in zip(filtered_data, input_list, output_list):
+             if type(result) == str:
+                 item['Output'] = result
+             else:
+                 item['Output'] = result.outputs[0].text
+             if dataset_name in ['gpqa', 'medmcqa']:
+                 labeled_answer = item["Correct Choice"]
+                 # labeled_choice_answer = item["Correct Answer"]
+                 mode = 'choose'
+             elif dataset_name in ['math500', 'aime', 'amc']:
+                 labeled_answer = item["answer"]
+                 mode = 'gen'
+             elif dataset_name in ['dpo_484', 'no_error_data_871', 'eval_old_500', 'gaia_level3', 'gaia', 'hle', 'frames', 'realqa', 'realqa_new', 'syn_en', 'syn_zh', 'musique_syn', 'eval', 'new', 'chinese_simpleqa', 'simpleqa', 'nq', 'triviaqa', 'hotpotqa', 'musique', 'bamboogle', '2wiki']:
+                 labeled_answer = item["answer"]
+                 mode = 'qa'
+             elif dataset_name in ['pubhealth']:
+                 labeled_answer = item["answer"]
+                 mode = 'choose'
+             else:
+                 # raise ValueError(f"Unknown dataset_name: {dataset_name}")
+                 labeled_answer = item["answer"]
+                 mode = 'qa'
+
+             metric, pred_answer = evaluate_predictions(output=item['Output'], labeled_answer=labeled_answer, mode=mode)
+             item['Pred_Answer'] = pred_answer
+             item['Metrics'] = metric
+             item['Question'] = input_prompt
+
+             # Determine the validity of the predicted answer
+             my_method_valid = (pred_answer != '' and not (mode == 'choose' and dataset_name == 'gpqa' and len(pred_answer) > 1))
+
+             avg_em.append(metric['em'])
+             avg_acc.append(metric['acc'])
+             avg_f1.append(metric['f1'])
+             avg_math.append(metric['math_equal'])
+
+             if my_method_valid:
+                 num_valid_answer += 1
+
+             # If the dataset is GPQA, attempt to track metrics per domain
+             if dataset_name == 'gpqa':
+                 domain = item.get("High-level domain", "Unknown")
+                 if domain not in domain_metrics:
+                     domain_metrics[domain] = {'em': [], 'acc': [], 'f1': [], 'math_equal': [], 'num_valid_answer': 0, 'total_num': 0}
+                 domain_metrics[domain]['total_num'] += 1
+                 domain_metrics[domain]['em'].append(metric['em'])
+                 domain_metrics[domain]['acc'].append(metric['acc'])
+                 domain_metrics[domain]['f1'].append(metric['f1'])
+                 domain_metrics[domain]['math_equal'].append(metric['math_equal'])
+                 if my_method_valid:
+                     domain_metrics[domain]['num_valid_answer'] += 1
+
+         t = time.localtime()
+         result_json_name = f'{split}.{t.tm_mon}.{t.tm_mday},{t.tm_hour}:{t.tm_min}.json'
+         metrics_json_name = f'{split}.{t.tm_mon}.{t.tm_mday},{t.tm_hour}:{t.tm_min}.metrics.json'
+
+         # Compute overall metrics
+         overall_results = {
+             'em': np.mean(avg_em) if len(avg_em) > 0 else 0.0,
+             'acc': np.mean(avg_acc) if len(avg_acc) > 0 else 0.0,
+             'f1': np.mean(avg_f1) if len(avg_f1) > 0 else 0.0,
+             'math_equal': np.mean(avg_math) if len(avg_math) > 0 else 0.0,
+             'num_valid_answer': f'{num_valid_answer} of {len(input_list)}',
+             'query_latency': f'{(total_time / len(input_list) * 1000):.0f} ms',
+         }
+
+         # If the dataset is GPQA, output average metrics per domain
+         domain_avg_metrics = {}
+         if dataset_name == 'gpqa':
+             for dm, m in domain_metrics.items():
+                 domain_avg_metrics[dm] = {
+                     'em': np.mean(m['em']) if len(m['em']) > 0 else 0,
+                     'acc': np.mean(m['acc']) if len(m['acc']) > 0 else 0,
+                     'f1': np.mean(m['f1']) if len(m['f1']) > 0 else 0,
+                     'math_equal': np.mean(m['math_equal']) if len(m['math_equal']) > 0 else 0,
+                     'num_valid_answer': f'{m["num_valid_answer"]} of {m["total_num"]}'
+                 }
+
+         # Save overall and per-domain metrics
+         final_metrics = {'overall': overall_results}
+         if dataset_name == 'gpqa':
+             final_metrics['per_domain'] = domain_avg_metrics
+
+     t = time.localtime()
+     result_json_name = f'{split}.{t.tm_mon}.{t.tm_mday},{t.tm_hour}:{t.tm_min}.json'
+     metrics_json_name = f'{split}.{t.tm_mon}.{t.tm_mday},{t.tm_hour}:{t.tm_min}.metrics.json'
+     if apply_backoff:
+         result_json_name = output_dir
+         metrics_json_name = output_dir.replace('.json', '.metrics.backoff.json')
+
+     # Save prediction results and metrics
+     with open(os.path.join(output_dir, result_json_name), mode='w', encoding='utf-8') as json_file:
+         json.dump(filtered_data, json_file, indent=4, ensure_ascii=False)
+
+     with open(os.path.join(output_dir, metrics_json_name), mode='w', encoding='utf-8') as json_file:
+         json.dump(final_metrics, json_file, indent=4, ensure_ascii=False)
+
+
+ def run_evaluation_for_eval(filtered_data, input_list, output_list, dataset_name, output_dir, total_time, split, apply_backoff=False):
+     if dataset_name not in ['dpo_484', 'no_error_data_871', 'eval_old_500', 'gaia_level3', 'gaia', 'hle', 'eval', 'musique_syn', 'realqa_new', 'realqa']:
+         raise ValueError(f"Unsupported dataset: {dataset_name}")
+     else:
+         # Existing evaluation for other datasets
+         avg_em, avg_acc, avg_f1, avg_math = [], [], [], []
+         num_valid_answer = 0
+
+         # If the dataset is eval, track metrics per source
+         source_metrics = {}
+
+         for item, input_prompt, result in zip(filtered_data, input_list, output_list):
+             if type(result) == str:
+                 item['Output'] = result
+             else:
+                 item['Output'] = result.outputs[0].text
+             if dataset_name in ['gpqa', 'medmcqa']:
+                 labeled_answer = item["Correct Choice"]
+                 # labeled_choice_answer = item["Correct Answer"]
+                 mode = 'choose'
+             elif dataset_name in ['math500', 'aime', 'amc']:
+                 labeled_answer = item["answer"]
+                 mode = 'gen'
+             elif dataset_name in ['dpo_484', 'no_error_data_871', 'eval_old_500', 'gaia_level3', 'gaia', 'hle', 'frames', 'realqa', 'realqa_new', 'syn_en', 'syn_zh', 'eval', 'musique_syn', 'new', 'chinese_simpleqa', 'simpleqa', 'nq', 'triviaqa', 'hotpotqa', 'musique', 'bamboogle', '2wiki']:
+                 labeled_answer = item["answer"]
+                 mode = 'qa'
+             elif dataset_name in ['pubhealth']:
+                 labeled_answer = item["answer"]
+                 mode = 'choose'
+             else:
+                 raise ValueError(f"Unknown dataset_name: {dataset_name}")
+
+             metric, pred_answer = evaluate_predictions(output=item['Output'], labeled_answer=labeled_answer, mode=mode)
+             item['Pred_Answer'] = pred_answer
+             item['Metrics'] = metric
+             item['Question'] = input_prompt
+
+             # Determine the validity of the predicted answer
+             my_method_valid = (pred_answer != '' and not (mode == 'choose' and dataset_name == 'gpqa' and len(pred_answer) > 1))
+
+             avg_em.append(metric['em'])
+             avg_acc.append(metric['acc'])
+             avg_f1.append(metric['f1'])
+             avg_math.append(metric['math_equal'])
+
+             if my_method_valid:
+                 num_valid_answer += 1
+
+             # Track metrics per source for the eval-style datasets
+             if dataset_name in ['dpo_484', 'no_error_data_871', 'eval_old_500', 'gaia_level3', 'gaia', 'hle', 'eval', 'musique_syn', 'realqa_new', 'realqa']:
+                 source = item.get("source", "Unknown")
+                 if source not in source_metrics:
+                     source_metrics[source] = {'em': [], 'acc': [], 'f1': [], 'math_equal': [], 'num_valid_answer': 0, 'total_num': 0}
+                 source_metrics[source]['total_num'] += 1
+                 source_metrics[source]['em'].append(metric['em'])
+                 source_metrics[source]['acc'].append(metric['acc'])
+                 source_metrics[source]['f1'].append(metric['f1'])
+                 source_metrics[source]['math_equal'].append(metric['math_equal'])
+                 if my_method_valid:
+                     source_metrics[source]['num_valid_answer'] += 1
+
+         t = time.localtime()
+         result_json_name = f'{split}.{t.tm_mon}.{t.tm_mday},{t.tm_hour}:{t.tm_min}.json'
+         metrics_json_name = f'{split}.{t.tm_mon}.{t.tm_mday},{t.tm_hour}:{t.tm_min}.metrics.json'
+
+         # Compute overall metrics
+         overall_results = {
+             'em': np.mean(avg_em) if len(avg_em) > 0 else 0.0,
+             'acc': np.mean(avg_acc) if len(avg_acc) > 0 else 0.0,
+             'f1': np.mean(avg_f1) if len(avg_f1) > 0 else 0.0,
+             'math_equal': np.mean(avg_math) if len(avg_math) > 0 else 0.0,
+             'num_valid_answer': f'{num_valid_answer} of {len(input_list)}',
+             'query_latency': f'{(total_time / len(input_list) * 1000):.0f} ms',
+         }
+
+         # If the dataset is eval, output average metrics per source
+         source_avg_metrics = {}
+         if dataset_name in ['dpo_484', 'no_error_data_871', 'eval_old_500', 'gaia_level3', 'gaia', 'hle', 'eval', 'musique_syn', 'realqa_new', 'realqa']:
+             for dm, m in source_metrics.items():
+                 source_avg_metrics[dm] = {
+                     'em': np.mean(m['em']) if len(m['em']) > 0 else 0,
+                     'acc': np.mean(m['acc']) if len(m['acc']) > 0 else 0,
+                     'f1': np.mean(m['f1']) if len(m['f1']) > 0 else 0,
+                     'math_equal': np.mean(m['math_equal']) if len(m['math_equal']) > 0 else 0,
+                     'num_valid_answer': f'{m["num_valid_answer"]} of {m["total_num"]}'
+                 }
+
+         # Save overall and per-source metrics
+         final_metrics = {'overall': overall_results}
+         if dataset_name in ['dpo_484', 'no_error_data_871', 'eval_old_500', 'gaia_level3', 'gaia', 'hle', 'eval', 'musique_syn', 'realqa_new', 'realqa']:
+             final_metrics['per_source'] = source_avg_metrics
+
+     t = time.localtime()
+     result_json_name = f'{split}.{t.tm_mon}.{t.tm_mday},{t.tm_hour}:{t.tm_min}.json'
+     metrics_json_name = f'{split}.{t.tm_mon}.{t.tm_mday},{t.tm_hour}:{t.tm_min}.metrics.json'
+     if apply_backoff:
+         result_json_name = output_dir
+         metrics_json_name = output_dir.replace('.json', '.metrics.backoff.json')
+
+     # Save prediction results and metrics
+     with open(os.path.join(output_dir, result_json_name), mode='w', encoding='utf-8') as json_file:
+         json.dump(filtered_data, json_file, indent=4, ensure_ascii=False)
+
+     with open(os.path.join(output_dir, metrics_json_name), mode='w', encoding='utf-8') as json_file:
+         json.dump(final_metrics, json_file, indent=4, ensure_ascii=False)
+
+
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/google_search.py ADDED
@@ -0,0 +1,417 @@
+ import os
+ import json
+ import requests
+ from requests.exceptions import Timeout
+ from bs4 import BeautifulSoup
+ from tqdm import tqdm
+ import time
+ import concurrent
+ from concurrent.futures import ThreadPoolExecutor
+ import pdfplumber
+ from io import BytesIO
+ import re
+ import string
+ from typing import Optional, Tuple
+ from nltk.tokenize import sent_tokenize
+
+ # os.environ['http_proxy'] = 'http://127.0.0.1:7890'
+ # os.environ['https_proxy'] = 'http://127.0.0.1:7890'
+
+
+ # ----------------------- Custom Headers -----------------------
+ headers = {
+     'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
+                   'AppleWebKit/537.36 (KHTML, like Gecko) '
+                   'Chrome/58.0.3029.110 Safari/537.36',
+     'Referer': 'https://www.google.com/',
+     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
+     'Accept-Language': 'en-US,en;q=0.5',
+     'Connection': 'keep-alive',
+     'Upgrade-Insecure-Requests': '1'
+ }
+
+ # Initialize session
+ session = requests.Session()
+ session.headers.update(headers)
+
+
+
+ def remove_punctuation(text: str) -> str:
+     """Remove punctuation from the text."""
+     return text.translate(str.maketrans("", "", string.punctuation))
+
+ def f1_score(true_set: set, pred_set: set) -> float:
+     """Calculate the F1 score between two sets of words."""
+     intersection = len(true_set.intersection(pred_set))
+     if not intersection:
+         return 0.0
+     precision = intersection / float(len(pred_set))
+     recall = intersection / float(len(true_set))
+     return 2 * (precision * recall) / (precision + recall)
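Unlike the token-multiset F1 in the evaluation code, this `f1_score` works on word *sets*, so repeated words count once. A quick check of that behaviour:

```python
def f1_score(true_set, pred_set):
    # Set-overlap F1, matching the definition above: precision over the
    # predicted set, recall over the true set.
    intersection = len(true_set & pred_set)
    if not intersection:
        return 0.0
    precision = intersection / len(pred_set)
    recall = intersection / len(true_set)
    return 2 * (precision * recall) / (precision + recall)

score = f1_score({"alan", "turing", "born"}, {"turing", "born", "london"})
```

Two of three words overlap on each side, so precision and recall are both 2/3 and so is the F1.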
+
+ def extract_snippet_with_context(full_text: str, snippet: str, context_chars: int = 2500) -> Tuple[bool, str]:
+     """
+     Extract the sentence that best matches the snippet and its context from the full text.
+
+     Args:
+         full_text (str): The full text extracted from the webpage.
+         snippet (str): The snippet to match.
+         context_chars (int): Number of characters to include before and after the snippet.
+
+     Returns:
+         Tuple[bool, str]: The first element indicates whether extraction was successful, the second element is the extracted context.
+
+     This function finds the sentence in a long text that best matches the given snippet
+     and returns the text surrounding that sentence. Its core steps are text preprocessing,
+     sentence matching, F1 scoring, and context truncation.
+     """
+     try:
+         full_text = full_text[:50000]
+
+         snippet = snippet.lower()
+         snippet = remove_punctuation(snippet)
+         snippet_words = set(snippet.split())
+
+         best_sentence = None
+         best_f1 = 0.2
+
+         # sentences = re.split(r'(?<=[.!?]) +', full_text)  # Split sentences using regex, supporting ., !, ? endings
+         sentences = sent_tokenize(full_text)  # Split sentences using nltk's sent_tokenize
+
+         for sentence in sentences:
+             key_sentence = sentence.lower()
+             key_sentence = remove_punctuation(key_sentence)
+             sentence_words = set(key_sentence.split())
+             f1 = f1_score(snippet_words, sentence_words)
+             if f1 > best_f1:
+                 best_f1 = f1
+                 best_sentence = sentence
+
+         if best_sentence:
+             para_start = full_text.find(best_sentence)
+             para_end = para_start + len(best_sentence)
+             start_index = max(0, para_start - context_chars)
+             end_index = min(len(full_text), para_end + context_chars)
+             context = full_text[start_index:end_index]
+             return True, context
+         else:
+             # If no matching sentence is found, return the first context_chars*2 characters of the full text
+             return False, full_text[:context_chars * 2]
+     except Exception as e:
+         return False, f"Failed to extract snippet context due to {str(e)}"
+
100
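A simplified, self-contained sketch of the sentence-matching logic above, using a regex sentence splitter instead of nltk's `sent_tokenize` so it runs without downloads (the example text and snippet are hypothetical):

```python
import re
import string

def _norm(text: str) -> str:
    # lowercase and strip punctuation before tokenizing
    return text.lower().translate(str.maketrans("", "", string.punctuation))

def best_matching_sentence(full_text: str, snippet: str, threshold: float = 0.2):
    # return the sentence whose word-set F1 against the snippet exceeds the threshold
    snippet_words = set(_norm(snippet).split())
    best, best_f1 = None, threshold
    for sentence in re.split(r'(?<=[.!?])\s+', full_text):
        words = set(_norm(sentence).split())
        inter = len(snippet_words & words)
        if not inter:
            continue
        f1 = 2 * inter / (len(words) + len(snippet_words))
        if f1 > best_f1:
            best, best_f1 = sentence, f1
    return best

text = "Intro sentence. Dimethyl fumarate is the methyl ester of fumaric acid. Unrelated trivia."
print(best_matching_sentence(text, "methyl ester of fumaric acid"))
# → "Dimethyl fumarate is the methyl ester of fumaric acid."
```

The full function additionally truncates the input to 50k characters and pads the winning sentence with `context_chars` on each side.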
+ def extract_text_from_url(url, use_jina=False, jina_api_key=None, snippet: Optional[str] = None):
101
+ """
102
+ Extract text from a URL. If a snippet is provided, extract the context related to it.
103
+
104
+ Args:
105
+ url (str): URL of a webpage or PDF.
106
+        use_jina (bool): Whether to use Jina for extraction.
+        jina_api_key (Optional[str]): API key for the Jina reader service.
107
+ snippet (Optional[str]): The snippet to search for.
108
+
109
+ Returns:
110
+ str: Extracted text or context.
111
+ """
112
+ try:
113
+ # print(f"extract_text_from_url use_jina: {use_jina}\n")
114
+ if use_jina:
115
+ jina_headers = {
116
+ 'Authorization': f'Bearer {jina_api_key}',
117
+ 'X-Return-Format': 'markdown',
118
+ # 'X-With-Links-Summary': 'true'
119
+ }
120
+ response = requests.get(f'https://r.jina.ai/{url}', headers=jina_headers).text
121
+ # Remove URLs
122
+ pattern = r"\(https?:.*?\)|\[https?:.*?\]"
123
+ text = re.sub(pattern, "", response).replace('---','-').replace('===','=').replace(' ',' ').replace(' ',' ')
124
+ print("use jina to extract text successfully")
125
+ else:
126
+ # print(f"don't use jina to extract text")
127
+ response = session.get(url, timeout=20) # Set timeout to 20 seconds
128
+ response.raise_for_status() # Raise HTTPError if the request failed
129
+ # Determine the content type
130
+ content_type = response.headers.get('Content-Type', '')
131
+ if 'pdf' in content_type:
132
+ # If it's a PDF file, extract PDF text
133
+ print("Extracting text from PDF...")
134
+ return extract_pdf_text(url)
135
+ # Try using lxml parser, fallback to html.parser if unavailable
136
+ try:
137
+ # print("use lxml parser to extract text")
138
+ soup = BeautifulSoup(response.text, 'lxml')
139
+ except Exception:
140
+ print("lxml parser not found or failed, falling back to html.parser")
141
+ soup = BeautifulSoup(response.text, 'html.parser')
142
+ text = soup.get_text(separator=' ', strip=True)
143
+
144
+ if snippet:
145
+ success, context = extract_snippet_with_context(text, snippet)
146
+ if success:
147
+ print("use extract_snippet_with_context to extract text successfully")
148
+ return context
149
+ else:
150
+ print("use extract_snippet_with_context to extract text failed")
151
+ return text
152
+ else:
153
+ # print("no snippet provided")
154
+ # If no snippet is provided, return directly
155
+ return text[:8000]
156
+ # return text[:10000]
157
+ except requests.exceptions.HTTPError as http_err:
158
+ return f"HTTP error occurred: {http_err}"
159
+ except requests.exceptions.ConnectionError:
160
+ return "Error: Connection error occurred"
161
+ except requests.exceptions.Timeout:
162
+ return "Error: Request timed out after 20 seconds"
163
+ except Exception as e:
164
+ return f"Unexpected error: {str(e)}"
165
+
166
+ def fetch_page_content(urls, max_workers=24, use_jina=False, jina_api_key=None, snippets: Optional[dict] = None):
167
+ """
168
+ Concurrently fetch content from multiple URLs.
169
+
170
+ Args:
171
+ urls (list): List of URLs to scrape.
172
+ max_workers (int): Maximum number of concurrent threads.
173
+ use_jina (bool): Whether to use Jina for extraction.
174
+ snippets (Optional[dict]): A dictionary mapping URLs to their respective snippets.
175
+
176
+ Returns:
177
+ dict: A dictionary mapping URLs to the extracted content or context.
178
+    fetch_page_content extracts content from multiple URLs concurrently, using a ThreadPoolExecutor to run the extractions in parallel.
179
+ """
180
+ results = {}
181
+ print(f"max_workers: {max_workers}")
183
+ with ThreadPoolExecutor(max_workers=max_workers) as executor:
184
+ # Use tqdm to display a progress bar
185
+ futures = {
186
+            executor.submit(extract_text_from_url, url, use_jina, jina_api_key, snippets.get(url) if snippets else None): url
187
+ for url in urls
188
+ }
189
+ for future in tqdm(concurrent.futures.as_completed(futures), desc="Fetching URLs", total=len(urls)):
190
+ url = futures[future]
191
+            try:
+                results[url] = future.result()  # re-raises any exception from the worker thread
+            except Exception as exc:
+                results[url] = f"Error fetching {url}: {exc}"
206
+ return results
207
+
208
+
209
+ proxies = {
210
+ "http": "http://127.0.0.1:7890",
211
+ "https": "http://127.0.0.1:7890"
212
+ }
213
+
214
+
215
+ def google_web_search(query, subscription_key, endpoint, market='en-US', language='en', exclude_urls=None, timeout=2000):
216
+ """
217
+    Perform a search using the Google Serper API with a set timeout.
218
+
219
+ Args:
220
+ query (str): Search query.
221
+        subscription_key (str): API key for the Serper search service.
222
+ endpoint (str): Endpoint for the Bing Search API.
223
+ market (str): Market, e.g., "en-US" or "zh-CN".
224
+ language (str): Language of the results, e.g., "en".
225
+ timeout (int or float or tuple): Request timeout in seconds.
226
+ Can be a float representing the total timeout,
227
+ or a tuple (connect timeout, read timeout).
228
+
229
+ Returns:
230
+ dict: JSON response of the search results. Returns None or raises an exception if the request times out.
231
+    The function sends the query to the search endpoint and returns the results as JSON.
233
+    If the request times out or another error occurs, it retries with a delay; after 20 failed attempts the query is skipped.
233
+ """
234
+
235
+ if exclude_urls:
236
+
237
+ for site in exclude_urls:
238
+ query += f" -site:{site}"
239
+        print(f"query: {query}, exclude_urls: {exclude_urls}")
240
+ # query = query + " site:en.wikipedia.org"
241
+ # print(f"query: {query}")
242
+ payload = json.dumps({
243
+        "q": query,               # search query
244
+        "num": 11,
245
+        "mkt": market,            # market, e.g. "en-US"
246
+        "setLang": language,      # result language
247
+        "textDecorations": True,  # enable text decorations
248
+        "textFormat": "HTML"      # text format of the results
249
+ })
250
+ print(f"query: {query}")
251
+
252
+ headers = {
253
+ 'X-API-KEY': subscription_key,
254
+ 'Content-Type': 'application/json'
255
+ }
256
+ error_cnt = 0
257
+ while True:
258
+ if error_cnt == 20:
259
+            print(f"query: {query} has tried {error_cnt} times without success, just skip it.")
260
+ break
261
+ try:
262
+            # Send the POST request
263
+ response = requests.request("POST", endpoint, headers=headers, data=payload, proxies=proxies, timeout=timeout)
264
+ # response = requests.request("POST", endpoint, headers=headers, data=payload, timeout=timeout)
265
+            response.raise_for_status()  # raises requests.exceptions.HTTPError for 4xx/5xx status codes
266
+            search_results = response.json()
267
+ return search_results
268
+ except Timeout:
269
+ error_cnt += 1
270
+ print(f"error_cnt: {error_cnt}, Bing Web Search request timed out ({timeout} seconds) for query: {query}")
271
+ time.sleep(5)
272
+ # return {} # Or you can choose to raise an exception
273
+ except requests.exceptions.RequestException as e:
274
+ error_cnt += 1
275
+ print(f"error_cnt: {error_cnt}, Error occurred during Bing Web Search request: {e}, payload: {payload}")
276
+ time.sleep(5)
277
+ # return {}
278
+
279
+
280
+ def extract_pdf_text(url):
281
+ """
282
+ Extract text from a PDF.
283
+
284
+ Args:
285
+ url (str): URL of the PDF file.
286
+
287
+ Returns:
288
+ str: Extracted text content or error message.
289
+ """
290
+ try:
291
+ response = session.get(url, timeout=20) # Set timeout to 20 seconds
292
+ if response.status_code != 200:
293
+ return f"Error: Unable to retrieve the PDF (status code {response.status_code})"
294
+
295
+ # Open the PDF file using pdfplumber
296
+ with pdfplumber.open(BytesIO(response.content)) as pdf:
297
+ full_text = ""
298
+ for page in pdf.pages:
299
+ text = page.extract_text()
300
+ if text:
301
+ full_text += text
302
+
303
+ # Limit the text length
304
+ cleaned_text = ' '.join(full_text.split()[:600])
305
+ return cleaned_text
306
+ except requests.exceptions.Timeout:
307
+ return "Error: Request timed out after 20 seconds"
308
+ except Exception as e:
309
+ return f"Error: {str(e)}"
310
+
311
+
339
+ def extract_relevant_info(search_results):
340
+ """
341
+ Extract relevant information from Bing search results.
342
+
343
+ Args:
344
+ search_results (dict): JSON response from the Bing Web Search API.
345
+
346
+ Returns:
347
+ list: A list of dictionaries containing the extracted information.
348
+ """
349
+ useful_info = []
350
+
351
+    if search_results is None:
352
+ return useful_info
353
+
354
+    if 'organic' in search_results:  # 'organic' is a list containing each individual search result
355
+ for id, result in enumerate(search_results['organic']):
356
+ info = {
357
+                'id': id + 1,  # number results from 1 for easier downstream use
358
+                'title': result.get('title', ''),  # result title
359
+                'url': result.get('link', ''),  # result URL
360
+                'site_name': result.get('siteName', ''),  # site name
361
+                'date': result.get('datePublished', '').split('T')[0],  # publication date (date part only)
362
+                'snippet': result.get('snippet', ''),  # short snippet/abstract; may contain HTML tags to strip later
363
+ # Add context content to the information
364
+ 'context': '' # Reserved field to be filled later
365
+ }
366
+ useful_info.append(info)
367
+ else:
368
+ print("No organic results found.")
369
+ print(f"len of useful_info: {len(useful_info)}")
370
+ return useful_info
371
+
372
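The mapping above can be sketched in self-contained form. The sample response below is a hypothetical illustration of the Serper `organic` result shape, and the sketch drops the `site_name`/`date` fields for brevity:

```python
def extract_relevant_info(search_results):
    # map each Serper 'organic' entry to a normalized dict
    useful_info = []
    if search_results is None:
        return useful_info
    for idx, result in enumerate(search_results.get('organic', [])):
        useful_info.append({
            'id': idx + 1,                        # 1-based result id
            'title': result.get('title', ''),     # result title
            'url': result.get('link', ''),        # result URL
            'snippet': result.get('snippet', ''), # short description
            'context': '',                        # reserved, filled later
        })
    return useful_info

sample = {'organic': [{'title': 'Example', 'link': 'https://example.com',
                       'snippet': 'An example page.'}]}
print(extract_relevant_info(sample))
```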
+
373
+ # ------------------------------------------------------------
374
+
375
+ if __name__ == "__main__":
376
+ # Example usage
377
+ # Define the query to search
378
+ # query = "Structure of dimethyl fumarate"
379
+
380
+ # # Subscription key and endpoint for Bing Search API
381
+ # BING_SUBSCRIPTION_KEY = "cb0d28279a826d7e5cf22d71f683c77ffd4ba27d"
382
+ # if not BING_SUBSCRIPTION_KEY:
383
+ # raise ValueError("Please set the BING_SEARCH_V7_SUBSCRIPTION_KEY environment variable.")
384
+
385
+ # bing_endpoint = "https://google.serper.dev/search"
386
+
387
+ # # Perform the search
388
+ # print("Performing Bing Web Search...")
389
+ # search_results = bing_web_search(query, BING_SUBSCRIPTION_KEY, bing_endpoint)
390
+ # print(search_results)
391
+
392
+
393
+
394
+    result = google_web_search("when does season 14 of grey's anatomy come out", 'cb0d28279a826d7e5cf22d71f683c77ffd4ba27d', 'https://google.serper.dev/search')
395
+
396
+ print(result)
397
+ # print("Extracting relevant information from search results...")
398
+ # extracted_info = extract_relevant_info(search_results)
399
+
400
+ # print("Fetching and extracting context for each snippet...")
401
+ # for info in tqdm(extracted_info, desc="Processing Snippets"):
402
+ # full_text = extract_text_from_url(info['url'], use_jina=False, jina_api_key="jina_04d684ee4cc54d2ebe7c43bb7ad4aff0qlkdZGwm14NFBtm5BDkgK9KNf6vQ", snippet=info["snippet"]) # Get full webpage text
403
+ # if full_text and not full_text.startswith("Error"):
404
+ # success, context = extract_snippet_with_context(full_text, info['snippet'])
405
+ # if success:
406
+ # info['context'] = context
407
+ # print("-------------------")
408
+ # print(f"Snippet: {info['snippet']}\nContext: {context}")
409
+
410
+ # else:
411
+ # info['context'] = f"Could not extract context. Returning first 8000 chars: {full_text[:8000]}"
412
+ # else:
413
+ # info['context'] = f"Failed to fetch full text: {full_text}"
414
+
415
+ # print("Your Search Query:", query)
416
+ # print("Final extracted information with context:")
417
+ # print(json.dumps(extracted_info, indent=2, ensure_ascii=False))
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/inference.py ADDED
@@ -0,0 +1,759 @@
1
+ # run_search_o1.py
2
+ import os
3
+ import json
4
+ import time
5
+ import re
6
+ from tqdm import tqdm
7
+ import numpy as np
8
+ import torch
9
+ import string
10
+ from typing import Optional, Tuple, List, Dict
11
+ import argparse
12
+ from functools import partial
13
+
14
+ import multiprocessing
15
+ from transformers import AutoTokenizer
16
+ from vllm import LLM, SamplingParams
17
+
18
+ from google_search import (
19
+ google_web_search,
20
+ extract_relevant_info,
21
+ fetch_page_content,
22
+ extract_snippet_with_context
23
+ )
24
+ from evaluate import (
25
+ run_evaluation,
26
+ run_evaluation_for_eval,
27
+ extract_answer
28
+ )
29
+ from prompts import (
30
+ get_multiqa_search_o1_instruction,
31
+ get_task_instruction_openqa,
32
+ get_task_instruction_math,
33
+ get_webpage_to_reasonchain_instruction
34
+ )
35
+
36
+ from openai import OpenAI
38
+
39
+ from add_eval import add_eval
40
+
41
+
42
+
43
+ # Define special tokens
44
+ BEGIN_SEARCH_QUERY = "<|begin_search_query|>"
45
+ END_SEARCH_QUERY = "<|end_search_query|>"
46
+ BEGIN_SEARCH_RESULT = "<|begin_search_result|>"
47
+ END_SEARCH_RESULT = "<|end_search_result|>"
48
+
49
+ def parse_args():
50
+ parser = argparse.ArgumentParser(description="Run Search O1 for various datasets and models.")
51
+
52
+
53
+ parser.add_argument(
54
+ '--dataset_name',
55
+ type=str,
56
+ required=True,
57
+ help="Name of the dataset to use."
58
+ )
59
+
60
+
61
+
62
+ parser.add_argument(
63
+ '--subset_num',
64
+ type=int,
65
+ default=-1,
66
+ help="Number of examples to process. Defaults to all if not specified."
67
+ )
68
+
69
+ # Search and document retrieval configuration
70
+ parser.add_argument(
71
+ '--max_search_limit',
72
+ type=int,
73
+ default=10,
74
+ help="Maximum number of searches per question."
75
+ )
76
+
77
+ parser.add_argument(
78
+ '--max_turn',
79
+ type=int,
80
+ default=15,
81
+ help="Maximum number of turns."
82
+ )
83
+
84
+ parser.add_argument(
85
+ '--top_k',
86
+ type=int,
87
+ default=10,
88
+ help="Maximum number of search documents to return."
89
+ )
90
+
91
+ parser.add_argument(
92
+ '--max_doc_len',
93
+ type=int,
94
+ default=3000,
95
+ help="Maximum length of each searched document."
96
+ )
97
+
98
+
99
+ # Model configuration
100
+ parser.add_argument(
101
+ '--model_path',
102
+ type=str,
103
+ required=True,
104
+ help="Path to the pre-trained model."
105
+ )
106
+
107
+ # Google API Configuration
108
+ parser.add_argument(
109
+ '--google_subscription_key',
110
+ type=str,
111
+ required=True,
112
+ help="Google Search API subscription key."
113
+ )
114
+
115
+ parser.add_argument(
116
+ '--google_endpoint',
117
+ type=str,
118
+ default="https://google.serper.dev/search",
119
+ help="Google Search API endpoint."
120
+ )
121
+
122
+ parser.add_argument(
123
+ '--cache_dir_base',
124
+ type=str,
125
+ required=True,
126
+ help="cache path."
127
+ )
128
+
129
+ parser.add_argument(
130
+ '--output_dir_base',
131
+ type=str,
132
+ required=True,
133
+ help="output_dir"
134
+ )
135
+
136
+ parser.add_argument(
137
+ '--is_exclude_urls',
138
+ action="store_true",
139
+ help="is_exclude_urls"
140
+ )
141
+
142
+ parser.add_argument(
143
+ '--summary_model_path',
144
+ type=str,
145
+ required=True,
146
+ help="Path to the summary model."
147
+ )
148
+
149
+
150
+ parser.add_argument(
151
+ '--base_url',
152
+ type=str,
153
+ required=True,
154
+ help="Base url of the summary model."
155
+ )
156
+
157
+ return parser.parse_args()
158
+
159
+
160
+
161
+ def webpage_analysis_single(model_path, base_url, prompt) -> str:
162
+ client = OpenAI(
163
+ base_url=base_url,
164
+ api_key="EMPTY"
165
+ )
166
+
167
+
168
+ for i in range(10):
169
+ try:
170
+ completion = client.chat.completions.create(
171
+ model=model_path,
172
+ max_tokens=8192,
173
+ temperature=0.6,
174
+ top_p=0.95,
175
+ messages=[prompt],
176
+ )
177
+ return completion.choices[0].message.content
178
+ except Exception as e:
179
+ print(e)
180
+ time.sleep(1)
181
+ continue
182
+ return "None"
183
+
184
+ def main():
185
+ args = parse_args()
186
+ # Extract arguments
187
+ dataset_name = args.dataset_name
188
+ subset_num = args.subset_num
189
+ MAX_SEARCH_LIMIT = args.max_search_limit
190
+ MAX_TURN = args.max_turn
191
+ top_k = args.top_k
192
+ max_doc_len = args.max_doc_len
193
+ model_path = args.model_path
194
+ summary_model_path = args.summary_model_path
195
+ cache_dir_base = args.cache_dir_base
196
+ output_dir_base = args.output_dir_base
197
+ is_exclude_urls = args.is_exclude_urls
198
+ base_url = args.base_url
199
+ google_subscription_key = args.google_subscription_key
200
+ google_endpoint = args.google_endpoint
201
+
202
+
203
+ # Data paths based on dataset
204
+ data_path = f"./data/eval/{dataset_name}.json"
205
+ print('-----------------------')
206
+ print(f'Using {dataset_name} set.')
207
+ print('-----------------------')
208
+
209
+ # ---------------------- Caching Mechanism ----------------------
210
+ # Define cache directories and file paths
211
+ # cache_dir = './cache'
212
+ model_name = model_path.split('/')[-1].replace('-instruct', '')
213
+ # cache_dir = f'./{cache_dir_base}_{dataset_name}_{model_name}'
214
+ cache_dir = cache_dir_base
215
+ search_cache_path = os.path.join(cache_dir, 'search_cache.json')
216
+ url_cache_path = os.path.join(cache_dir, 'url_cache.json')
217
+
218
+ # Ensure cache directory exists
219
+ os.makedirs(cache_dir, exist_ok=True)
220
+
221
+ # Load existing caches or initialize empty dictionaries
222
+ if os.path.exists(search_cache_path):
223
+ try:
224
+ with open(search_cache_path, 'r', encoding='utf-8') as f:
225
+ search_cache = json.load(f)
226
+ except Exception as e:
227
+ print(f"load search_cache.json error: {e}")
228
+ search_cache = {}
229
+ else:
230
+ search_cache = {}
231
+
232
+ if os.path.exists(url_cache_path):
233
+ try:
234
+ with open(url_cache_path, 'r', encoding='utf-8') as f:
235
+ url_cache = json.load(f)
236
+ except Exception as e:
237
+ print(f"load url_cache.json error: {e}")
238
+ url_cache = {}
239
+ else:
240
+ url_cache = {}
241
+
242
+ # Function to save caches
243
+ def save_caches():
244
+ with open(search_cache_path, 'w', encoding='utf-8') as f:
245
+ json.dump(search_cache, f, ensure_ascii=False, indent=2)
246
+ with open(url_cache_path, 'w', encoding='utf-8') as f:
247
+ json.dump(url_cache, f, ensure_ascii=False, indent=2)
248
+
249
+ # ---------------------- Model Loading ----------------------
250
+ print(f"Loading tokenizer from {model_path}...")
251
+ tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
252
+ if tokenizer.pad_token is None:
253
+ tokenizer.pad_token = tokenizer.eos_token
254
+    tokenizer.padding_side = 'left'  # pad on the left side
255
+ print("Tokenizer loaded successfully.")
256
+
257
+ # Define output directory based on model and dataset
258
+
259
+ output_dir = os.path.join(output_dir_base, dataset_name)
260
+ os.makedirs(output_dir, exist_ok=True)
261
+
262
+ print(f"Loading model from {model_path}...")
263
+ print(f"device_count: {torch.cuda.device_count()}")
264
+
265
+ # Initialize the LLM
266
+ llm = LLM(
267
+ model=model_path,
268
+ tensor_parallel_size=torch.cuda.device_count(),
269
+ gpu_memory_utilization=0.95,
270
+
271
+ )
272
+ print("Model loaded successfully.")
273
+
274
+
275
+ # ---------------------- Data Loading ----------------------
276
+ print(f"Loading data from {data_path}...")
277
+ with open(data_path, 'r', encoding='utf-8') as json_file:
278
+ filtered_data = json.load(json_file)
279
+ print(f"Data loaded successfully. Total examples: {len(filtered_data)}")
280
+
281
+
282
+ # ---------------------- Batch Generation Function ----------------------
283
+ def generate_webpage_to_reasonchain_batch(
284
+ original_questions: List[str],
285
+ prev_reasonings: List[str],
286
+ search_queries: List[str],
287
+ documents: List[str],
288
+ dataset_name: str,
289
+ summary_model_path: str,
290
+ base_url: str,
291
+ batch_output_records: List[Dict], # New parameter to collect outputs
292
+ max_tokens: int = 32768,
293
+ coherent: bool = False,
294
+ ) -> List[str]:
295
+
296
+ user_prompts = [
297
+ get_webpage_to_reasonchain_instruction(r, sq, doc)
298
+ for r, sq, doc in zip(prev_reasonings, search_queries, documents)
299
+ ]
300
+
301
+
302
+ prompts = [{"role": "user", "content": up} for up in user_prompts]
303
+ print("webpage ana prompts[0]")
304
+ print(prompts[0])
305
+
306
+
307
+ webpage_analysis_single_to_map = partial(webpage_analysis_single, summary_model_path, base_url)
308
+ with multiprocessing.Pool(processes=50) as pool:
309
+ raw_outputs = list(tqdm(pool.imap(webpage_analysis_single_to_map, prompts), total=len(prompts), desc="generate webpage summarization"))
310
+
311
+
312
+        # Count the number of failed outputs
313
+ sum_error = 0
314
+ for output in raw_outputs:
315
+ if output is None or output == "None" or output == "":
316
+ sum_error += 1
317
+ print(f"sum_error: {sum_error}, ratios: {sum_error / len(raw_outputs)}")
318
+
319
+ extracted_infos = [extract_answer(raw, mode='infogen') for raw in raw_outputs]
320
+ for i, (p, r, e) in enumerate(zip(prompts, raw_outputs, extracted_infos)):
321
+ batch_output_records.append({
322
+ 'prompt': p,
323
+ 'raw_output': r,
324
+ 'extracted_info': e
325
+ })
326
+
327
+ return extracted_infos
328
+
329
+ # ---------------------- Preparation of Input Prompts ----------------------
330
+
331
+
332
+ input_list = []
333
+ for item in filtered_data:
334
+ question = item['Question']
335
+
336
+ if dataset_name in ['gaia', 'musique', 'bamboogle', '2wiki']:
337
+
338
+ instruction = get_multiqa_search_o1_instruction(MAX_SEARCH_LIMIT)
339
+ user_prompt = get_task_instruction_openqa(question)
340
+
341
+ elif dataset_name in ['aime']:
342
+ instruction = get_multiqa_search_o1_instruction(MAX_SEARCH_LIMIT)
343
+ user_prompt = get_task_instruction_math(question)
344
+
345
+ else:
346
+ instruction = get_multiqa_search_o1_instruction(MAX_SEARCH_LIMIT)
347
+ user_prompt = get_task_instruction_openqa(question)
348
+
349
+
350
+ prompt = [{"role": "user", "content": instruction + user_prompt}]
351
+ prompt = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True)
352
+ input_list.append(prompt)
353
+
354
+ if subset_num != -1:
355
+ input_list = input_list[:subset_num]
356
+ filtered_data = filtered_data[:subset_num]
357
+
358
+ # Initialize active sequences
359
+ active_sequences = [{
360
+ 'item': item,
361
+ 'prompt': prompt,
362
+ 'output': '',
363
+ 'finished': False,
364
+ 'history': [],
365
+ 'search_count': 0,
366
+ 'executed_search_queries': set(),
367
+ 'all_info': [],
368
+ } for item, prompt in zip(filtered_data, input_list)]
369
+
370
+ # ---------------------- Set Max Tokens ----------------------
371
+ if dataset_name in ['aime']:
372
+ max_tokens = 32768
373
+ else:
374
+ max_tokens = 20480
375
+ # ---------------------- Generation Function ----------------------
376
+ def run_generation(sequences: List[Dict], max_tokens: int) -> List:
377
+ prompts = [s['prompt'] for s in sequences]
378
+ sampling_params = SamplingParams(
379
+ max_tokens=max_tokens,
380
+ temperature=0.6,
381
+ top_p=0.95,
382
+ top_k=40,
383
+ stop=[END_SEARCH_QUERY, tokenizer.eos_token],
384
+ include_stop_str_in_output=True,
385
+ )
386
+ output_list = llm.generate(prompts, sampling_params=sampling_params)
387
+ print(f"run_generation completed {len(output_list)}")
388
+ return output_list
389
+
390
+ # Function to extract text between two tags
391
+ def extract_between(text: str, start_tag: str, end_tag: str) -> Optional[str]:
392
+ pattern = re.escape(start_tag) + r"(.*?)" + re.escape(end_tag)
393
+ matches = re.findall(pattern, text, flags=re.DOTALL)
394
+ if matches:
395
+ return matches[-1].strip()
396
+ return None
397
+
398
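The helper above returns only the last tagged span, which matters when a trace contains several search queries. A self-contained sketch with the same special tokens (the example output string is hypothetical):

```python
import re
from typing import Optional

BEGIN_SEARCH_QUERY = "<|begin_search_query|>"
END_SEARCH_QUERY = "<|end_search_query|>"

def extract_between(text: str, start_tag: str, end_tag: str) -> Optional[str]:
    # return the LAST span enclosed by the tag pair, or None if absent
    pattern = re.escape(start_tag) + r"(.*?)" + re.escape(end_tag)
    matches = re.findall(pattern, text, flags=re.DOTALL)
    if matches:
        return matches[-1].strip()
    return None

out = ("reasoning... <|begin_search_query|> first query <|end_search_query|> more text "
       "<|begin_search_query|> grey's anatomy season 14 release <|end_search_query|>")
print(extract_between(out, BEGIN_SEARCH_QUERY, END_SEARCH_QUERY))
# → "grey's anatomy season 14 release"
```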
+ def replace_recent_steps(origin_str, replace_str):
399
+ """
400
+ Replaces specific steps in the original reasoning steps with new steps.
401
+ If a replacement step contains "DELETE THIS STEP", that step is removed.
402
+
403
+ Parameters:
404
+ - origin_str (str): The original reasoning steps.
405
+ - replace_str (str): The steps to replace or delete.
406
+
407
+ Returns:
408
+ - str: The updated reasoning steps after applying replacements.
409
+ """
410
+
411
+ def parse_steps(text):
412
+ """
413
+ Parses the reasoning steps from a given text.
414
+
415
+ Parameters:
416
+ - text (str): The text containing reasoning steps.
417
+
418
+ Returns:
419
+ - dict: A dictionary mapping step numbers to their content.
420
+ """
421
+ step_pattern = re.compile(r"Step\s+(\d+):\s*")
422
+ steps = {}
423
+ current_step_num = None
424
+ current_content = []
425
+
426
+ for line in text.splitlines():
427
+ step_match = step_pattern.match(line)
428
+ if step_match:
429
+ if current_step_num is not None:
430
+ steps[current_step_num] = "\n".join(current_content).strip()
431
+ current_step_num = int(step_match.group(1))
432
+ content = line[step_match.end():].strip()
433
+ current_content = [content] if content else []
434
+ else:
435
+ if current_step_num is not None:
436
+ current_content.append(line)
437
+
438
+ # Save the last step if any
439
+ if current_step_num is not None:
440
+ steps[current_step_num] = "\n".join(current_content).strip()
441
+
442
+ return steps
443
+
444
+ # Parse the original and replacement steps
445
+ origin_steps = parse_steps(origin_str)
446
+ replace_steps = parse_steps(replace_str)
447
+
448
+ # Apply replacements
449
+ for step_num, content in replace_steps.items():
450
+ if "DELETE THIS STEP" in content:
451
+ # Remove the step if it exists
452
+ if step_num in origin_steps:
453
+ del origin_steps[step_num]
454
+ else:
455
+ # Replace or add the step
456
+ origin_steps[step_num] = content
457
+
458
+ # Sort the steps by step number
459
+ sorted_steps = sorted(origin_steps.items())
460
+
461
+ # Reconstruct the reasoning steps as a single string
462
+ new_reasoning_steps = "\n\n".join([f"{content}" for num, content in sorted_steps])
463
+
464
+ return new_reasoning_steps
465
+
466
+ # ---------------------- Initialize Collection Structure ----------------------
467
+ # Initialize a list to collect batch outputs
468
+ batch_output_records = []
469
+
470
+ start_time = time.time()
471
+ turn = 0
472
+
473
+ # Main loop until all sequences are finished or maximum turns reached
474
+ while True:
475
+ # Identify sequences that need generation
476
+ sequences_needing_generation = [seq for seq in active_sequences if not seq['finished']]
477
+
478
+ if sequences_needing_generation:
479
+ turn += 1
480
+ print(f'\n-------------- Turn {turn} --------------')
481
+ print(f"We have {len(sequences_needing_generation)} sequences needing generation...")
482
+ outputs = run_generation(sequences_needing_generation, max_tokens)
483
+ print("Generation completed, processing outputs...")
484
+
485
+ # Initialize batch variables
486
+ batch_relevant_info = []
487
+ batch_original_questions = []
488
+ batch_prev_reasonings = []
489
+ batch_search_queries = []
490
+ batch_documents = []
491
+ batch_sequences = []
492
+
493
+ # Collect URLs to fetch across all sequences
494
+ all_urls_to_fetch = set()
495
+ url_snippets = {}
496
+ url_sequence_map = {} # Map URL to list of sequences needing it
497
+
498
+ start_search_time = time.time()
499
+ # Process each sequence and collect URLs
500
+ for seq, out in zip(sequences_needing_generation, outputs):
501
+ text = out.outputs[0].text
502
+ seq['history'].append(text)
503
+ # Append generated text to prompt and output
504
+ seq['prompt'] += text
505
+ seq['output'] += text
506
+ seq['all_info'].append({f"turn_{turn}_reason": text})
507
+ # Extract search query
508
+ search_query = extract_between(text, BEGIN_SEARCH_QUERY, END_SEARCH_QUERY)
509
+
510
+ # If a search query is present and needs to be executed
511
+ if search_query and seq['output'].rstrip().endswith(END_SEARCH_QUERY):
512
+ if seq['search_count'] < MAX_SEARCH_LIMIT and search_query not in seq['executed_search_queries']:
513
+ # Execute search, use cache if available
514
+ if search_query in search_cache:
515
+ results = search_cache[search_query]
516
+ print(f"Using cached search results for query: \"{search_query}\"")
517
+ else:
518
+ try:
519
+ if is_exclude_urls and "urls" in seq["item"]["metadata"]:
520
+ print(f"is_exclude_urls: {is_exclude_urls}")
521
+ exclude_urls = seq["item"]["metadata"]["urls"]
522
+ else:
523
+ exclude_urls = []
524
+
525
+ print(f"Execute and cache search for query: \"{search_query}\"")
526
+                            results = google_web_search(search_query, google_subscription_key, google_endpoint, market='en-US', language='en', exclude_urls=exclude_urls)  # execute the search
527
+ search_cache[search_query] = results
528
+ print(f"Executed and cached search for query: \"{search_query}\"")
529
+ except Exception as e:
530
+ print(f"Error during search query '{search_query}': {e}")
531
+ search_cache[search_query] = {}
532
+ results = {}
533
+
534
+ # Extract relevant information from Bing search results
535
+ relevant_info = extract_relevant_info(results)[:top_k]
536
+ seq['relevant_info'] = relevant_info
537
+
538
+ # Extract URLs and snippets
539
+ urls_to_fetch = [it['url'] for it in relevant_info]
540
+ snippets = {info['url']: info['snippet'] for info in relevant_info if 'snippet' in info}
541
+
542
+ # Filter URLs that are not cached
543
+ urls_to_fetch_filtered = [u for u in urls_to_fetch if u not in url_cache]
544
+ cached_urls = [u for u in urls_to_fetch if u in url_cache]
545
+
546
+ # Store info for all_urls_to_fetch and url_snippets
547
+ for url in urls_to_fetch_filtered:
548
+ all_urls_to_fetch.add(url)
549
+ url_snippets[url] = snippets.get(url, "")
550
+
551
+ all_reasoning_steps = seq['output']
552
+ all_reasoning_steps = all_reasoning_steps.replace('\n\n', '\n').split("\n")
553
+
554
+ truncated_prev_reasoning = ""
555
+ for i, step in enumerate(all_reasoning_steps):
556
+ truncated_prev_reasoning += f"Step {i + 1}: {step}\n\n"
557
+
558
+ prev_steps = truncated_prev_reasoning.split('\n\n')
559
+ if len(prev_steps) <= 5:
560
+ truncated_prev_reasoning = '\n\n'.join(prev_steps)
561
+ else:
562
+ truncated_prev_reasoning = ''
563
+ for i, step in enumerate(prev_steps):
564
+ if i == 0 or i >= len(prev_steps) - 4 or BEGIN_SEARCH_QUERY in step or BEGIN_SEARCH_RESULT in step:
565
+ truncated_prev_reasoning += step + '\n\n'
566
+ else:
567
+ if truncated_prev_reasoning[-len('\n\n...\n\n'):] != '\n\n...\n\n':
568
+ truncated_prev_reasoning += '...\n\n'
569
+ truncated_prev_reasoning = truncated_prev_reasoning.strip('\n')
570
+
571
+ # Collect parameters for batch processing
572
+ batch_relevant_info.append(relevant_info)
573
+ batch_original_questions.append(seq['item']['Question'])
574
+ batch_prev_reasonings.append(truncated_prev_reasoning)
575
+ batch_search_queries.append(search_query)
576
+ batch_sequences.append(seq)
577
+
578
+ # Update search count and executed queries
579
+ seq['search_count'] += 1
580
+ seq['executed_search_queries'].add(search_query)
581
+ elif seq['search_count'] >= MAX_SEARCH_LIMIT:
582
+ limit_message = f"\n{BEGIN_SEARCH_RESULT}\nThe maximum search limit is exceeded. You are not allowed to search.\n{END_SEARCH_RESULT}\n"
583
+ seq['prompt'] += limit_message
584
+ seq['output'] += limit_message
585
+ seq['history'].append(limit_message)
586
+ seq["all_info"].append({f"turn_{turn}_search_limited": limit_message})
587
+ print(f"Search limit reached for query: \"{search_query}\"")
588
+
589
+ elif search_query in seq['executed_search_queries']:
590
+ limit_message = f"\n{BEGIN_SEARCH_RESULT}\nYou have searched this query. Please refer to previous results.\n{END_SEARCH_RESULT}\n"
591
+ seq['prompt'] += limit_message
592
+ seq['output'] += limit_message
593
+ seq['history'].append(limit_message)
594
+ seq["all_info"].append({f"turn_{turn}_search_limited": limit_message})
595
+ print(f"Repeated search for query: \"{search_query}\"")
596
+
597
+
598
+ else:
599
+ # If no search query needs to be executed, mark the sequence as finished
600
+ seq['finished'] = True
601
+ print("Sequence marked as complete.")
602
+
603
+ print(f"Search collection time taken: {time.time() - start_search_time}")
604
+ print(f"all_urls_to_fetch len: {len(all_urls_to_fetch)}, url_cache len: {len(url_cache)}")
605
+ print(f"all_urls_to_fetch: {all_urls_to_fetch}")
606
+ # Batch fetch all URLs at once to optimize speed
607
+
608
+ if all_urls_to_fetch:
609
+ print(f"Fetching {len(all_urls_to_fetch)} URLs...")
610
+ try:
611
+ fetched_contents = fetch_page_content(
612
+ list(all_urls_to_fetch),
613
+ use_jina=False,
614
+ jina_api_key=None,
615
+ # snippets=url_snippets # Do not pass snippets when updating url_cache directly
616
+ )
617
+ print(f"Fetched {len(fetched_contents)} URLs successfully.")
618
+ except Exception as e:
619
+ print(f"Error during batch URL fetching: {e}")
620
+ fetched_contents = {url: f"Error fetching URL: {e}" for url in all_urls_to_fetch}
621
+ # Update cache with fetched contents
622
+ for url, content in fetched_contents.items():
623
+ url_cache[url] = content
624
+
625
+ # After fetching, prepare formatted documents for batch processing
626
+ for relevant_info in batch_relevant_info:
627
+ formatted_documents = ""
628
+ for i, doc_info in enumerate(relevant_info):
629
+ url = doc_info['url']
630
+ raw_context = url_cache.get(url, "") # get the cached content for this url
631
+ doc_info['snippet'] = doc_info['snippet'].replace('<b>','').replace('</b>','')
632
+ success, filtered_context = extract_snippet_with_context(raw_context, doc_info['snippet'], context_chars=max_doc_len)
633
+ if success:
634
+ print("extract_snippet_with_context")
635
+ context = filtered_context
636
+ else:
637
+ print(f"use raw_context, {len(raw_context)}")
638
+ context = raw_context[:max_doc_len*2]
639
+
640
+ doc_info['context'] = context
641
+ formatted_documents += f"**Web Page {i + 1}:**\n"
642
+ formatted_documents += json.dumps(doc_info, ensure_ascii=False, indent=2) + "\n"
643
+ print(f'formatted_documents: {len(formatted_documents)}')
644
+ batch_documents.append(formatted_documents)
645
+
646
+ # After fetching, prepare for batch processing if there are any
647
+ if batch_sequences:
648
+ print(f"Batch processing {len(batch_sequences)} sequences with generate_webpage_to_reasonchain_batch...")
649
+ webpage_analyses = generate_webpage_to_reasonchain_batch(
650
+ original_questions=batch_original_questions,
651
+ prev_reasonings=batch_prev_reasonings,
652
+ search_queries=batch_search_queries,
653
+ documents=batch_documents,
654
+ dataset_name=dataset_name,
655
+ summary_model_path=summary_model_path,
656
+ base_url=base_url,
657
+ batch_output_records=batch_output_records, # Pass the collection list
658
+ max_tokens=max_tokens,
659
+ )
660
+ print("Batch generation completed, assigning outputs to sequences...")
661
+
662
+ for seq, analysis, doc in zip(batch_sequences, webpage_analyses, batch_documents):
663
+ if isinstance(analysis, str):
664
+ append_text = f"\n\n{BEGIN_SEARCH_RESULT}{analysis}{END_SEARCH_RESULT}\n\n"
665
+ seq['output'] += append_text
666
+ seq['history'].append(append_text)
667
+ seq['all_info'].extend([{f"turn_{turn}_search": doc}, {f"turn_{turn}_webpage_analyses": analysis}])
668
+ else:
669
+ append_text = replace_recent_steps(seq['output'], analysis)
670
+ seq['prompt'] += append_text
671
+ seq['output'] += append_text
672
+ seq['history'].append(append_text)
673
+ seq['all_info'].extend([{f"turn_{turn}_search": doc}, {f"turn_{turn}_webpage_analyses": analysis}])
674
+
675
+ # Check if all sequences are finished
676
+ # Save a snapshot of active_sequences for this turn
677
+ active_sequences_part = [{
678
+ 'item': ele["item"],
679
+ 'prompt': ele['prompt'],
680
+ 'output': ele["output"],
681
+ 'finished': ele["finished"],
682
+ 'history':ele["history"],
683
+ 'search_count': ele["search_count"],
684
+ 'all_info': ele['all_info']
685
+ } for ele in active_sequences]
686
+ with open(os.path.join(output_dir, f"turn_{turn}.json"), 'w', encoding='utf-8') as f:
687
+ json.dump(active_sequences_part, f, ensure_ascii=False, indent=2)
688
+ unfinished = [seq for seq in active_sequences if not seq['finished']]
689
+ if not unfinished:
690
+ break
691
+ else:
692
+ if turn >= MAX_TURN:
693
+ print(f"Maximum number of turns ({MAX_TURN}) reached, stopping.")
694
+ break
695
+
696
+ total_time = time.time() - start_time
697
+ print(f"Total time taken: {total_time} seconds")
698
+
699
+ # ---------------------- Save Batch Output Records to JSON File ----------------------
700
+ # Define output JSON file path
701
+ t = time.localtime()
702
+ batch_output_file = os.path.join(output_dir, f'eval.{t.tm_mon}.{t.tm_mday},{t.tm_hour}:{t.tm_min}.info_extract.json')
703
+
704
+ # Save batch_output_records to JSON file
705
+ with open(batch_output_file, 'w', encoding='utf-8') as f:
706
+ json.dump(batch_output_records, f, ensure_ascii=False, indent=2)
707
+
708
+ print(f"Batch outputs saved to {batch_output_file}")
709
+
710
+ # Prepare output list for evaluation
711
+ output_list = [seq['output'] for seq in active_sequences]
712
+
713
+ # Run evaluation
714
+ if dataset_name in ["gaia"]:
715
+ run_evaluation_for_eval(filtered_data, input_list, output_list, dataset_name, output_dir, total_time, "test")
716
+ else:
717
+ run_evaluation(filtered_data, input_list, output_list, dataset_name, output_dir, total_time, "test")
718
+
719
+ # Evaluate has-answer information
720
+ turn_files = os.listdir(output_dir)
721
+ turn_files = [file for file in turn_files if file.startswith("turn_")]
722
+ max_turn_file = max(turn_files, key=lambda x: int(re.search(r'turn_(\d+)', x).group(1)))
723
+
724
+ max_turn_file_path = os.path.join(output_dir, max_turn_file)
725
+ print(f"max_turn_file_path: {max_turn_file_path}")
726
+ add_eval(model_path, max_turn_file_path)
727
+
728
+ # ---------------------- Update Search and URL Cache ----------------------
729
+ print('Updating Search and URL Cache...')
730
+ # Load existing caches or initialize empty dictionaries
731
+ if os.path.exists(search_cache_path):
732
+ try:
733
+ with open(search_cache_path, 'r', encoding='utf-8') as f:
734
+ search_cache_new = json.load(f)
735
+ except Exception as e:
736
+ print(f"Error loading search cache: {e}")
737
+ search_cache_new = {}
738
+ else:
739
+ search_cache_new = {}
740
+
741
+ if os.path.exists(url_cache_path):
742
+ try:
743
+ with open(url_cache_path, 'r', encoding='utf-8') as f:
744
+ url_cache_new = json.load(f)
745
+ except Exception as e:
746
+ print(f"Error loading url cache: {e}")
747
+ url_cache_new = {}
748
+ else:
749
+ url_cache_new = {}
750
+
751
+ search_cache.update(search_cache_new)
752
+ url_cache.update(url_cache_new)
753
+
754
+ save_caches()
755
+
756
+ print("Process completed.")
757
+
758
+ if __name__ == "__main__":
759
+ main()
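Aside: the step-truncation heuristic used in the reasoning loop above (keep the first step, the last four steps, and any step carrying a search tag; elide the rest behind a single "..." marker) can be sketched standalone. The tag strings below are illustrative assumptions standing in for the script's `BEGIN_SEARCH_QUERY` / `BEGIN_SEARCH_RESULT` constants:

```python
# Minimal sketch of the step-truncation strategy, assuming these tag values.
BEGIN_SEARCH_QUERY = "<|begin_search_query|>"
BEGIN_SEARCH_RESULT = "<|begin_search_result|>"

def truncate_steps(steps):
    # Short histories are kept verbatim.
    if len(steps) <= 5:
        return "\n\n".join(steps)
    out = ""
    for i, step in enumerate(steps):
        # Keep the first step, the last four, and any search-tagged step.
        if i == 0 or i >= len(steps) - 4 or BEGIN_SEARCH_QUERY in step or BEGIN_SEARCH_RESULT in step:
            out += step + "\n\n"
        # Collapse each elided run into a single "..." marker.
        elif not out.endswith("\n\n...\n\n"):
            out += "...\n\n"
    return out.strip("\n")

print(truncate_steps([f"Step {i}" for i in range(1, 9)]))
```

This keeps the prompt bounded while preserving the steps the summarizer most needs: the question setup and the freshest reasoning.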
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/benchmarks/__init__.py ADDED
@@ -0,0 +1,13 @@
1
+ from lcb_runner.benchmarks.code_generation import (
2
+ CodeGenerationProblem,
3
+ load_code_generation_dataset,
4
+ load_code_generation_dataset_not_fast,
5
+ )
6
+ from lcb_runner.benchmarks.test_output_prediction import (
7
+ TestOutputPredictionProblem,
8
+ load_test_prediction_dataset,
9
+ )
10
+ from lcb_runner.benchmarks.code_execution import (
11
+ CodeExecutionProblem,
12
+ load_code_execution_dataset,
13
+ )
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/benchmarks/code_execution.py ADDED
@@ -0,0 +1,67 @@
1
+ import json
2
+ from enum import Enum
3
+ from datetime import datetime
4
+ from dataclasses import dataclass
5
+
6
+ from datasets import load_dataset
7
+
8
+
9
+ @dataclass
10
+ class CodeExecutionProblem:
11
+ question_id: str
12
+ contest_id: str
13
+ contest_date: datetime
14
+ difficulty: str
15
+ function_name: str
16
+ code: str
17
+ input: str
18
+ output: str
19
+ id: str
20
+ problem_id: str
21
+ numsteps: int
22
+
23
+ def __post_init__(self):
24
+ pass
25
+
26
+ def insert_output(self, output_list: list[str], pred_list: list[str]) -> dict:
27
+ return {
28
+ "question_id": self.question_id,
29
+ "contest_id": self.contest_id,
30
+ "contest_date": self.contest_date.isoformat(),
31
+ "difficulty": self.difficulty,
32
+ "function_name": self.function_name,
33
+ "code": self.code,
34
+ "input": self.input,
35
+ "output": self.output,
36
+ "id": self.id,
37
+ "problem_id": self.problem_id,
38
+ "numsteps": self.numsteps,
39
+ "output_list": output_list,
40
+ "pred_list": pred_list,
41
+ }
42
+
43
+ def insert_output_evaluation(
44
+ self, output_list: list[str], code_list: list[str], graded_list: list[bool]
45
+ ) -> dict:
46
+ output = self.insert_output(output_list, code_list)
47
+ output["graded_list"] = graded_list
48
+ output["pass@1"] = graded_list.count(True) / len(graded_list)
49
+ return output
50
+
51
+ def get_evaluation_sample(self) -> dict:
52
+ return {
53
+ "code": self.code,
54
+ "input": self.input,
55
+ "output": self.output,
56
+ }
57
+
58
+
59
+ def load_code_execution_dataset(release_version="release_v1") -> list[CodeExecutionProblem]:
60
+ dataset = load_dataset("livecodebench/execution-v2", split="test")
61
+ dataset = [CodeExecutionProblem(**p) for p in dataset] # type: ignore
62
+ print(f"Loaded {len(dataset)} problems")
63
+ return dataset
64
+
65
+
66
+ if __name__ == "__main__":
67
+ dataset = load_code_execution_dataset()
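Aside: the `pass@1` field produced by `insert_output_evaluation` is just the fraction of graded samples marked `True`. A minimal standalone sketch (the helper name is illustrative, not part of the module):

```python
def pass_at_1(graded_list):
    # Fraction of graded samples that passed.
    return graded_list.count(True) / len(graded_list)

print(pass_at_1([True, False, True, True]))  # → 0.75
```

Note this raises `ZeroDivisionError` on an empty `graded_list`, matching the behavior of the inline expression in the dataclass.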
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/benchmarks/code_generation.py ADDED
@@ -0,0 +1,139 @@
1
+ import json
2
+ import zlib
3
+ import pickle
4
+ import base64
5
+ from enum import Enum
6
+ from datetime import datetime
7
+ from dataclasses import dataclass
8
+
9
+ from datasets import load_dataset
10
+
11
+
12
+ class Platform(Enum):
13
+ LEETCODE = "leetcode"
14
+ CODEFORCES = "codeforces"
15
+ ATCODER = "atcoder"
16
+
17
+
18
+ class Difficulty(Enum):
19
+ EASY = "easy"
20
+ MEDIUM = "medium"
21
+ HARD = "hard"
22
+
23
+
24
+ class TestType(Enum):
25
+ STDIN = "stdin"
26
+ FUNCTIONAL = "functional"
27
+
28
+
29
+ @dataclass
30
+ class Test:
31
+ input: str
32
+ output: str
33
+ testtype: TestType
34
+
35
+ def __post_init__(self):
36
+ self.testtype = TestType(self.testtype)
37
+ # if self.testtype == TestType.FUNCTIONAL:
38
+ # self.input = json.loads(self.input)
39
+ # self.output = json.loads(self.output)
40
+
41
+
42
+ @dataclass
43
+ class CodeGenerationProblem:
44
+ question_title: str
45
+ question_content: str
46
+ platform: Platform
47
+ question_id: str
48
+ contest_id: str
49
+ contest_date: datetime
50
+ starter_code: str
51
+ difficulty: Difficulty
52
+ public_test_cases: list[Test]
53
+ private_test_cases: list[Test]
54
+ metadata: dict
55
+
56
+ def __post_init__(self):
57
+ self.platform = Platform(self.platform)
58
+ self.difficulty = Difficulty(self.difficulty)
59
+ self.contest_date = datetime.fromisoformat(self.contest_date)
60
+
61
+ self.public_test_cases = json.loads(self.public_test_cases) # type: ignore
62
+ self.public_test_cases = [Test(**t) for t in self.public_test_cases]
63
+
64
+ try:
65
+ self.private_test_cases = json.loads(self.private_test_cases) # type: ignore
66
+ except Exception:
67
+ self.private_test_cases = json.loads(
68
+ pickle.loads(
69
+ zlib.decompress(
70
+ base64.b64decode(self.private_test_cases.encode("utf-8")) # type: ignore
71
+ )
72
+ )
73
+ ) # type: ignore
74
+ self.private_test_cases = [Test(**t) for t in self.private_test_cases]
75
+
76
+ self.metadata = json.loads(self.metadata) # type: ignore
77
+
78
+ def insert_output(self, output_list: list[str], code_list: list[str]) -> dict:
79
+ return {
80
+ "question_title": self.question_title,
81
+ "question_content": self.question_content,
82
+ "platform": self.platform.value,
83
+ "question_id": self.question_id,
84
+ "contest_id": self.contest_id,
85
+ "contest_date": self.contest_date.isoformat(),
86
+ "starter_code": self.starter_code,
87
+ "difficulty": self.difficulty.value,
88
+ "output_list": output_list,
89
+ "code_list": code_list,
90
+ }
91
+
92
+ def insert_output_evaluation(
93
+ self,
94
+ output_list: list[str],
95
+ code_list: list[str],
96
+ graded_list: list[bool],
97
+ **kwargs,
98
+ ) -> dict:
99
+ output = self.insert_output(output_list, code_list)
100
+ output["graded_list"] = graded_list
101
+ output["pass@1"] = graded_list.count(True) / len(graded_list)
102
+ for k, v in kwargs.items():
103
+ output[k] = v
104
+ return output
105
+
106
+ def get_evaluation_sample(self):
107
+ return {
108
+ "input_output": json.dumps(
109
+ {
110
+ "inputs": [
111
+ t.input
112
+ for t in self.public_test_cases + self.private_test_cases
113
+ ],
114
+ "outputs": [
115
+ t.output
116
+ for t in self.public_test_cases + self.private_test_cases
117
+ ],
118
+ "fn_name": self.metadata.get("func_name", None),
119
+ }
120
+ ),
121
+ }
122
+
123
+
124
+ def load_code_generation_dataset(release_version="release_v1") -> list[CodeGenerationProblem]:
125
+ dataset = load_dataset("livecodebench/code_generation_lite", split="test", version_tag=release_version, trust_remote_code=True)
126
+ dataset = [CodeGenerationProblem(**p) for p in dataset] # type: ignore
127
+ print(f"Loaded {len(dataset)} problems")
128
+ return dataset
129
+
130
+
131
+ def load_code_generation_dataset_not_fast(release_version="release_v1") -> list[CodeGenerationProblem]:
132
+ dataset = load_dataset("livecodebench/code_generation", split="test")
133
+ dataset = [CodeGenerationProblem(**p) for p in dataset] # type: ignore
134
+ print(f"Loaded {len(dataset)} problems")
135
+ return dataset
136
+
137
+
138
+ if __name__ == "__main__":
139
+ dataset = load_code_generation_dataset()
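Aside: the fallback branch in `CodeGenerationProblem.__post_init__` decodes large private test cases via base64 → zlib → pickle → json. A self-contained round-trip sketch; the encoder here is an assumption that simply mirrors that decode order:

```python
import base64
import json
import pickle
import zlib

def encode_tests(tests):
    # Hypothetical encoder mirroring the dataset's storage of large test lists:
    # JSON-serialize, pickle, compress, then base64-encode.
    return base64.b64encode(zlib.compress(pickle.dumps(json.dumps(tests)))).decode("utf-8")

def decode_tests(blob):
    # Mirrors the fallback path in CodeGenerationProblem.__post_init__.
    return json.loads(pickle.loads(zlib.decompress(base64.b64decode(blob.encode("utf-8")))))

tests = [{"input": "1 2", "output": "3", "testtype": "stdin"}]
assert decode_tests(encode_tests(tests)) == tests
```

The compression matters because private test suites can be large; the plain `json.loads` path is tried first and the compressed path is only a fallback.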
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/benchmarks/test_output_prediction.py ADDED
@@ -0,0 +1,70 @@
1
+ import json
2
+ from enum import Enum
3
+ from datetime import datetime
4
+ from dataclasses import dataclass
5
+
6
+ from datasets import load_dataset
7
+
8
+
9
+ @dataclass
10
+ class Test:
11
+ input: str
12
+ output: str
13
+ testtype: str
14
+
15
+
16
+ @dataclass
17
+ class TestOutputPredictionProblem:
18
+ question_title: str
19
+ question_content: str
20
+ question_id: str
21
+ contest_id: str
22
+ contest_date: datetime
23
+ difficulty: str
24
+ test: list[Test]
25
+ starter_code: str
26
+ function_name: str
27
+ test_id: int
28
+
29
+ def __post_init__(self):
30
+ self.test = [Test(**t) for t in json.loads(self.test)] # type: ignore
31
+
32
+ def insert_output(self, output_list: list[str], pred_list: list[str]) -> dict:
33
+ return {
34
+ "question_title": self.question_title,
35
+ "question_content": self.question_content,
36
+ "question_id": self.question_id,
37
+ "contest_id": self.contest_id,
38
+ "contest_date": self.contest_date.isoformat(),
39
+ "difficulty": self.difficulty,
40
+ "output_list": output_list,
41
+ "pred_list": pred_list,
42
+ "test_id": self.test_id,
43
+ "function_name": self.function_name,
44
+ "starter_code": self.starter_code,
45
+ }
46
+
47
+ def insert_output_evaluation(
48
+ self, output_list: list[str], code_list: list[str], graded_list: list[bool]
49
+ ) -> dict:
50
+ output = self.insert_output(output_list, code_list)
51
+ output["graded_list"] = graded_list
52
+ output["pass@1"] = graded_list.count(True) / len(graded_list)
53
+ return output
54
+
55
+ def get_evaluation_sample(self) -> dict:
56
+ return {
57
+ "input": self.question_content,
58
+ "output": self.test[0].output,
59
+ }
60
+
61
+
62
+ def load_test_prediction_dataset(release_version="release_v1") -> list[TestOutputPredictionProblem]:
63
+ dataset = load_dataset("livecodebench/test_generation", split="test") # type: ignore
64
+ dataset = [TestOutputPredictionProblem(**d) for d in dataset]
65
+ print(f"Loaded {len(dataset)} prediction problems")
66
+ return dataset
67
+
68
+
69
+ if __name__ == "__main__":
70
+ dataset = load_test_prediction_dataset()
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__init__.py ADDED
@@ -0,0 +1,6 @@
1
+ from lcb_runner.evaluation.compute_code_generation_metrics import codegen_metrics
2
+ from lcb_runner.evaluation.compute_code_execution_metrics import code_execution_metrics
3
+ from lcb_runner.evaluation.compute_test_output_prediction_metrics import (
4
+ test_output_metrics,
5
+ )
6
+ from lcb_runner.evaluation.pass_k_utils import extract_instance_results
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (591 Bytes). View file
 
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/__init__.cpython-311.pyc ADDED
Binary file (643 Bytes). View file
 
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/__init__.cpython-39.pyc ADDED
Binary file (560 Bytes). View file
 
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/compute_code_execution_metrics.cpython-310.pyc ADDED
Binary file (1.77 kB). View file
 
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/compute_code_execution_metrics.cpython-311.pyc ADDED
Binary file (3.07 kB). View file
 
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/compute_code_execution_metrics.cpython-39.pyc ADDED
Binary file (1.74 kB). View file
 
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/compute_code_generation_metrics.cpython-310.pyc ADDED
Binary file (6.24 kB). View file
 
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/compute_code_generation_metrics.cpython-311.pyc ADDED
Binary file (11.9 kB). View file
 
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/compute_code_generation_metrics.cpython-39.pyc ADDED
Binary file (6.2 kB). View file
 
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/compute_test_output_prediction_metrics.cpython-310.pyc ADDED
Binary file (2.41 kB). View file
 
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/compute_test_output_prediction_metrics.cpython-311.pyc ADDED
Binary file (4.09 kB). View file
 
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/compute_test_output_prediction_metrics.cpython-39.pyc ADDED
Binary file (2.38 kB). View file
 
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/pass_k_utils.cpython-310.pyc ADDED
Binary file (2.84 kB). View file
 
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/pass_k_utils.cpython-311.pyc ADDED
Binary file (5.45 kB). View file
 
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/pass_k_utils.cpython-39.pyc ADDED
Binary file (2.84 kB). View file
 
deep_search/search_o1/scripts/SimpleDeepSearcher/inference/lcb_runner/evaluation/__pycache__/testing_util.cpython-310.pyc ADDED
Binary file (15.7 kB). View file