diff --git a/aaar10assessingaispotentialtoassistresearch/86bc9287-7ab0-4f58-9e16-bf41de17edfb_content_list.json b/aaar10assessingaispotentialtoassistresearch/86bc9287-7ab0-4f58-9e16-bf41de17edfb_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..bcef1c1379183edc5f58bf592dc6a293fa737299 --- /dev/null +++ b/aaar10assessingaispotentialtoassistresearch/86bc9287-7ab0-4f58-9e16-bf41de17edfb_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:18f0a2c4f7b7e42d838c3bee9d4889ed0b18a9c217e8cfdebdcf6648dd30a6ae +size 137336 diff --git a/aaar10assessingaispotentialtoassistresearch/86bc9287-7ab0-4f58-9e16-bf41de17edfb_model.json b/aaar10assessingaispotentialtoassistresearch/86bc9287-7ab0-4f58-9e16-bf41de17edfb_model.json new file mode 100644 index 0000000000000000000000000000000000000000..bedf3cfce122c2b3dccf210352e411e8c8887c10 --- /dev/null +++ b/aaar10assessingaispotentialtoassistresearch/86bc9287-7ab0-4f58-9e16-bf41de17edfb_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe575a6024c9bc3f5c3e8037b6e45efdc2005be9380ef42bd61926343daee62c +size 162123 diff --git a/aaar10assessingaispotentialtoassistresearch/86bc9287-7ab0-4f58-9e16-bf41de17edfb_origin.pdf b/aaar10assessingaispotentialtoassistresearch/86bc9287-7ab0-4f58-9e16-bf41de17edfb_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7de3263a0c78f752a849b439a73bd7becd6b76a4 --- /dev/null +++ b/aaar10assessingaispotentialtoassistresearch/86bc9287-7ab0-4f58-9e16-bf41de17edfb_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d91cc77a237ccddc1ddae9b9adbb09fb51daee7e46e45e40aa864c873ad87f37 +size 3477634 diff --git a/aaar10assessingaispotentialtoassistresearch/full.md b/aaar10assessingaispotentialtoassistresearch/full.md new file mode 100644 index 0000000000000000000000000000000000000000..4848153ec1008ec3ed722d3134a7fc558b1ade4d --- /dev/null +++ 
b/aaar10assessingaispotentialtoassistresearch/full.md @@ -0,0 +1,450 @@ +Renze Lou1 Hanzi Xu2 Sijia Wang3 Jiangshu Du4 Ryo Kamoi1 Xiaoxin Lu1 Jian Xie5 Yuxuan Sun5 Yusen Zhang1 Jihyun Janice Ahn1 Hongchao Fang1 Zhuoyang Zou1 Wenchao Ma1 Xi Li6 Kai Zhang7 Congying Xia5 Lifu Huang3 Wenpeng Yin1

# Abstract

Numerous studies have assessed the proficiency of AI systems, particularly large language models (LLMs), in facilitating everyday tasks such as email writing, question answering, and creative content generation. However, researchers face unique challenges and opportunities in leveraging LLMs for their own work, such as brainstorming research ideas, designing experiments, and writing or reviewing papers. In this study, we introduce AAAR-1.0, a benchmark dataset designed to evaluate LLM performance in three fundamental, expertise-intensive research tasks: (i) EQUATIONINFERENCE, assessing the correctness of equations based on the contextual information in paper submissions; (ii) EXPERIMENTDESIGN, designing experiments to validate research ideas and solutions; and (iii) PAPERWEAKNESS, identifying weaknesses in paper submissions. AAAR-1.0 differs from prior benchmarks in two key ways: first, it is explicitly research-oriented, with tasks requiring deep domain expertise; second, it is researcher-oriented, mirroring the primary activities that researchers engage in on a daily basis. An evaluation of both open-source and closed-source LLMs reveals their potential as well as their limitations in conducting sophisticated research tasks. We will keep iterating AAAR-1.0 into new versions. Project Webpage: https://renzelou.github.io/AAAR-1.0/

![](images/44e999bb32205b682c1dd79a60bb66513a8cedc3474d91b4c3a32f4ce2b67454.jpg)
Figure 1: The input-output illustration of the three tasks in the proposed AAAR-1.0 benchmark.

# 1. 
Introduction

Although AI has brought transformative changes to various aspects of life, its impact on researchers unfolds in a nuanced manner. On the one hand, AI assists in various research disciplines, such as Social Science (Neuman et al., 2023), Finance (Gu et al., 2024), Medicine (Rakhimov et al., 2022), GeoScience (Praskievicz, 2018), etc., significantly expediting academic processes. However, many of these applications are superficial, often limited to data-driven clustering or classification. On the flip side, the AI era poses challenges for researchers. Despite its ability to streamline some activities, researchers still face demanding, cognitively intensive tasks such as staying current through extensive paper reading, rapidly generating ideas in response to fast-paced advancements, conducting rigorous experiments to substantiate claims, and managing an increasing volume of peer reviews. Then a question looms: How effectively can AI assist researchers in tasks that are domain-specific, expertise-demanding, and reasoning-intensive?

Existing works have shown the promising potential of LLMs in assisting AI research. Si et al. (2024) conducted a large-scale human study and found that LLMs can generate creative research ideas. Lu et al. (2024) proposed an autonomous agent to handle a complicated research workflow and write a whole research paper. However, most of these works focus on addressing highly subjective problems that require a high degree of expertise, making evaluation laborious and hard to reproduce. This underscores the need for a comprehensive benchmark that rigorously assesses LLMs' capabilities in expertise-intensive research activities.

To this end, in this work, we introduce AAAR-1.0, a novel benchmark that aims to comprehensively assess LLMs' capacity on expert-level research tasks. 
As illustrated in Figure 1, AAAR-1.0 distills three distinct expert-level AI research tasks from researchers' daily activities, including i) EQUATIONINFERENCE, investigating whether LLMs can infer an equation's correctness from the paper context; ii) EXPERIMENTDESIGN, validating LLMs' ability in designing reliable experiments for a research idea; and iii) PAPERWEAKNESS, testing the quality of weaknesses discovered by LLMs in paper drafts. To ensure data quality, senior AI researchers with extensive domain expertise perform data annotation for AAAR-1.0, followed by rigorous multi-round data examination and filtering. All three tasks require models to possess strong domain knowledge covering various cutting-edge research findings, as well as expert-level research experience, to the extent that even humans need substantial research accumulation to tackle the tasks we designed. Crucially, tasks here are singular, standalone challenges (with clear input and output expectations) rather than a complicated task chain (Li et al., 2024; Lu et al., 2024), providing a more transparent assessment of the model's intermediate output. Benefiting from the proposed automatic metrics, we conduct extensive experiments across numerous mainstream LLMs, where we find that:

- With a naive baseline of $40\%$ $\mathrm{F}_1$ (predicting every equation as positive), the performance of most LLMs on EQINFER hovers just slightly above it, with the top models reaching around $46\%$ . This highlights the difficulty of the task, despite its reliance primarily on local context reasoning.
- In EXPDESIGN, LLM-designed experiments are innovative and more diverse than those by humans; however, many are trivial, lack feasibility, and stray from the original research objectives.
- In PAPERWEAKNESS, LLM-identified weaknesses often lack depth and specificity, making them broadly applicable and less useful for providing feedback on paper drafts.

# 2. Related Work

LLMs for AI Research. 
With the rapid evolution of pretraining techniques, LLMs are found to be useful in assisting various research disciplines (Yu et al., 2024a; Labrak et al., 2024), particularly in AI research, such as generating novel research ideas (Kumar et al., 2024; Yu et al., 2024b), reviewing research drafts (Gao et al., 2024; Du et al., 2024; Liang et al., 2024; Zhu et al., 2025), and writing scientific papers (Chamoun et al., 2024; Lu et al., 2024; Weng et al., 2024). For example, Si et al. (2024) conducted a large-scale human investigation on LLM-generated research ideas and found that LLMs can generate novel ideas compared with humans while lacking feasibility. Du et al. (2024) found that while LLMs are effective at summarizing papers, they tend to overly trust the authors' claimed strengths and struggle to identify weaknesses specific to the paper. Furthermore, some works try to employ LLMs to solve more complicated research tasks that are composed of multiple steps (Li et al., 2024; Tang et al., 2023). Notably, Lu et al. (2024) proposed AI-SCIENTIST, an autonomous agent framework that can handle a series of challenging research tasks consecutively, including generating research ideas, coming up with the corresponding experiments along with the implementations, and then writing the final research paper, exactly how humans conduct a whole research pipeline. However, there is still a lack of systematic evaluations and quantitative analyses of LLMs' (intermediate) outputs on each single-step research task. Accordingly, our work focuses on building a benchmark consisting of individual research steps with clear input-output expectations, making it suitable for comprehensive LLM evaluation. Moreover, we emphasize that relying on LLMs to fully replace human effort might compromise academic integrity. 
While our benchmark primarily serves an educational purpose — LLMs assist junior researchers by providing imperfect but insightful ideas, rather than by governing the entire research process. + +Benchmarks for AI Research Tasks. Existing "LLM assists research" benchmarks mainly focus on the implementation and execution part of the research pipeline (Lu et al., 2024; Chen et al., 2024a; Li et al., 2024; Chan et al., 2024). For instance, Huang et al. (2024) proposed MLAgentBench to test the LLMs' capacity for writing project code and training the ML models, where the evaluation metric is the test performance of the models trained by LLMs. However, real-world AI research activities are diverse and some of them are hard to assess for quality, such as generating research ideas, which requires intensive manual assessment (Si et al., 2024; Liang et al., 2024). Our work centers on tasks that emphasize a comprehensive mastery of the scientific research field and core elements of a researcher's daily workload, and we try to build curated task-specific metrics for every single task for a more efficient and accurate LLMs appraisal. + +# 3. AAAR-1.0 + +Figure 2 provides a data construction overview. In the following sections, we elaborate on the data collection de + +![](images/9439aa8825a3c5ec13bb7c072b1da5ac19ef20fea05a84a507e03900d0ca72d9.jpg) +Figure 2: Data construction workflows of the three tasks in AAAR-1.0. + +tails, including § 3.1 EQUATION INFERENCE (EQINFER), § 3.2 EXPERIMENT DESIGN (EXPDESIGN), and § 3.3 PAPER WEAKNESS (WEAKNESS). + +# 3.1. EQUATIONINFERENCE + +Crafting a correct scientific equation in paper writing or validating an equation in paper reviewing is challenging, as it requires a thorough understanding of an algorithm or the intricate relationships among numerous variables. Directly prompting LLMs to generate equations proves overly demanding. Therefore, this work formulates EQINFER (Figure 1) as a binary inference task. 
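To make the binary formulation concrete, a single EQINFER instance can be posed to an LLM roughly as follows. This is a hypothetical sketch: the prompt wording and the `build_eqinfer_prompt` helper are illustrative assumptions, not the benchmark's actual template.

```python
# Illustrative sketch of posing one EQINFER instance as a binary query.
# The prompt text below is an assumption for demonstration purposes.

def build_eqinfer_prompt(context_before: str, equation: str, context_after: str) -> str:
    """Assemble one binary-inference query from a paper snippet and a candidate equation."""
    return (
        "You are reviewing a paper. Given the surrounding LaTeX context, "
        "decide whether the candidate equation is correct.\n\n"
        f"Context before:\n{context_before}\n\n"
        f"Candidate equation:\n{equation}\n\n"
        f"Context after:\n{context_after}\n\n"
        "Answer with exactly one word: Correct or Incorrect."
    )

prompt = build_eqinfer_prompt(
    context_before=r"We define the attention weights as follows:",
    equation=r"\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_k \exp(e_{ik})}",
    context_after=r"where $e_{ij}$ is the unnormalized score.",
)
```

In the actual task, the surrounding context comes from the paper's cleaned LaTeX source, and the model's answer is mapped to the positive/negative label.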
$①$ Data crawling and cleaning. For the data source, we adopt the pre-compilation LaTeX code for two reasons: i) existing PDF parsing tools, such as PyMuPDF and PaperMage (Lo et al., 2023), can introduce considerable noise to the parsed equation text; ii) considering that most existing LLMs are capable of processing LaTeX code, using the LaTeX source instead of parsed text can be more accurate and provide LLMs with richer information. Meanwhile, we only crawl peer-reviewed papers accepted by top-tier conferences to avoid using low-quality human-written equations. Accordingly, we first obtain the accepted paper list from the ACL Anthology, from 2019 to 2023. Next, we search each paper on arXiv to crawl its LaTeX source (if it exists). Finally, we get a total of 1,762 papers' source LaTeX packages. We then clean the LaTeX sources by deleting all the comments and combining multiple cross-referred .tex files into a main file. Afterward, we use regex to randomly extract (at most) 3 equation code snippets per paper, resulting in 3,877 human-written equations.

$②$ LLM-based equation synthesis. As EQINFER assesses whether LLMs can infer the correctness of an equation (i.e., binary classification), for each human-written positive equation, we have to craft counterpart negative equations. To this end, for each positive equation, we prompt GPT-4 to synthesize a negative equation based on the paper context. We repeat this prompt (with a high decoding temperature) until three different negative equations are synthesized.

$③$ LLM-based filtering. However, the LLM-synthesized equations can be context-unaligned, i.e., some synthesized equations contain notation that is never defined in the paper context, which becomes a superficial shortcut that is too effortless for LLMs to identify. To improve data quality, we prompt GPT-4 to identify context-unaligned negative equations. 
We then eliminate any positive equation, together with its counterparts, whose three negative counterparts are all unaligned. This filtering leads to a final set of 1,449 positive equations and 4,347 negative equations (each positive equation has three negative counterparts, and at least one negative counterpart is "challenging").

$④$ Expert-based examination. Furthermore, it is also possible that synthesized negative equations are actually correct (i.e., false negatives): even if the negative and positive equations are written differently, the final compiled results might be the same. We therefore employ human experts to review the data further and filter out false negative equations, checking the classification instances for accuracy.

We ask 5 senior PhD students who are experienced in AI research to check all instances. We ask the human experts to consider the following criteria for each positive equation and its negative counterparts (each pair): i) Are all equations grammatically correct? ii) After compilation, are all negative equations different from the positive one? We ask every expert to use external LaTeX compilation tools (e.g., TeX Live) and identify the pairs that cannot meet the criteria. Each pair is examined by at least two experts, and we only keep pairs that all experts decide to keep. After this strict examination, a total of 1,049 pairs are eventually kept (27.6% of pairs are filtered).

Final data. We finally obtain 1,049 positive equations (each has three negative counterparts). We show data statistics of EQINFER in Table 7 and data examples in Figure 8.

# 3.2. EXPERIMENTDESIGN

Given a research topic, such as a novel ML algorithm, a qualified researcher can design a solid experiment plan for it and clarify the underlying motivation to ensure the reliability of the designed experiment. 
Unlike concurrent works that focus on experiment implementation (Lu et al., 2024; Huang et al., 2024), we emphasize the importance of assessing the high-level experiment design of LLMs before the subsequent implementation, to avoid expensive execution iterations. Therefore, as shown in Figure 1, we formulate EXPDESIGN as a text-generation task that takes the pre-experiment paper context as input and then generates the experiment and explanation lists.

$①$ Data crawling. As for the data source, we first collect $\geq 10\mathrm{k}$ papers' data from arXiv, including LaTeX sources and PDFs, covering broad AI categories, including cs.AI, cs.CL, and cs.CV, from 2018 to 2023. Similarly, to ensure the source data quality, we only use papers that have appeared at well-known conferences.

$②$ Domain-expert annotation. Making a reliable and executable experiment plan requires solid foundational knowledge of a specific research area. Consequently, we set a high standard for choosing annotators: i) be a senior Ph.D. student with at least one peer-reviewed publication in leading AI venues; ii) have more than 4 years of AI research experience; iii) frequently serve as a conference reviewer. Finally, we invite a total of 10 qualified experts to participate in our data collection procedure. Given the $10\mathrm{k}$ crawled papers, we first ask every annotator to bid on the papers that they are interested in. After bidding, each of them is assigned 10 papers, i.e., a total of 100 papers to be annotated. During annotation, we post each paper PDF on Google Drive and ask the annotator to first carefully read the whole paper. Then, we ask them to identify and locate the key experiments in each paper (i.e., highlighting the relevant paragraphs of each experiment). We do not consider trivial experiments, such as the supplementary analyses in the appendix. 
For each identified experiment, the annotator has to concisely answer two questions: i) What did this experiment do? ii) Why did the paper authors conduct this experiment? In other words, we ask the annotator to summarize all the key experiments in this paper and explain the underlying motivations based on their rich domain experience.

$③$ Multi-round peer discussion. Intuitively, different experts might have different opinions on the same research topic. Particularly, when explaining the underlying motivation of an experiment, adopting only a single expert's opinion might introduce bias to our annotation. Hence, we conduct a further multi-round peer discussion. For each paper, once all the key experiments are identified, summarized, and explained, we ask a different expert (reviewer) to review the annotation by considering the following three criteria: i) Are the identified experiments all the key experiments? ii) Does each experiment summary cover all key information? iii) Does each explanation sound reasonable and reliable? Each reviewer must leave comments on the online PDF regarding the above criteria, and the annotator must then respond to each comment, either accepting the suggestion and revising the previous annotation or providing a "rebuttal" to the reviewer to uphold the annotation. This discussion iterates until both opinions align. Eventually, for each paper, we collect two lists: i) the experiment list, summarizing each experiment step of the paper; ii) the explanation list, the underlying motivations in one-to-one correspondence with the experiment list.

Final data. After annotation, we use the pre-experiment context of each paper (according to the first-experiment location identified by the annotator) as the input. Furthermore, we use GPT-4 to delete any sentence that potentially leaks the experiments from the input.$^{3}$ Similar to EQINFER, we use the LaTeX source as the input text to avoid PDF parsing noise. 
As for the image input, we collect the figures within each paper's source LaTeX package and only keep figures that are used in the pre-experiment context. Overall, a total of 100 instances are collected. As shown in Figure 1, the input of each instance is the pre-experiment context (including the figures), and the ground-truth output is the expert-annotated experiment plan and the explanations. Table 8 shows data statistics, and Figure 9 illustrates a sample case of EXPDESIGN.

# 3.3. PAPERWEAKNESS

Another critical research task is paper review. Previous works have demonstrated the usefulness of LLM-based review feedback (Gao et al., 2024; Jin et al., 2024; Lu et al., 2024). However, as indicated by Du et al. (2024) and Liang et al. (2024), LLMs only excel at summarizing the research strengths while falling significantly short on weakness criticism. Hence, we build WEAKNESS specifically to investigate LLM-generated weaknesses.

$①$ Data crawling. We first crawl a total of 3,779 anonymous submissions of ICLR 2023 from OpenReview,$^{4}$ including PDFs and other meta information (e.g., scores, decisions, and tracks). As ICLR 2023 has 13 distinct tracks and the paper distribution across tracks is highly skewed, we uniformly sample papers from different research tracks to improve the domain diversity. Meanwhile, during sampling, we also keep the accepted/rejected papers equally distributed to avoid data bias. In total, we collect 1,000 papers (500 accepted; 500 rejected), uniformly covering all 13 tracks. Please refer to Figure 3 for the track and score distribution of the 1,000 papers.

$②$ Extraction of human-written weaknesses. Since the raw comments crawled from ICLR 2023 mix strengths and weaknesses, we further employ GPT-4 to extract all the weaknesses from each reviewer's comments and compose the multiple weaknesses into a list. 
Notably, we force GPT-4 to keep the original text of the reviewer, i.e., all weaknesses in our dataset are the original sentences written by the reviewer without any modification. What's more, sometimes one reviewer might repeatedly mention the same weakness throughout the comment. In this case, we simply keep all the repeated weaknesses: if one weakness is repeatedly mentioned by the reviewer, it is intuitively an important weakness that the reviewer wants to emphasize; accordingly, keeping the repeated items penalizes LLMs more for missing this weakness.

For each paper, we finally get multiple weakness lists (one weakness list per reviewer; one paper can have multiple reviewers). We further delete the few papers without any weaknesses found in the raw comments, resulting in a total of 993 instances, i.e., 993 {paper, weakness lists} pairs.

$③$ Input data processing. As mentioned before, we crawl papers from OpenReview instead of arXiv because the under-review paper draft is required for this task. However, not every paper from OpenReview can be found on arXiv, i.e., the source LaTeX code and figures of most under-review papers are unavailable. Therefore, we utilize VILA (Lin et al., 2023) to parse the text out of the PDF; we also employ PDFFigures-2.0 (Clark & Divvala, 2016) to extract all the figures and tables (as images) from the paper, as VILA is not good at processing table data.

$^{4}$We adopt ICLR because it releases full submissions, while some other conferences only release accepted papers.

$^{5}$We manually checked GPT-4's extraction results on 200 cases: GPT-4 missed only $\leq 1\%$ of reviewer-written weaknesses and maintained almost all the original text.

Final data. Our final data is composed of 993 instances: each input is the paper text along with figure/table images, and each output is the peer reviewers' weakness lists. Table 9 shows data statistics; Figure 10 presents an example of the data instances. 
We show the data diversity (score and track distribution) in Figure 3.

# 4. Evaluation Criteria

For EQINFER, we adopt $\mathrm{F}_1$ as the classification criterion. For EXPDESIGN and WEAKNESS, since both tasks have free-form outputs, we develop several novel task-specific metrics in addition to the conventional ROUGE (Lin, 2004).

We use LLMs to evaluate the experiment list of EXPDESIGN. Specifically, given a model-predicted experiment list $p$ and the ground-truth list $g$, we calculate:

$$
\text{En-Precision} = \frac{1}{m} \sum_{i=1}^{m} f\left(p_i, g\right) \tag{1}
$$

$$
\text{En-Recall} = \frac{1}{n} \sum_{j=1}^{n} f\left(g_j, p\right) \tag{2}
$$

where $m$ and $n$ are the list lengths of $p$ and $g$, and $f(\cdot)$ represents LLM prompting: we prompt an LLM to decide whether each predicted experiment item $p_i$ is entailed by the whole ground-truth list $g$, producing a binary output, and vice versa. Intuitively, En-Precision reflects how many predicted experiments match ground-truth experiments, while En-Recall reflects how many ground-truth experiments are covered by the predictions. In this work, we use GPT-4o as the evaluator.

As for the explanation generation of EXPDESIGN, since the predicted explanations are in one-to-one correspondence with the ground truth, we adopt a semantic-based metric:

$$
\text{S-Match} = \frac{1}{m} \sum_{i=1}^{m} \operatorname{sim}\left(p_i, g_i\right) \tag{3}
$$

where we use SentenceBERT (Reimers, 2019) to measure the semantic similarity between $p_i$ and $g_i$.

Unlike EXPDESIGN, the ground truth of WEAKNESS is multiple reviewers' weakness lists. 
Instead of merely merging the opinions of various reviewers into one flattened list and keeping LLM-as-judge as the metric (which is not only costly but also loses the structural information of diverse research perspectives), we employ the following semantic-based metrics to efficiently evaluate predicted weaknesses:

$$
\text{S-Precision} = \frac{1}{m} \sum_{i=1}^{m} \left( \frac{1}{r} \sum_{k=1}^{r} \max_{j} \operatorname{sim}\left(p_i, g_j^k\right) \right) \tag{4}
$$

$$
\text{S-Recall} = \frac{1}{r} \sum_{k=1}^{r} \left( \frac{1}{n_k} \sum_{j=1}^{n_k} \max_{i} \operatorname{sim}\left(g_j^k, p_i\right) \right) \tag{5}
$$

where $r$ is the number of reviewers of the given paper, $n_k$ is the length of the $k$-th reviewer's weakness list, and $g_j^k$ indicates the $j$-th item in the $k$-th reviewer's weakness list.

Additionally, in the real world, a review weakness is considered reliable if it is specific to the paper. Meanwhile, we also hope the review is informative, i.e., contains no excessively similar weaknesses within one review. Inspired by the classic TF-IDF, we propose a novel review diversity metric:

$$
\text{ITF-IDF} = \frac{1}{w} \sum_{j=1}^{w} \left( \frac{1}{m_j} \sum_{i=1}^{m_j} \log\left(\frac{m_j}{O_i^j}\right) \times \log\left(\frac{w}{R_i^j}\right) \right) \tag{6}
$$

$$
O_i^j = \sum_{k=1}^{m_j} \operatorname{sim}\left(p_i^j, p_k^j\right) \tag{7}
$$

$$
R_i^j = \sum_{l=1}^{w} \max_{s} \operatorname{sim}\left(p_i^j, p_s^l\right) \tag{8}
$$

where $w$ is the total number of papers in the dataset, $p^j$ is the $j$-th paper's predicted weakness list, and $p_i^j$ is the $i$-th weakness in $p^j$. 
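A minimal sketch of S-Precision and S-Recall (Eqs. 4-5) in plain Python, assuming a pluggable `sim` function; the toy token-overlap similarity below is only a self-contained stand-in for the sentence-embedding similarity used in this work.

```python
# Sketch of Eqs. (4)-(5). sim() here is a toy Jaccard overlap; in practice it
# would be a cosine similarity over sentence embeddings.

def sim(a: str, b: str) -> float:
    """Toy similarity in [0, 1]; replace with embedding cosine similarity."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def s_precision(pred: list[str], reviews: list[list[str]]) -> float:
    # For each predicted weakness p_i, average (over the r reviewers) its best
    # match within each reviewer's list, then average over all m predictions.
    r = len(reviews)
    return sum(
        sum(max(sim(p, g) for g in gl) for gl in reviews) / r for p in pred
    ) / len(pred)

def s_recall(pred: list[str], reviews: list[list[str]]) -> float:
    # For each reviewer, average each ground-truth weakness's best match among
    # the predictions, then average over reviewers.
    return sum(
        sum(max(sim(g, p) for p in pred) for g in gl) / len(gl) for gl in reviews
    ) / len(reviews)

pred = ["the evaluation lacks strong baselines", "limited novelty of the method"]
reviews = [["no strong baselines in the evaluation"], ["the method has limited novelty"]]
```

Keeping one list per reviewer (rather than one flattened list) is what lets S-Recall reward predictions that cover each reviewer's distinct perspective.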
Moreover, $O_i^j$ calculates the intra-paper occurrence frequency of $p_i^j$, and $R_i^j$ is the "soft" number of papers that also contain $p_i^j$, computed by summing the maximum similarity scores between $p_i^j$ and the other papers' weaknesses. In short, $O_i^j$ measures informativeness and $R_i^j$ measures specificity; the complete ITF-IDF considers both aspects and reflects the overall weakness diversity.

# 5. Experiments and Analyses

In this section, we conduct extensive experiments on AAAR-1.0 across various mainstream LLMs to quantify the current LLMs' capacity to tackle high-level research tasks. Specifically, § 5.1 covers EQINFER, § 5.2 covers EXPDESIGN, and § 5.3 covers WEAKNESS. Please refer to Appendix B.2 for running details of the LLMs.

# 5.1. EQUATIONINFERENCE

Settings. As different LLMs have distinct context windows, to ensure a fair comparison, we fix the maximum input length for all models. According to Table 7, we empirically use 1,000 words of context on each side of the equation, i.e., 2,000 surrounding words.

Main results. Table 1 shows the main results. Firstly, a simple baseline that predicts all equations as positive achieves $40\%$ $\mathrm{F_1}$ (due to the 1:3 ratio of positive to negative equations), while nearly all open-source LLMs fail to beat this naive baseline. Notably, though Mixtral performs slightly better than the baseline, its extremely biased precision and recall imply that it is also simply predicting almost all samples as positive instead of truly inferring. Meanwhile, compared to the All-Positive baseline, the superiority of the strong closed-source LLMs is not significant: the best LLM on this task obtains only $47.98\%$, which demonstrates the challenge of EQINFER compared with other similar benchmarks (Song et al., 2023). The generally high recall with low precision of all LLMs also indicates real-world risks, e.g., when relying on LLMs to check the validity of equations in paper review.

Table 1: Various LLMs' performances on the EQINFER task (1,049 positive and 3,147 negative samples). "All-Positive" indicates a baseline that predicts all equations as positive.

| Methods | F1 | Prec. | Rec. |
| --- | --- | --- | --- |
| All-Positive | 40.00 | 25.00 | 100.00 |
| **Open-source LLMs** | | | |
| OLMo-7B (Groeneveld et al., 2024) | 13.64 | 11.93 | 15.91 |
| Mistral-7B (Jiang et al., 2023) | 28.45 | 19.28 | 54.24 |
| Mixtral-8x22B-MoE (Jiang et al., 2024) | 40.90 | 26.15 | 93.80 |
| Qwen 2.5-72B (Qwen Team, 2024) | 31.22 | 26.28 | 57.40 |
| Llama 3.1-70B (MetaAI, 2024) | 33.08 | 22.14 | 65.39 |
| **Closed-source LLMs** | | | |
| Gemini 1.5 Pro (Anil et al., 2023) | 46.74 | 32.05 | 86.27 |
| Claude 3.5 sonnet (Anthropic, 2024) | 45.13 | 29.48 | 96.18 |
| GPT-4o (OpenAI, 2024a) | 40.35 | 30.79 | 58.53 |
| o1-preview (OpenAI, 2024b) | 46.35 | 31.43 | 88.27 |
| o3-mini (OpenAI, 2025) | 47.98 | 34.34 | 79.59 |

$\mathcal{Q}$: Do more contexts boost performance? EQINFER places high demands on reasoning within the scientific context. To quantify the impact of input context length, we scale the input length (per side) from 100 to 1,500 words. As shown in Figure 4, for the open-source LLMs (Llama and Qwen), an appropriate context length can boost performance, while for GPT-4o, scaling up the context length does not contribute much to the $\mathrm{F}_1$. However, during scaling, we find that GPT-4o's precision gradually increases while its recall decreases accordingly; considering the label distribution of EQINFER, we believe precision better reflects the model's true capacity on this task. Thus, we anticipate that scaling up context will benefit strong closed-source LLMs such as GPT-4o.

# 5.2. EXPERIMENTDESIGN

Settings. Similarly, we unify the input context length of different LLMs to ensure a fair comparison. According to Table 8, we set 2,000 and 3,000 input words for open- and closed-source LLMs, respectively. Meanwhile, as experiment explanation is the subsequent task of experiment design, using model-generated experiments would propagate errors into the explanations, leading to inferior results for most LLMs. To this end, we provide LLMs with the oracle experiments when generating explanations.

Main results. Table 2 shows the main results. For the experiment design, the closed-source LLMs generally outperform open-source LLMs. However, the scores of all LLMs are relatively low $(20\% \sim 30\%)$, implying that LLMs consistently miss ground-truth experiments from the original paper (low recall), and they tend to generate

Table 2: Various LLMs' performances on the 100 instances of EXPDESIGN. 
The explanation generation is based on the oracle experiments to prevent error propagation. "Copy Input" directly copies each experiment idea as the explanation. + +
| Methods | En-F1 | En-Precision | En-Recall | S-Match | ROUGE-L | ROUGE-1 |
| --- | --- | --- | --- | --- | --- | --- |
| Copy Input | – | – | – | 40.32 | 22.06 | 25.28 |
| **Open-source LLMs** | | | | | | |
| OLMo-7B (Groeneveld et al., 2024) | 14.80 | 17.50 | 19.80 | 45.78 | 26.30 | 30.38 |
| Mistral-7B (Jiang et al., 2023) | 18.96 | 24.83 | 21.38 | 50.18 | 30.20 | 34.69 |
| Mixtral-8x22B-MoE (Jiang et al., 2024) | 23.16 | 24.45 | 30.57 | 49.07 | 29.96 | 34.53 |
| Llama 3.1-70B (MetaAI, 2024) | 22.92 | 23.10 | 29.76 | 50.05 | 29.33 | 34.11 |
| Qwen 2.5-72B (Qwen Team, 2024) | 24.28 | 22.48 | 34.44 | 51.12 | 29.46 | 34.68 |
| **Closed-source LLMs** | | | | | | |
| Gemini 1.5 Pro (Anil et al., 2023) | 27.25 | 28.66 | 34.92 | 52.87 | 28.52 | 33.80 |
| Claude 3.5 sonnet (Anthropic, 2024) | 27.99 | 24.48 | 42.09 | 53.03 | 18.75 | 26.15 |
| GPT-4o (OpenAI, 2024a) | 25.03 | 22.25 | 36.59 | 54.79 | 27.54 | 34.31 |
| o1-preview (OpenAI, 2024b) | 30.13 | 28.13 | 38.59 | 58.55 | 29.11 | 36.70 |
| o3-mini (OpenAI, 2025) | 30.17 | 28.70 | 37.67 | 54.01 | 20.71 | 29.14 |
more novel experiments that do not appear in the original paper (low precision). As for the experiment explanation, the S-Match scores of closed-source LLMs still surpass those of the open-source LLMs. Furthermore, there is a negative correlation between S-Match and ROUGE scores, where the ROUGE scores of closed-source LLMs are broadly inferior. We find that the open-source LLMs often copy terms or phrases from the given experiment, or even simply paraphrase the experiment instead of explaining it, which results in a high superficial overlap with the ground-truth explanation. This observation highlights the importance of adopting the proposed S-Match to avoid the evaluation bias of traditional generation metrics.

$\mathcal{Q}_1$: What is the quality of the model-generated novel experiments? The low En-Precision of LLMs in Table 2 indicates the creativity of LLMs in generating novel experiments. We randomly sample 15 papers from EXPDESIGN and ask 3 experts to manually review the model-generated novel experiments. Specifically, we ask the experts to judge the necessity of the novel experiments on three levels: "A" indicates the experiment is necessary/mandatory to support the main claim, "B" represents optional/supplementary experiments, and "C" covers unrelated experiments (see Appendix C.2 for evaluation details). Table 3 shows the necessity scores of the three strongest LLMs. We find that LLMs consistently generate many novel experiments, especially Claude; though most of them are optional or even fancy/unrelated experiments, a considerable number of necessary experiments are still generated, e.g., by o1. We further find that some novel experiments can be regarded as useful supplementary analyses with respect to the human-designed experiments. Table 11 shows examples of model-suggested experiments.

Table 3: The human evaluation results on the novel experiments suggested by LLMs. 
"A", "B", and "C" represent the different quality level (i.e., necessity); "A" is the best level. + +
| Models | # of novel EXP | Necessity: A (%) | Necessity: B (%) |
| --- | --- | --- | --- |
| Gemini 1.5 Pro | 59 | 30.59 | 45.76 |
| Claude 3.5 sonnet | 112 | 21.78 | 50.00 |
| o1-preview | 71 | 35.84 | 36.61 |
+ +Table 4: The impact on S-Match scores of maintaining the experiment's self-containment for EXPDESIGN. + +
| Models | One-by-One | Whole-List |
| --- | --- | --- |
| Llama 3.1-70B | 50.05 | 49.36 (↓ 0.7) |
| Qwen 2.5-72B | 51.12 | 48.56 (↓ 2.6) |
| Gemini 1.5 Pro | 52.87 | 57.48 (↑ 4.6) |
| Claude 3.5 sonnet | 53.03 | 59.11 (↑ 6.1) |
| GPT-4 | 55.03 | 56.95 (↑ 1.9) |
| GPT-4o | 54.79 | 58.54 (↑ 3.8) |
| o1-preview | 58.55 | 61.58 (↑ 3.0) |
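The two prompting regimes compared in Table 4 can be sketched as follows. This is a minimal illustration, not the paper's exact prompts: the `llm` callable and the prompt wording are placeholders.

```python
def explain_one_by_one(llm, experiments):
    """Prompt the model with each experiment in isolation,
    collecting one explanation per experiment."""
    template = "Explain the motivation of this experiment:\n{exp}"
    return [llm(template.format(exp=e)) for e in experiments]

def explain_whole_list(llm, experiments):
    """Prompt the model once with the complete, self-contained experiment
    list, so it can exploit cross-experiment relations when explaining."""
    numbered = "\n".join(f"{i}. {e}" for i, e in enumerate(experiments, 1))
    return llm("Explain the motivation of each experiment below:\n" + numbered)
```

The one-by-one variant issues one model call per experiment; the whole-list variant issues a single call over the numbered plan, which is what lets the closed-source models in Table 4 relate an experiment to its prerequisites.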
$\mathcal{Q}_2$: Can self-contained experiment design enhance the experiment explanation? When generating the explanations in Table 2, we provide LLMs with each individual experiment and let them explain one by one, because we found that, when given the whole experiment list, the open-source models explain only part of the experiments owing to their weaker instruction-following capacity. However, different experiments intuitively have semantic or logical relations, e.g., some experiments are prerequisites to others. This one-by-one prompting might therefore break the self-containment of an experiment plan. Consequently, we test "whole-list" prompting, where the LLMs are given the complete experiment list and asked to explain all experiment steps together.

Table 5: The human evaluation results on LLMs' output explanations of EXPDESIGN. "Acc. ratio" is the percentage of model outputs accepted by the annotators.

| Models | Acc. ratio (%) |
| --- | --- |
| Llama 3.1-70B | 22.93 |
| Gemini 1.5 Pro | 55.07 |
| Claude 3.5 sonnet | 61.46 |
| GPT-4o | 69.72 |
| o1-preview | 76.14 |

As shown in Table 4, unlike the open-source LLMs, the explanation performance of the closed-source LLMs generally improves after adopting whole-list prompting. According to further manual checking, once the self-containment of the experiment plan is preserved, the LLMs can refer to the other experiments and better grasp the underlying motivation of the current one.

$\mathcal{Q}_3$: Do human evaluation results align with automatic metrics for explanation? As explanations can be open-ended, we provide human evaluation results on different LLMs' experiment explanations. In detail, we randomly select 20 out of the 100 papers and ask 5 annotators to read the experiments along with each model's explanations; the annotators then decide whether each model's explanation is acceptable (see Appendix C.3 for more details). Table 5 shows the results, where the score variance is higher than in Table 2. However, the performance rankings of the two tables are perfectly correlated (Spearman's rank correlation coefficient $= 1$), demonstrating the effectiveness of S-Match.

$\mathcal{Q}_4$: Do more contexts boost performance? We also investigate the impact of input context length for EXPDESIGN. As shown in Figure 5, we scale the input pre-experiment context length from 0.1k to 10k tokens (10k is the length of the longest paper). For experiment design, more input context does improve the performance of different LLMs, but the benefit plateaus beyond 8k tokens: once the necessary information is covered, further scaling of the context becomes inefficient.
Meanwhile, the explanation-generation results reveal that LLMs primarily depend on the given experiments, rather than the paper context, to explain motivations. This is not what we expect: we hope LLMs can explain the motivation based on a thorough understanding of the paper, just as human experts do. Hence, there is still a considerable gap between LLMs and humans in grasping research motivations.

$\mathcal{Q}_5$: Does multi-modal input boost performance? Intuitively, besides the text, when designing experiments for a given research topic, figures can provide rich supplementary information, such as an algorithm illustration that helps in better understanding the research topic and its underlying motivation. Hence, we test the performance of different LMMs (Large Multimodal Models), including GPT-4o and InternVL2 (Chen et al., 2024b). Table 12 shows the ablation results on the figure data. To our surprise, the figure data does not improve the LMMs' results on this task and even harms performance. This might be due to the low informativeness of the figures: figures usually consume more input tokens but act only as supplements to the text. This points to future work on developing LMMs that can effectively leverage scientific figures.

# 5.3. PAPERWEAKNESS

Settings. Intuitively, the full paper content is necessary for paper reviewing. Therefore, instead of setting a maximum input length, in WEAKNESS we try to utilize the whole paper. As the inputs of WEAKNESS are extremely long (see Table 9), we adopt a "split-combine" method: we first split the whole paper into smaller pieces and let LLMs predict the weaknesses of each piece separately; afterwards, we merge all pieces' weaknesses into a final prediction. For the length of each piece, we set 2,000 and 3,000 words for open- and closed-source LLMs, respectively.
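The split-combine procedure can be sketched as follows; the word-based windowing and the `predict_weaknesses` callable are stand-ins for the paper's exact implementation.

```python
def split_combine(paper_text, predict_weaknesses, window=3000):
    """Split the paper into fixed-size word windows, collect the
    weaknesses predicted for each window separately, then merge all
    window-level predictions into one final list."""
    words = paper_text.split()
    weaknesses = []
    for start in range(0, len(words), window):
        piece = " ".join(words[start:start + window])
        weaknesses.extend(predict_weaknesses(piece))
    return weaknesses
```

With a 3,000-word window, a 7,000-word paper yields three model calls whose outputs are concatenated into the final weakness list.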
Additionally, in this task, we also examine the performance of AI-SCI (Lu et al., 2024), which enhances LLMs' paper-review ability by leveraging advanced prompting techniques, e.g., self-reflection (Shinn et al., 2024) and response ensembling (Wang et al., 2023).6

Main results. Table 6 shows the main results: the closed-source LLMs' overall performances are generally superior to those of the open-source LLMs. In particular, closed-source LLMs excel in S-Recall because they generate more weaknesses. However, there is still a considerable gap in weakness diversity between the LLMs and human experts.7 Compared with human reviews, most LLM-generated weaknesses are vague and lack the necessary knowledge about frontier research. Surprisingly, AI-SCI performs worse than its backbone GPT-4o, especially on ITF-IDF, which underlines the challenge of WEAKNESS: simply adopting popular prompting techniques cannot adequately address this task.

Table 6: Various LLMs' performances on the 993 instances of WEAKNESS.

| Methods | S-F1 (%) | S-Precision (%) | S-Recall (%) | Weakness Diversity ITF-IDF (↑) |
| --- | --- | --- | --- | --- |
| Human Review | – | – | – | 7.69 |
| *Open-source LLMs* | | | | |
| OLMo-7B (Groeneveld et al., 2024) | 43.25 | 40.38 | 47.04 | 2.45 |
| Mistral-7B (Jiang et al., 2023) | 42.03 | 43.80 | 40.77 | 1.17 |
| Mixtral-8x22B-MoE (Jiang et al., 2024) | 43.23 | 44.59 | 42.23 | 0.98 |
| Llama 3.1-70B (MetaAI, 2024) | 42.78 | 43.19 | 42.70 | 2.60 |
| Qwen 2.5-72B (Qwen Team, 2024) | 42.74 | 43.80 | 42.05 | 1.21 |
| *Closed-source LLMs* | | | | |
| Gemini 1.5 Pro (Anil et al., 2023) | 48.75 | 43.97 | 55.08 | 5.88 |
| Claude 3.5 sonnet (Anthropic, 2024) | 47.85 | 41.97 | 56.00 | 3.91 |
| GPT-4o (OpenAI, 2024a) | 47.73 | 42.09 | 55.48 | 5.95 |
| o1-preview (OpenAI, 2024b) | 48.62 | 42.54 | 57.08 | 5.63 |
| o3-mini (OpenAI, 2025) | 46.33 | 42.00 | 51.99 | 5.85 |
| *LLM Agent Framework* | | | | |
| AI-SCI (GPT-4o) (Lu et al., 2024) | 45.05 | 40.02 | 51.91 | 2.23 |

$Q_{1}$: Is the split-combine effective? Ideally, if the LLM has a sufficient context window, splitting the input papers for separate processing is unnecessary. Consequently, we use LLMs that accept long-context inputs to compare "split-combine" with "no-split", i.e., letting LLMs write weaknesses given the full paper. In practice, we set the maximum number of input words to $20k$, which ensures $\geq 95\%$ of the papers in WEAKNESS can be fully processed. As shown in Table 10, compared with giving the full paper context, split-combine generally yields superior performance. During manual checking, we find that, when the full paper is available, LLMs frequently neglect some important sections and omit the corresponding weaknesses, whereas split-combine ensures that the LLMs carefully brainstorm weaknesses within each smaller piece. Surprisingly, the LLMs' performance with the full paper context can be even worse than keeping only the first 3,000 words. This implies that even the current powerful long-context LLMs still fall short when processing long scientific documents.

$\mathcal{Q}_2$: Does multi-modal input boost performance? Our dataset includes both tables and figure illustrations extracted from the paper PDFs as inputs. Intuitively, when reviewing a paper, both figures and tables are critical, not only for better understanding but also because some weaknesses relate to tables/figures.8 Therefore, in Table 13, we adopt two LMMs to investigate the effectiveness of image inputs. Overall, image information, including both figures and tables, does not bring significant performance improvement: only InternVL2 gains a boost after incorporating figures, while tables slightly drop both models' results.
This is probably because the LMMs cannot reason well over the information-intensive images, especially the table images.

# 6. Conclusion

In this work, we propose AAAR-1.0, a novel benchmark targeting a comprehensive evaluation of current LLMs' capacity to assist AI research. AAAR-1.0 consists of distinct expertise-intensive tasks along with curated evaluation metrics. We collect high-quality data by employing senior AI researchers and conducting strict data examinations. Extensive experiments highlight the challenges and value of AAAR-1.0.

# Acknowledgments

The authors would like to thank Ibraheem Moosa and Sarkar Snigdha Sarathi Das for assisting in the data collection.

# Impact Statement

Our study explores whether LLMs can assist human researchers in AI research. We do not advocate for AI replacing human researchers. Instead, we stress that the primary responsibility for scientific research should remain with humans to prevent societal risks, with LLMs serving as tools to enhance research efficiency. Specifically, our work analyzes the strengths and weaknesses of LLMs to ensure researchers remain judicious in their use of these tools. Our goal is to mitigate risks while maximizing the benefits offered by LLMs. We are committed to the careful distribution of data collected in our research, ensuring it is used solely for research purposes.

# References

Almazrouei, E., Alobeidli, H., Alshamsi, A., Cappelli, A., Cojocaru, R., Debbah, M., Goffinet, E., Heslow, D., Launay, J., Malartic, Q., Noune, B., Pannier, B., and Penedo, G. Falcon-40B: an open large language model with state-of-the-art performance, 2023.
Anil, R., Borgeaud, S., Wu, Y., Alayrac, J.-B., Yu, J., Soricut, R., Schalkwyk, J., Dai, A. M., Hauth, A., Team, G., et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.
Anthropic. Introducing claude 3.5 sonnet. https://www.anthropic.com/news/claude-3-5-sonnet, June 2024.
Chamoun, E., Schlichtkrull, M., and Vlachos, A. Automated focused feedback generation for scientific writing assistance. arXiv preprint arXiv:2405.20477, 2024.
Chan, J. S., Chowdhury, N., Jaffe, O., Aung, J., Sherburn, D., Mays, E., Starace, G., Liu, K., Maksin, L., Patwardhan, T., et al. Mle-bench: Evaluating machine learning agents on machine learning engineering. arXiv preprint arXiv:2410.07095, 2024.
Chen, Z., Chen, S., Ning, Y., Zhang, Q., Wang, B., Yu, B., Li, Y., Liao, Z., Wei, C., Lu, Z., et al. Scienceagentbench: Toward rigorous assessment of language agents for data-driven scientific discovery. arXiv preprint arXiv:2410.05080, 2024a.
Chen, Z., Wang, W., Tian, H., Ye, S., Gao, Z., Cui, E., Tong, W., Hu, K., Luo, J., Ma, Z., et al. How far are we to gpt-4v? closing the gap to commercial multimodal models with open-source suites. arXiv preprint arXiv:2404.16821, 2024b.
Clark, C. and Divvala, S. Pdffigures 2.0: Mining figures from research papers. In Proceedings of the 16th ACM/IEEE-CS on Joint Conference on Digital Libraries, pp. 143-152, 2016.
Du, J., Wang, Y., Zhao, W., Deng, Z., Liu, S., Lou, R., Zou, H. P., Venkit, P. N., Zhang, N., Srinath, M., Zhang, H. R., Gupta, V., Li, Y., Li, T., Wang, F., Liu, Q., Liu, T., Gao, P., Xia, C., Xing, C., Cheng, J., Wang, Z., Su, Y., Shah, R. S., Guo, R., Gu, J., Li, H., Wei, K., Wang, Z., Cheng, L., Ranathunga, S., Fang, M., Fu, J., Liu, F., Huang, R., Blanco, E., Cao, Y., Zhang, R., Yu, P. S., and Yin, W. Llms assist NLP researchers: Critique paper (meta-)reviewing. In The 2024 Conference on Empirical Methods in Natural Language Processing, 2024. doi: 10.48550/ARXIV.2406.16253. URL https://doi.org/10.48550/arXiv.2406.16253.
Gao, Z., Brantley, K., and Joachims, T. Reviewer2: Optimizing review generation through prompt generation. arXiv preprint arXiv:2402.10886, 2024.
Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A.
H., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M. E., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N. A., and Hajishirzi, H. Olmo: Accelerating the science of language models. Preprint, 2024. +Gu, J., Ye, J., Yin, W., and Wang, G. Adaptive and explainable margin trading via large language models on portfolio management. In Proceedings of the 5th ACM International Conference on AI in Finance (ICAIF'24), 2024. +Huang, Q., Vora, J., Liang, P., and Leskovec, J. Mlagent-bench: Evaluating language agents on machine learning experimentation. In *Forty-first International Conference on Machine Learning*, 2024. +Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D. S., Casas, D. d. l., Bressand, F., Lengyel, G., Lample, G., Saulnier, L., et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023. +Jiang, A. Q., Sablayrolles, A., Roux, A., Mensch, A., Savary, B., Bamford, C., Chaplot, D. S., Casas, D. d. l., Hanna, E. B., Bressand, F., et al. Mixtral of experts. arXiv preprint arXiv:2401.04088, 2024. +Jin, Y., Zhao, Q., Wang, Y., Chen, H., Zhu, K., Xiao, Y., and Wang, J. Agentreview: Exploring peer review dynamics with llm agents. arXiv preprint arXiv:2406.12708, 2024. +Kumar, S., Ghosal, T., Goyal, V., and Ekbal, A. Can large language models unlock novel scientific research ideas? arXiv preprint arXiv:2409.06185, 2024. +Labrak, Y., Bazoge, A., Morin, E., Gourraud, P.-A., Rouvier, M., and Dufour, R. Biomistral: A collection of open-source pretrained large language models for medical domains. arXiv preprint arXiv:2402.10373, 2024. + +Li, H., Jiang, H., Zhang, T., Yu, Z., Yin, A., Cheng, H., Fu, S., Zhang, Y., and He, W. 
Traineragent: Customizable and efficient model training through llm-powered multi-agent system. arXiv preprint arXiv:2311.06622, 2023. +Li, R., Patel, T., Wang, Q., and Du, X. Mlr-copilot: Autonomous machine learning research based on large language models agents. arXiv preprint arXiv:2408.14033, 2024. +Liang, W., Zhang, Y., Cao, H., Wang, B., Ding, D. Y., Yang, X., Vodrahalli, K., He, S., Smith, D. S., Yin, Y., et al. Can large language models provide useful feedback on research papers? a large-scale empirical analysis. NEJM AI, 1(8):A1oa2400196, 2024. +Lin, C.-Y. Rouge: A Package for Automatic Evaluation of Summaries. In Text summarization branches out, pp. 74-81, 2004. +Lin, J., Yin, H., Ping, W., Lu, Y., Molchanov, P., Tao, A., Mao, H., Kautz, J., Shoeybi, M., and Han, S. Vila: On pre-training for visual language models, 2023. +Liu, N. F., Lin, K., Hewitt, J., Paranjape, A., Bevilacqua, M., Petroni, F., and Liang, P. Lost in the middle: How language models use long contexts. Transactions of the Association for Computational Linguistics, 12:157-173, 2024. +Lo, K., Shen, Z., Newman, B., Chang, J. Z., Authur, R., Bransom, E., Candra, S., Chandrasekhar, Y., Huff, R., Kuehl, B., et al. Papermage: A unified toolkit for processing, representing, and manipulating visually-rich scientific documents. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 495-507, 2023. +Lu, C., Lu, C., Lange, R. T., Foerster, J., Clune, J., and Ha, D. The AI Scientist: Towards fully automated open-ended scientific discovery. arXiv preprint arXiv:2408.06292, 2024. +MetaAI. Introducing llama 3.1: Our most capable models to date. https://ai.meta.com/blog/meta-llama-3-1/, July 2024. +Neuman, Y., Cohen, Y., and Yin, W. Identifying social norm violation in movie plots: from borat to american pie. Digit. Scholarsh. Humanit., 38(4):1636-1645, 2023. doi: 10.1093/LLC/FQAD052. URL https://doi.org/10.1093/llc/fqad052. +OpenAI. 
Hello gpt-4o. https://openai.com/index/hello-gpt-4o/, May 2024a. +OpenAI. Introducing openai o1. https://openai.com/index/introducing-openai-o1-preview/, September 2024b. + +OpenAI. Openai o3-mini. https://openai.com/index/openai-o3-mini/, January 2025. +Praskievicz, S. River classification as a geographic tool in the age of big data and global change. Geographical Review, 108(1):120-137, 2018. +Rakhimov, M., Akhmadjonov, R., and Javliev, S. Artificial intelligence in medicine for chronic disease classification using machine learning. In 2022 IEEE 16th International Conference on Application of Information and Communication Technologies (AICT), pp. 1-6. IEEE, 2022. +Reimers, N. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084, 2019. +Shinn, N., Cassano, F., Gopinath, A., Narasimhan, K., and Yao, S. Reflexion: Language agents with verbal reinforcement learning. Advances in Neural Information Processing Systems, 36, 2024. +Si, C., Yang, D., and Hashimoto, T. Can llms generate novel research ideas? a large-scale human study with $100+$ nlp researchers. arXiv preprint arXiv:2409.04109, 2024. +Song, L., Zhang, J., Cheng, L., Zhou, P., Zhou, T., and Li, I. Nlpbench: Evaluating large language models on solving nlp problems. arXiv preprint arXiv:2309.15630, 2023. +Tang, X., Liu, Y., Cai, Z., Shao, Y., Lu, J., Zhang, Y., Deng, Z., Hu, H., An, K., Huang, R., et al. Ml-bench: Evaluating large language models and agents for machine learning tasks on repository-level code. arXiv e-prints, pp. arXiv-2311, 2023. +Team, G. Google launches gemma 2, its next generation of open models. https://blog.google/technology/ developers/google-gemma-2/, Jun 2024a. +Team, Q. Qwen2.5: A party of foundation models, September 2024b. URL https://qwenlm.github.io/blog/qwen2.5/. +Wang, M., Chen, L., Fu, C., Liao, S., Zhang, X., Wu, B., Yu, H., Xu, N., Zhang, L., Luo, R., et al. 
Leave no document behind: Benchmarking long-context llms with extended multi-doc qa. arXiv preprint arXiv:2406.17419, 2024. +Wang, X., Wei, J., Schuurmans, D., Le, Q. V., Chi, E. H., Narang, S., Chowdhery, A., and Zhou, D. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. URL https://openreview.net/pdf?id=1PL1NIMMrw. + +Weng, Y., Zhu, M., Bao, G., Zhang, H., Wang, J., Zhang, Y., and Yang, L. Cycleresearcher: Improving automated research via automated review. arXiv preprint arXiv:2411.00816, 2024. +Yu, B., Baker, F. N., Chen, Z., Ning, X., and Sun, H. Llasmol: Advancing large language models for chemistry with a large-scale, comprehensive, high-quality instruction tuning dataset. arXiv preprint arXiv:2402.09391, 2024a. +Yu, H., Hong, Z., Cheng, Z., Zhu, K., Xuan, K., Yao, J., Feng, T., and You, J. Researchtown: Simulator of human research community. arXiv preprint arXiv:2412.17767, 2024b. +Zhu, M., Weng, Y., Yang, L., and Zhang, Y. Deepreview: Improving llm-based paper review with human-like deep thinking process. arXiv preprint arXiv:2503.08569, 2025. + +# Appendices + +Within this supplementary material, we elaborate on the following aspects: + +- Appendix A: Data Statistics and Diversity +- Appendix B: Implementation Details +- Appendix C: More Experiment Results and Details +- Appendix D: Data Cases and Annotation Platform Illustration +- Appendix E: Prompt Templates + +# A. Data Statistics and Diversity + +We provide the detailed data statistics of three datasets in our benchmark, as shown in Table 7, 8, and 9. We use the NLTK package to tokenize words and count the length. When calculating the length of equations, we use the pylatexenc tool to simplify the equations first. 
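The length statistics in Appendix A rely on NLTK for word counts and pylatexenc for equation simplification. A minimal standard-library stand-in (regex-based approximations of those tools, not the exact pipeline) looks like:

```python
import re

def simplify_equation(latex_eq: str) -> str:
    """Rough stand-in for pylatexenc: strip LaTeX commands and grouping
    characters so only the symbolic content contributes to the length."""
    text = re.sub(r"\\[a-zA-Z]+", " ", latex_eq)  # drop \commands
    text = re.sub(r"[{}$]", "", text)             # drop braces and math delimiters
    return re.sub(r"\s+", " ", text).strip()

def word_count(text: str) -> int:
    """Rough stand-in for NLTK's word_tokenize-based counting:
    words and punctuation marks each count as one token."""
    return len(re.findall(r"\w+|[^\w\s]", text))
```

Equation lengths in Table 7 are then simply `len(simplify_equation(eq))` in characters, while context lengths are word counts.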
Meanwhile, for WEAKNESS, we also plot the review score distribution of the papers used in the dataset, as well as the track distribution. As shown in Figure 3, our dataset has a decent distribution: the papers are uniformly distributed across 13 tracks, and most papers' scores range from 5 to 8 (i.e., most papers are weakly rejected or accepted).

Table 7: The statistics of EQINFER. Here, the "left" and "right" input contexts are the paper contexts before and after the missing equation; "pos." denotes the ground-truth equations (written by the source paper authors), while "neg." denotes the GPT-4-synthesized wrong equations.
| Statistic | Value |
| --- | --- |
| # of positive equations | 1,049 |
| # of negative equations | 3,147 |
| # of source papers | 869 |
| ave. "left" input context length (in words) | 4,377 |
| ave. "right" input context length (in words) | 6,362 |
| max "left" input context length (in words) | 24,849 |
| max "right" input context length (in words) | 32,948 |
| min "left" input context length (in words) | 711 |
| min "right" input context length (in words) | 8 |
| ave. "pos." output equation length (in characters) | 55 |
| ave. "neg." output equation length (in characters) | 48 |
| max "pos." output equation length (in characters) | 1,039 |
| max "neg." output equation length (in characters) | 306 |
| min "pos." output equation length (in characters) | 6 |
| min "neg." output equation length (in characters) | 4 |
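Given the 1:3 positive-to-negative ratio in Table 7, EQINFER can be scored as selecting the correct equation among candidates; the exact task framing and the instance schema below are assumptions for illustration, not the benchmark's released format.

```python
def eqinfer_accuracy(instances, choose):
    """Score EQINFER as candidate selection: each instance pairs one
    author-written equation with GPT-4-synthesized distractors
    (1,049 positives vs. 3,147 negatives, i.e. three per positive).
    `choose(context, candidates)` returns the model's picked index."""
    n_correct = sum(
        choose(inst["context"], inst["candidates"]) == inst["label"]
        for inst in instances
    )
    return n_correct / len(instances)
```

A random baseline under this framing would score around 25% with four candidates per instance.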
+ +# B. Implementation Details + +# B.1. Metric Details + +When calculating the metrics, specifically for the similarity-based scores, we utilize SentenceBERT (Reimers, 2019) to encode each segment (e.g., each experiment idea in the list) into a dense vector, and then calculate the cosine similarity, $^{11}$ which takes about 1GB of memory when running on a single A100 GPU. + +Table 8: The statistics of EXPDESIGN. + +
| Statistic | Value |
| --- | --- |
| # of instances | 100 |
| # of source papers | 100 |
| ave. input context length (in words) | 4,288 |
| max input context length (in words) | 9,799 |
| min input context length (in words) | 698 |
| ave. # of input figures | 2.6 |
| max # of input figures | 16.0 |
| min # of input figures | 0.0 |
| ave. length of Experiment&Explanation list | 5.7 |
| ave. length per experiment (in words) | 34.3 |
| ave. length per explanation (in words) | 27.1 |
| max length of Experiment&Explanation list | 13 |
| max length per experiment (in words) | 135 |
| max length per explanation (in words) | 89 |
| min length of Experiment&Explanation list | 2 |
| min length per experiment (in words) | 9 |
| min length per explanation (in words) | 9 |
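The similarity computation from Appendix B.1 reduces to cosine similarity between SentenceBERT segment embeddings. With the encoder abstracted away (embeddings assumed precomputed), the matching step can be sketched in NumPy; how S-Match aggregates these similarities is defined earlier in the paper, so this block only illustrates the embedding-plus-cosine core.

```python
import numpy as np

def soft_match_scores(pred_vecs, ref_vecs):
    """Cosine-similarity soft matching between predicted and reference
    segment embeddings: each prediction is matched to its most similar
    reference (precision side) and vice versa (recall side)."""
    P = np.asarray(pred_vecs, dtype=float)
    R = np.asarray(ref_vecs, dtype=float)
    P = P / np.linalg.norm(P, axis=1, keepdims=True)
    R = R / np.linalg.norm(R, axis=1, keepdims=True)
    sim = P @ R.T                        # |pred| x |ref| cosine matrix
    precision = sim.max(axis=1).mean()   # best reference per prediction
    recall = sim.max(axis=0).mean()      # best prediction per reference
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

When predictions and references coincide, all three scores reach 1.0; dissimilar segments pull the relevant maxima, and hence the scores, down.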
# B.2. LLMs Running Details

In our experiments, we utilize various LLMs, both closed- and open-source. We list the model weight sources for the open-source LLMs:

- OLMo-7B (Groeneveld et al., 2024): https://huggingface.co/allenai/OLMo-7B
- Falcon-40B (Almazrouei et al., 2023): https://huggingface.co/tiiuae/falcon-40b
- Gemma 2-27B (Gemma Team, 2024): https://huggingface.co/google/gemma-2-27b
- Mistral-7B (Jiang et al., 2023): https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3
- Mixtral-8x22B-MoE (Jiang et al., 2024): https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1
- Llama 3.1-70B (MetaAI, 2024): https://huggingface.co/meta-llama/Llama-3.1-70B
- Qwen 2.5-72B (Qwen Team, 2024): https://huggingface.co/Qwen/Qwen2.5-72B

We use vLLM to unify the inference endpoints of all the above models.12 We use PyTorch 2.4.0 with CUDA 12.1, and use 8 NVIDIA A100 GPUs for LLM inference.

Meanwhile, we use gpt-4o-2024-08-06, gpt-4-1106-preview, o1-preview-2024-09-12, gemini-1.5-pro-002, and claude-3-5-sonnet-20240620 for the closed-source LLMs. We use LiteLLM to unify the API calls to all these LLMs.13

Given the unstable performance of LLMs, particularly closed-source ones, we run each model three times during our experiments, selecting the median result from these repeated runs.

Table 9: The statistics of WEAKNESS.
| Statistic | Value |
| --- | --- |
| # of instances | 993 |
| # of source papers | 993 |
| ave. input context length (in words) | 9,811 |
| max input context length (in words) | 49,195 |
| min input context length (in words) | 24 |
| ave. # of input figures | 7.0 |
| max # of input figures | 37.0 |
| min # of input figures | 0.0 |
| ave. # of input tables | 4.3 |
| max # of input tables | 53.0 |
| min # of input tables | 0.0 |
| ave. # of reviewers per paper | 3.8 |
| max # of reviewers per paper | 9.0 |
| min # of reviewers per paper | 3.0 |
| ave. # of weaknesses per reviewer | 4.8 |
| max # of weaknesses per reviewer | 39.0 |
| min # of weaknesses per reviewer | 1.0 |
| ave. length of weakness (in words) | 39.1 |
| max length of weakness (in words) | 371.0 |
| min length of weakness (in words) | 2.0 |
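The run-three-times, keep-the-median protocol from Appendix B.2 is a one-liner with the standard library; the `evaluate` callable stands in for one full benchmark run of a model.

```python
import statistics

def median_of_runs(evaluate, n_runs=3):
    """Run an evaluation multiple times and keep the median score,
    reducing the variance caused by nondeterministic LLM outputs."""
    return statistics.median(evaluate() for _ in range(n_runs))
```

Unlike averaging, the median discards a single outlier run entirely, which matters when one of three API calls misbehaves.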
# C. More Experiment Results and Details

# C.1. Input Context Scaling Investigation

Figure 4, Figure 5, and Table 10 show the context scaling results of EQINFER, EXPDESIGN, and WEAKNESS.

Table 10: The performance comparison of different input processing methods for WEAKNESS. We use GPT-4o and GPT-4-Turbo because both accept a maximum of 128k tokens of input. We also put the results of AI-SCI in the table for reference. Here, "split-combine" splits the input paper into several pieces, where each piece's length is denoted as the "window size"; "no-split" means conventional input truncation: for example, if the window size is 3,000, only the first 3,000 words of the paper are used. According to the data statistics, a 20,000-word budget fully covers more than $95\%$ of the papers in our dataset.
| Models | Input Context Processing | Window Size (in words) | S-F1 | S-Precision | S-Recall | ITF-IDF |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-4o | split-combine | 3,000 | 47.73 | 42.09 | 55.48 | 5.95 |
| | no-split | 3,000 | 45.74 | 43.45 | 48.54 | 5.92 |
| | no-split | 20,000 | 45.47 | 42.97 | 48.51 | 6.02 |
| AI-SCI | split-combine | 3,000 | 45.05 | 40.02 | 51.91 | 2.23 |
| | no-split | 3,000 | 42.56 | 40.90 | 44.65 | 2.53 |
| | no-split | 20,000 | 42.53 | 40.75 | 44.78 | 2.58 |
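The "no-split" baseline from Table 10 can be sketched as the truncation counterpart of split-combine; as before, `predict_weaknesses` is an illustrative placeholder for one model call.

```python
def no_split(paper_text, predict_weaknesses, window=3000):
    """Conventional truncation baseline: keep only the first `window`
    words of the paper and predict the weaknesses in a single call."""
    words = paper_text.split()
    return predict_weaknesses(" ".join(words[:window]))
```

This makes the Table 10 comparison concrete: split-combine issues one call per window over the whole paper, while no-split issues a single call over a truncated prefix.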
+ +# C.2. Human Evaluation on LLM-Generated Novel Experiments + +Figure 6 illustrates the evaluation guideline for novel experiments generated by LLMs. We ask 3 senior PhD students to evaluate each paper; that is, if the first two annotators disagree with each other, a third annotator will make a final decision. Table 11 presents several human evaluation cases. + +# C.3. Human Evaluation on LLM-Generated Explanation + +We ask 5 annotators to evaluate the LLM-generated explanations. Specifically, each of them is assigned 4 or 5 papers, along with the corresponding experiment lists. For each paper, the annotator is given 5 different models' outputs (model names are anonymized), and the annotator has to decide if each LLM-generated explanation is acceptable according to the experiment. We show the human evaluation results in Table 5. + +# C.4. Multi-Modal Input Ablation + +We post the multi-modal ablation study of EXPDESIGN and WEAKNESS in Table 12 and Table 13. + +# D. Data cases and Annotation Platform Illustration + +As shown in Figure 8, 9, and 10, we show the sample cases of the three tasks in AAAR-1.0. Meanwhile, we illustrate the screenshot of our annotation platform in Figure 7. + +# E. Prompt Templates + +In this appendix, we attach all the prompts used in this work, including prompts in data collection and model prediction, as shown in Figure 11, 12, and 13. + +![](images/2fd10c8b422534bcd3e7b642b085961a9a9ffb748922dfbe471d9fac101b37da.jpg) +(a) The review score distribution of the papers used in WEAKNESS. + +![](images/d0af40cf0c1dfcf04706306a42d7846ab35d90c1a3d8597569a300e488da0cd4.jpg) +(b) The track distribution of the papers used in WEAKNESS. +Figure 3: The data diversity illustration of WEAKNESS, including the score distribution and track distribution of the papers used in our dataset. + +![](images/84a69d3a4f44bd69f28969cceb0dbc09d68d735c05a23dd2ba14caf8ae59f8ca.jpg) +Figure 4: The input context length scaling trend on the EQINFER task. 
+ +![](images/6a8b4e117ea798e50f244a82f002727290fd7e0d9aec643566e941fba07d1a59.jpg) + +![](images/38dea6385f52115a4174d2cc18977c88a9ce43adef0ecfb902c0070ec301ce06.jpg) +Figure 5: The input context length scaling trend of different LLMs on the EXPDESIGN task. + +
For each paper, you are given this paper's human-annotated experiments (Column C), along with three different models' predicted experiments (Columns D, G, J). Those model-generated experiments are all novel experiments that the original human-annotated experiments (Column C) didn't mention. Your task is to evaluate whether these novel experiments are good or not. Based on the original paper and its experiments, please rate the quality of each model-generated experiment.

- A (necessary experiment): Label an experiment with "A" if you think this experiment is necessary for this paper. A "necessary" experiment means that if the authors don't include it, the paper will very likely be rejected by the reviewers. For example, if the paper proposes a novel neural adaptor model, an ablation study is required to see whether the proposed adaptor contributes to the performance.
- B (optional experiment): Label an experiment with "B" if you think this experiment is an optional choice for this paper. For example, if a paper proposes a new metric learning algorithm, a representation space visualization is not required but can be useful for enhancing the explainability of the algorithm.
- C (unrelated experiment): Label an experiment with "C" if you think this experiment is unrelated to the core motivation of this paper, such as fancy experiments that can be omitted without any impact. Note that if a model-generated experiment is too general, such as simply suggesting an "ablation study" without any details, you can also categorise it as unrelated.

In the "Your Assessment" column, write down your assessment of the model-generated experiments. For example, if there are five novel experiments, write a list with a length of 5: [A, B, C, A, B]. Leave a comment if you are not confident in any of your ratings.
+ +Figure 6: The human guideline for evaluating the LLM-generated novel experiments. + +
ABCD
Here, I provide a suggested annotation pipeline:1. Click the PDF link (Column B, Google Drive link) and read the "Experiment" section of the paper you are going to annotate. If you are not familiar with this paper, we also encourage you to read the full paper.2. For each experiment within the "Experiment" section, try to answer the following two questions:- What experiments do you suggest doing? (column C in this sheet)- Why do you suggest these experiments? (column D in this sheet)Write the "suggestion-style" answers to the above two questions by making comments on the PDF file directly --- i.e., highlighting the related paragraphs/tables/figures (this comment location information is a crucial part of your annotation, which will be used to ask you to go to see my annotation examples for a better understanding.3. After finishing all the annotations on the PDF file, copy all your annotations into this sheet.4. Organize all the experiment suggestions into the list. For example, in columns C and D, you should write something like:1. AAA ...2. BBB ...3. CCC ...Make sure all your lists are consistent! For example, if you make 7 experiment comments in the PDF, make sure there are also 7 items in columns C and D in this sheet.I ask all of you to go to see my annotation sheet and please use the same annotation format as mine (e.g., how to write the list, how to make comments on the PDF).Other notes:Usually, we only consider the experiments in the paper's main body and exclude the appendix, unless you think the experiments in the appendix are also critical to this paper ---the author explicitly claimed the importance or frequently mentioned this experiment in the paper's main body.Paper TitlePDF LinkWhat experiments do you suggest doing?Why do you suggest these experiments?1. Few-shot instruction tuning coverage speed comparison across diff 1. To investigate whether the current LMs can truly understand the semantics2. 
+ +Figure 7: The annotation platform for collecting the annotation of EXPDESIGN. We ask annotators to first make comments on the Google Drive PDF, then move all the annotations to the online Google Doc (for further verification and discussion). + +
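The consistency requirement in the guideline above (k comments on the PDF must yield k items in both columns C and D) is mechanical and easy to check. A hypothetical sketch, assuming the `1. ... 2. ...` cell format from the guideline; the helper names are illustrative, not released tooling:

```python
import re

def parse_numbered_list(cell):
    """Split a spreadsheet cell like '1. AAA ... 2. BBB ...' into items."""
    items = re.split(r"\s*\d+\.\s+", cell.strip())
    return [item for item in items if item]  # drop the empty piece before '1.'

def check_consistency(n_pdf_comments, col_c, col_d):
    """True iff the PDF comment count matches both columns' item counts."""
    return (len(parse_numbered_list(col_c))
            == len(parse_numbered_list(col_d))
            == n_pdf_comments)

print(check_consistency(3, "1. AAA 2. BBB 3. CCC", "1. aaa 2. bbb 3. ccc"))  # True
```

A mismatch (say, 7 PDF comments but 6 list items) would surface here before the verification-and-discussion round.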
**Sample 1**

- Context Before: In this paper, we investigate what types of stereotypical information are captured by pretrained language models. We present the first dataset comprising stereotypical attributes of a range of social groups and propose a method to elicit stereotypes encoded by pretrained language models in an unsupervised fashion. Moreover, we link the emergent stereotypes to their manifestation as basic emotions as a means to study their emotional effects in a more generalized manner [...]
- Context After: We then define emotion vectors $\hat{v} \in \mathcal{R}^{10}$ for each group $TGT$ [...]
- Equation: $S_{emo}(TGT) = \sum_{i=1}^{|W_{TGT}|} w(i) \, / \, |W_{TGT}|$
- Answer: correct

**Sample 2** (same contexts as Sample 1)

- Equation: $S_{emo}(TGT) = \frac{1}{|W_{TGT}|} \sum_{w \in W_{TGT}} \mathrm{score}(w, emo)$
- Answer: incorrect
+ +Figure 8: Two sample cases of EQINFER. + +Table 11: Examples of human evaluation on the model-generated novel experiments. + +
**WiCE: Real-World Entailment for Claims in Wikipedia** (Rating: A)

Original Experiments (by human):
1. Analysis in Verification Problem Distribution: This paper should provide detailed analysis and statistics about the verification problems in the proposed dataset.
2. Off-the-shelf entailment classification performance: The authors should provide entailment classification performance of existing models on the proposed dataset without fine-tuning.
3. Human Performance: The authors should show human performance on the proposed dataset.
4. Performance of fine-tuned models: The authors should provide the performance of models fine-tuned on the proposed dataset.
5. Performance on the evidence retrieval task: The authors should show the performance on the evidence retrieval task, which is a sub-task of the proposed dataset.
6. Performance of LLMs: The authors should provide the performance of LLMs on the proposed dataset.
7. Retrieval+Entailment: Authors should provide experiments on a framework of retrieving evidence sentences and evaluating entailment using the retrieved sentences.
8. Analysis of Claim-Split on Downstream Tasks: The authors should analyze how claim-split, the proposed method, is effective on tasks other than the proposed dataset.

Novel Experiment (by LLMs): Assess model performance on WiCE without fine-tuning to test domain generalization from traditional NLI datasets.

**MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models** (Rating: B)

Original Experiments (by human):
1. Results of multiple LLMs on popular math datasets: The authors should show the performance of multiple LLMs fine-tuned on their dataset on popular math datasets.
2. Performance on open-source models with different sizes: The authors should show the performance of models with different sizes trained on the proposed dataset.
3. Comparison to SOTA closed-source models: The authors should compare the performance of open-source models trained on the proposed dataset and strong closed-source models.
4. Evaluate the effect of augmentations: The authors need to perform an ablation study to compare the different augmentation methods they proposed.
5. Analyze Training on Incorrect Answers: The authors should analyze whether wrong answers generated in data augmentation can harm the performance.
6. Evaluate other ways to increase the size of training data: The authors should evaluate other ways to increase the training data size and compare the performance with models trained on their proposed train data.
7. Error Analysis: The authors should analyze the performance of their models in different conditions (e.g., lengths of questions).

Novel Experiment (by LLMs): Prompt Sensitivity Analysis: Evaluate the sensitivity of MetaMath to different prompt formats or phrasings of mathematical questions.

**Large Language Models Cannot Self-Correct Reasoning Yet** (Rating: C)

Original Experiments (by human):
1. Self-Correction with Oracle Labels: The authors should evaluate self-correction performance with oracle labels.
2. Intrinsic Self-Correction: The authors should show performance without using the oracle labels.
3. Analysis of Mistakes in Self-Correction: The authors should analyze the properties of mistakes made in the self-correction framework.
4. Multi-Agent Debate: The authors should evaluate self-correction with multi-agent debate.
5. Prompt Design Analysis: The authors should analyze the influence of prompt design for the initial responses on self-correction performance.

Novel Experiment (by LLMs): Visualization of learned representations or attention mechanisms to provide insights into the model's inner workings.
Table 12: The figure-input ablation of EXPDESIGN. Following the setting in Table 2, the maximum text input length is 2,000 words for open-source models and 3,000 words for closed-source models. Since the closed-source GPT-4o and GPT-4 have long context windows, we use all the figures of each paper; for InternVL2, we randomly select two figures per input paper.
(Experiment Design: En-F1, En-Precision, En-Recall, S-Match; Experiment Explanation: ROUGE-L, ROUGE-1)

| Models | En-F1 | En-Precision | En-Recall | S-Match | ROUGE-L | ROUGE-1 |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-4o | 25.03 | 22.25 | 36.59 | 58.54 | 29.25 | 35.50 |
| w/ figures | 25.39 | 24.35 | 32.80 | 58.53 | 27.87 | 34.30 |
| InternVL2-26B | 24.26 | 39.50 | 14.91 | 50.03 | 29.13 | 34.26 |
| w/ figures | 15.04 | 38.50 | 8.64 | 50.29 | 29.29 | 34.06 |
+ +Table 13: The ablation study about the paper tables and figures of WEAKNESS. Based on the conclusion in Table 10, we use the "split Combine" to process the text input here (2,000 and 3,000 words context window size for open- and closed-source models). For GPT-4o, we use all the table/figure images; while for InternVL2, we randomly select two images per paper, i.e., two random figures, two random tables, or one random figure + table. + +
| Models | S-F1 | S-Precision | S-Recall | ITF-IDF |
| --- | --- | --- | --- | --- |
| GPT-4o | 47.73 | 42.09 | 55.48 | 5.95 |
| w/ tables | 46.76 | 41.32 | 54.17 | 5.53 |
| w/ figures | 46.62 | 41.20 | 54.04 | 5.48 |
| w/ tables & figures | 46.58 | 41.17 | 53.98 | 5.36 |
| InternVL2-26B | 41.91 | 41.02 | 43.28 | 1.48 |
| w/ tables | 40.55 | 40.37 | 42.91 | 1.46 |
| w/ figures | 42.88 | 42.10 | 43.76 | 1.46 |
| w/ tables & figures | 42.44 | 42.00 | 43.31 | 1.44 |
+ +
**Pre-Experiment Context (Input):**

In this paper, we show that Multilingual BERT (\mbert{}), released by \cite{devlin2018bert} as a single language model pre-trained from monolingual corpora in 104 languages, is surprisingly good at zero-shot cross-lingual model transfer, in which task-specific annotations in one language are used to fine-tune the model for evaluation in another language [...]

**Experiment Design (Output):**

1. Expand the definition of overlap. The authors should calculate overlap based on all the words shared between two languages, instead of shared vocabulary on just the entities.
2. Report performance gains for using some popular language similarity criterion, e.g., WALS.
3. Effect of tokens per word. The authors should perform experiments on more scripts, specifically looking at the effect of words being split into multiple tokens.
4. Control for vocabulary overlap among languages. Choose languages that have large vocabulary overlap and different word order features. Train on one set of languages and then perform zero-shot evaluation on the rest.
5. Ablate the effect of common word pieces by using a non-overlapping tokenizer for different languages.

**Motivation Explanation (Output):**

1. To check whether non-entity overlap between two languages also contributes to better performance on recognizing the entities. The model may use information from non-entity words to recognize an entity. Additionally, successfully recognizing that a word is not an entity also contributes to the performance on the NER task.
2. To understand which features the language model can exploit for cross-lingual transfer. This will give us insights into what typological similarity the multilingual language model can pick up during pretraining.
3. To understand the effect of POS label frequency. The idea is that two languages with a similar token-to-word ratio will result in better cross-lingual transfer. The reason is that continuation tokens should be classified properly, and a change in the frequency of continuation tokens in the training corpus will result in different performance.
4. To properly control for the effect of vocabulary overlap. Since a large overlap in vocabulary can lead to performance gains, the reported results do not reflect the true impact of word order.
5. To understand the effect of the structure of sentences in different languages for cross-lingual understanding of multilingual language models. Since there will be no overlap between different languages, the model must learn cross-lingual representations based on syntactic and semantic properties of the languages.
+ +Figure 9: A sample case of EXPDESIGN. + +
**Paper Context (Input):**

A Neural Process (NP) (Garnelo et al., 2018a;b) meta-learns a stochastic process describing the relationship between inputs and outputs in a given data stream, where each task in the data stream consists of a meta-training set of input-output pairs and also a meta-validation set. The NP then defines an implicit stochastic process whose functional form is determined by a neural network taking the meta-training set as an input [...]

**Weaknesses (Output):**

Reviewer#1:
1. The writing is not on par with the idea.

Reviewer#2:
1. It would be informative to see how MPNPs scale with higher dimensionality. For example, empirical comparisons on a high-D regression task complementing the 1-D one.
2. The results of the Lotka-Volterra task would deserve further analysis: Why is BNP/BANP seemingly more apt at dealing with misspecification than MPNPs? My understanding is that model-data mismatch is a problem general to Bayesian inference, i.e., it should also affect B(A)NP.

Reviewer#3:
1. The consistent outperformance of BNP/BANP over MPNP/MPANP weakens the central hypothesis of the paper.
2. The comparisons appear to be against relatively old versions of NPs. I wonder how the proposed method compares against more recent versions of NPs than ANPs (2018) and BNPs (2020), for instance Evidential Turing Processes (2022).
3. I find that the adaptation of the MPNP idea to CANP dilutes the main message of the paper a bit. It is, after all, a heavy pipeline with many components.
4. It is great that the paper points out the limitations of the presented method, but it would be even better if it also gave an educated guess on which properties of the method cause them.
+ +Figure 10: A sample case of WEAKNESS. + +
**LLM-based Equation Synthesis**

##### Task:
You are asked to complete the equation in an NLP paper. Given the context before and after an equation, where the equation is deleted, you should help me recover that equation.

##### Requirements:
1. Give me the latex source code of the missed equation.
2. Only give me the equation, avoid any other explanations.

##### Context Before:
{The context before the equation}

##### Context After:
{The context after the equation}

##### Equation:
{Left part of the ground truth equation}

##### Your Answer:

**LLM-based Equation Filtering**

##### Task:
You are given a source code of a latex equation. Based on your knowledge regarding Machine Learning and NLP, you should help me identify if this equation has an obvious flaw.

##### Equation:
{equation}

##### Your Answer:

**Model Prediction**

##### Task:
You are given the latex source code of the context before and after an equation in an NLP paper, while this equation is masked. Your task is to identify the correctness of the given candidate equation. Only provide either 'Correct' or 'Wrong'. Avoid any explanations.

##### Context Before:
{The context before the equation}

##### Context After:
{The context after the equation}

##### Equation:
{equation}

##### Your Answer:
+ +Figure 11: The prompts used in EQINFER, including both data collection and model prediction. + +
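Prompts like the model-prediction one above are templates with named slots filled per instance. A minimal sketch of how such a template could be instantiated; the constant name and field names are illustrative assumptions, not from the released code:

```python
# Hypothetical template mirroring the model-prediction prompt of EQINFER.
EQINFER_PROMPT = (
    "##### Task:\n"
    "You are given the latex source code of the context before and after an "
    "equation in an NLP paper, while this equation is masked. Your task is to "
    "identify the correctness of the given candidate equation. "
    "Only provide either 'Correct' or 'Wrong'. Avoid any explanations.\n"
    "##### Context Before:\n{context_before}\n"
    "##### Context After:\n{context_after}\n"
    "##### Equation:\n{equation}\n"
    "##### Your Answer:"
)

prompt = EQINFER_PROMPT.format(
    context_before=r"We then define emotion vectors ...",
    context_after=r"where each group is scored ...",
    equation=r"S_{emo}(TGT) = \dots",
)
print(prompt.splitlines()[-1])  # ##### Your Answer:
```

Keeping the slots named (rather than positional) makes it harder to swap the before/after contexts by accident.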
**LLM-based Leaking Sentence Deletion**

You are given a sentence (or a short paragraph) from an ML paper, along with a list of the experiments from this paper; help me decide whether this sentence discusses any experiments in the list. Let's say, if one sentence includes clues for coming up with any experiments in the list, we call this sentence a 'leaking sentence'; otherwise, if any experiment ideas cannot be inferred from the sentence, we call it a 'non-leak sentence'. Please give me a '1' if this sentence is a 'leaking sentence'; otherwise, give me a '0'.

### Experiment List: {The experiment list}.

### Sentence: {The sentence}.

Now, give me your decision (give me either '0' or '1', only the number, without any explanations):

**Model Prediction (Experiment Design)**

You are partially given an ML paper (in latex), including some useful sections (e.g., 'abstract' and 'introduction') having some basic introductions to the research of this paper, where all the 'experiment' related sections are deleted. Please first help me carefully read these sections and try to understand the motivations of this research, such as 'what the authors are trying to propose/demonstrate?' and 'what are the main contributions/differences of this paper from others?' Then, based on your in-depth understanding of this paper, imagine that you are the authors of this paper; what experiments do you have to conduct to prove your research? Namely, you have to **recover the deleted experiments** by providing me with **a list of experiment ideas**, where the list briefly summarizes the experiments the authors should conduct.

Here is an example: {few-shot examples}

Here is the target ML paper (partial content): {The context input}.

Now, based on this paper, give me a list of experiments the author has to do. Please only give me the list, without any other words.

### Your Experiment List:

**Model Prediction (Motivation Explanation)**

You are partially given an NLP paper (in latex), including some useful sections (e.g., 'abstract' and 'introduction') having some basic introductions to this research, where all the 'experiment' related sections are deleted. Meanwhile, you are also given a list of experiments that try to predict the missed experiments in this paper. Now, imagine the experiment list you created; you have to explain **why you suggested these experiments**.

Here is an example experiment list: {few-shot examples}

Here is the example corresponding explanation list: {few-shot examples}

Now, help me look at the following paper:

### Paper: {The context input}.

### Experiment List: {The experiment list}.

Please give me your explanation list, which should be the same length as the 'Experiment List'; the items of the two lists correspond one-to-one. Only give me the list without any other useless words.

### Explanation List:
+ +Figure 12: The prompts used in EXPDESIGN, including both data collection and model prediction. + +
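The leaking-sentence prompt above is applied sentence by sentence to scrub experiment clues from the paper context before prediction. Schematically, assuming a filtering loop like the following; the LLM call is stubbed with a keyword check, and all names are illustrative:

```python
def is_leaking(sentence, experiment_list):
    """Stub for the LLM judgment: a real pipeline would send the
    'leaking sentence' prompt and parse the returned '0'/'1'."""
    return any(exp.lower() in sentence.lower() for exp in experiment_list)

def scrub_context(sentences, experiment_list):
    """Keep only non-leaking sentences as the model's paper context."""
    return [s for s in sentences if not is_leaking(s, experiment_list)]

experiments = ["ablation study on instruction components"]
context = [
    "We propose a new instruction-tuning dataset.",
    "Our ablation study on instruction components shows large gains.",
]
print(scrub_context(context, experiments))
```

Only the first sentence survives here; the second would let a model trivially "recover" the ablation experiment, which is exactly the leakage the prompt is designed to remove.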
Model Prediction (Weaknesses)
You are given an NLP paper, along with its figure illustrations. Imagine you are a machine learning expert with rich research experience. Please carefully review this paper and identify the weaknesses of this research.
Here is the paper (it might be in partial content):
{The context input}
Now, based on the provided context, give me a list of weaknesses of this research paper (such as '1. XXX\n2. XXX', one point per line). Note that if the given context is irrelevant to research, such as it is talking about 'acknowledgement', just generate 'No research content'. Please either give me the weakness list of this research paper or generate 'No research content' to clarify this is not a research paper, without any other words.
Your Answer:
+ +Figure 13: The prompts used in WEAKNESS. \ No newline at end of file diff --git a/aaar10assessingaispotentialtoassistresearch/images.zip b/aaar10assessingaispotentialtoassistresearch/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..fa7f5d6fd30ff7efcc4a01d74ad46f8135d29579 --- /dev/null +++ b/aaar10assessingaispotentialtoassistresearch/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4d68d8ce0541bfd100bc43dbcbdf45dc2cf4f1e0c3a28c36241a3dabf7a5db0b +size 2114005 diff --git a/aaar10assessingaispotentialtoassistresearch/layout.json b/aaar10assessingaispotentialtoassistresearch/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..85ed0abec83329ecd204436ed4e66fe3be8a6779 --- /dev/null +++ b/aaar10assessingaispotentialtoassistresearch/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5109a201ce296c857e2808c91f984400b10c298fdeaedfa00e98829b63dd5cad +size 539383 diff --git a/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/be605c98-e986-4916-a1cf-5a2cf4d89930_content_list.json b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/be605c98-e986-4916-a1cf-5a2cf4d89930_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..dd140247ea3854f6def66e14e9249c999b451769 --- /dev/null +++ b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/be605c98-e986-4916-a1cf-5a2cf4d89930_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:672207b0ab3f9251544a74055040f1b97e714aee6def49ff0ecc720fdd50b794 +size 123947 diff --git a/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/be605c98-e986-4916-a1cf-5a2cf4d89930_model.json b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/be605c98-e986-4916-a1cf-5a2cf4d89930_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..fac0924fe872aa4740ee2cab887ba629a443773a --- /dev/null +++ b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/be605c98-e986-4916-a1cf-5a2cf4d89930_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:13dc41e252e1be2190432c77945e1f50ef08ab05323986fadbe6c50ddfd31be7 +size 147979 diff --git a/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/be605c98-e986-4916-a1cf-5a2cf4d89930_origin.pdf b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/be605c98-e986-4916-a1cf-5a2cf4d89930_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..031cfd568ddddcaf272a0dd607e5f0a2a889b43c --- /dev/null +++ b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/be605c98-e986-4916-a1cf-5a2cf4d89930_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:359664e9e82636cddac56e31ee8dc1a4687b0ec5c42f8b36311a469125670446 +size 591272 diff --git a/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/full.md b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c5b858b8cba58d4e3029c9bb9eb80fc7565a1ba3 --- /dev/null +++ b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/full.md @@ -0,0 +1,494 @@ +# Ab Initio Nonparametric Variable Selection for Scalable Symbolic Regression with Large $p$ + +Shengbin Ye1 2 Meng Li1 + +# Abstract + +Symbolic regression (SR) is a powerful technique for discovering symbolic expressions that characterize nonlinear relationships in data, gaining increasing attention for its interpretability, compactness, and robustness. 
However, existing SR methods do not scale to datasets with a large number of input variables (referred to as extreme-scale SR), which is common in modern scientific applications. This "large $p$" setting, often accompanied by measurement error, leads to slow performance of SR methods and overly complex expressions that are difficult to interpret. To address this scalability challenge, we propose a method called PAN+SR, which combines a key idea of ab initio nonparametric variable selection with SR to efficiently pre-screen large input spaces and reduce search complexity while maintaining accuracy. The use of nonparametric methods eliminates model misspecification, supporting a strategy called parametric-assisted nonparametric (PAN). We also extend SRBench, an open-source benchmarking platform, by incorporating high-dimensional regression problems with various signal-to-noise ratios. Our results demonstrate that PAN+SR consistently enhances the performance of 19 contemporary SR methods, enabling several to achieve state-of-the-art performance on these challenging datasets.

$^{1}$ Department of Statistics, Rice University, Houston, TX, USA $^{2}$ Department of Statistics and Data Science, Northwestern University, Evanston, IL, USA. Correspondence to: Meng Li .

Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).

# 1. Introduction

Symbolic regression (SR) is a mathematical technique for finding a symbolic expression that matches data from an unknown function. An early example of SR dates back to the 1600s when Johannes Kepler used astronomical data to discover that Mars' orbit was elliptical. This discovery, along with Kepler's other parsimonious and analytically tractable laws of planetary motion, helped launch a scientific revolution.
+ +With the recent progress in theoretical modeling and experimental instrumentation, researchers have entered a new era of big data. The development of SR models is particularly important, as they have emerged as a powerful tool for developing machine learning models that are intelligible, interpretable, and compact. Unlike large numerical models, the mathematical expressions used in SR models enable an easy understanding of their behavior, making them valuable in fields such as physics, where they can connect newly discovered physical laws with theory to facilitate subsequent theoretical developments (Wu & Tegmark, 2019). Moreover, SR models offer a safe and responsible option for machine learning applications with high societal stakes, such as those related to human lives, as they are well-suited for human interpretability and in-depth analysis. As such, SR models have found successful applications across a range of fields, including astrophysics (Lemos et al., 2023), chemistry and materials science (Hernandez et al., 2019; Liu et al., 2020; 2022), control (Derner et al., 2020), economics (Verstyuk & Douglas, 2022), mechanical engineering (Kronberger et al., 2018), medicine (Virgolin et al., 2020), and space exploration (Märtens & Izzo, 2022), among others (Matsubara et al., 2024). + +SR literature has traditionally focused on datasets with low-dimensional inputs, often with $p \leq 10$ , and primarily considered only relevant variables—those used in the ground truth (La Cava et al., 2021; Kamienny et al., 2022; Shojaee et al., 2023; Tenachi et al., 2023; Li et al., 2024). In these settings, variable selection has not been critical, as SR has largely been viewed as an optimization problem under low-noise conditions. 
However, modern scientific applications increasingly involve datasets with far larger numbers of variables ($p = 102$ to $459$ in this work), often including irrelevant variables, rendering variable selection a critical yet underexplored concept in SR pipelines.

While variable selection is a well-established topic in statistics, its adoption in SR has been limited and its effectiveness remains unclear. Existing approaches, such as random forest (RF)-based pre-selection in PySR (Cranmer, 2023), have demonstrated limited utility. Indeed, the PySR documentation explicitly notes that options like select_k_features are rarely used, suggesting that current methods are not well-suited to SR tasks. This observation is further supported by our analysis in Appendix D.2, where RF is shown to perform unsatisfactorily. The limited performance of off-the-shelf methods like RF highlights the unique challenges of variable selection in the context of SR. Unlike typical variable selection tasks, SR variable selection demands a near-zero false negative rate (FNR), as excluding even a single relevant variable from the search space prevents the recovery of the true underlying function. While false positives (FPs) primarily increase computational burden, they do not fundamentally impede the discovery of the underlying model. This asymmetry in performance requirements explains why standard methods often fall short and underscores the importance of designing variable selection methods specifically tailored to SR.

In this paper, we introduce a versatile framework, PAN+SR, for improving SR methods at extreme scales. PAN+SR leverages the Parametric Assisted by Nonparametrics (PAN) strategy (Ye et al., 2024) for an ab initio screening of a large influx of input variables before expression synthesis, enabling SR tasks at extreme scales.
In light of the unique challenge of SR pre-screening, we propose a novel non-parametric variable selection method designed to minimize FNs; we refer to this method as PAN throughout this paper. Furthermore, to evaluate PAN+SR at extreme scales, we extend the open-source SR benchmarking database, SRBench (La Cava et al., 2021), with high-dimensional problems containing white noise at various signal-to-noise ratios. In Section 6, we showcase the performance uplift of 19 contemporary SR methods under PAN+SR. The PAN+SR framework is available as an open-source project at https://github.com/mattsheng/PAN_SR.

# 2. Background and Motivation

Given a dataset $(\pmb{y},\pmb{X})$ with target $\pmb{y} \in \mathbb{R}^n$ and features $\pmb{X} = (\pmb{x}_1,\dots,\pmb{x}_p) \in \mathbb{R}^{n\times p}$, SR assumes the existence of an analytical data-generating function that links $\pmb{X}$ to $\pmb{y}$:

$$
y_i = f_0(x_{i1}, \dots, x_{ip}) + \varepsilon_i, \quad \text{for} \quad i = 1, \dots, n, \tag{1}
$$

in the presence of observation noise $\varepsilon_{i}$. The goal of SR is to recover the unknown regression function $f_{0}(\cdot)$ symbolically. For example, consider regressing the gravitational force between two objects, $F$, on their masses $(m_{1}, m_{2})$ and the distance between their centers $(r)$. An SR algorithm would ideally re-discover Newton's Law of Universal Gravitation, $F = 6.6743 \times 10^{-11} \cdot m_{1}m_{2} / r^{2}$. This is typically done by randomly constructing mathematical expressions using the features, $\mathbf{X} = (m_{1}, m_{2}, r)$ in this case, and a set of mathematical operations, e.g., $\mathcal{O} = \{+, -, \times, \div, \exp, \log, \cdot^{2}\}$. Even for this low-dimensional problem, it has been shown that exploring all expressions $\mathcal{F}(\mathbf{X}, \mathcal{O})$, induced by $\mathbf{X}$ and $\mathcal{O}$, is NP-hard (Virgolin & Pissis, 2022).
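To make the size of this search space concrete, a toy enumeration of expression skeletons over the three gravitation features shows how fast the candidate count grows with expression depth (the operator set and growth rule here are illustrative assumptions, not the paper's search procedure):

```python
# Toy illustration (not the paper's algorithm): count expression skeletons
# over X = (m1, m2, r) built from an assumed small operator set.
feats = ["m1", "m2", "r"]
unary = ["exp", "log", "sq"]           # sq(x) stands for x^2
binary = ["+", "-", "*", "/"]

def grow(exprs):
    """All expressions obtained by applying one more operator."""
    return ([f"{u}({e})" for u in unary for e in exprs]
            + [f"({a} {b} {c})" for a in exprs for b in binary for c in exprs])

level1 = feats           # bare features
level2 = grow(level1)    # one operator deep
level3 = grow(level2)    # two operators deep
print(len(level1), len(level2), len(level3))  # 3 45 8235
```

With $N$ expressions at one level, the next level has $uN + bN^2$ (here $u = 3$ unary and $b = 4$ binary operators), so the count is squared at every step; adding irrelevant features inflates the base of this growth directly.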
Hence, typical SR algorithms only traverse a small subset of the full search space, e.g., by limiting the complexity of the candidate SR models, the total runtime, or the number of mathematical operations.

In realistic scientific applications, particularly in the era of big data, scientists often include as many intuitively reasonable features as possible, many of which may be irrelevant to the target $\mathbf{y}$. This practice causes the search space $\mathcal{F}(X, \mathcal{O})$ to expand double-exponentially quickly (Ye et al., 2024), making it extremely challenging—if not impossible—to recover $f_0(\cdot)$ using algorithmic approaches alone. To this end, we propose the PAN+SR framework, which integrates the non-parametric module of PAN as a model-based pre-screening step. This framework excludes irrelevant features prior to applying SR methods, thereby mitigating the explosion of the search space in high-dimensional problems. Here, we assume that a high-dimensional SR problem in (1) can be reduced to

$$
y_i = f_0\left(\boldsymbol{X}_{i, S_0}\right) + \varepsilon_i, \quad \text{for} \quad i = 1, \dots, n, \tag{2}
$$

where only a small subset $S_0$ of $p_0 = |\mathcal{S}_0| \ll p$ features exerts influence on $\pmb{y}$. Then the oracle search space $\mathcal{F}(X_{\mathcal{S}_0},\mathcal{O})$ is a significantly smaller subspace of the full search space $\mathcal{F}(\boldsymbol{X},\mathcal{O})$. Thus, the successful identification of $S_0$, or at least a superset of $S_0$, is critical for reducing high-dimensional SR problems into manageable low-dimensional ones. With this reduction, the dataset $(\pmb{y},\pmb{X}_{\mathcal{S}_0})$ becomes sufficient for discovering $f_{0}(\cdot)$, enabling SR methods to handle high-dimensional problems without requiring any modifications to their algorithms.

# 3. 
Related Work

SRBench (La Cava et al., 2021) is a reproducible and open-source benchmarking platform for SR that has made significant strides in the field through its curation of 122 real-world datasets and 130 ground-truth problems and its comprehensive evaluations of 14 contemporary SR methods. SRBench has quickly gained adoption, with numerous studies leveraging it to evaluate accuracy, exact solution rate, and solution complexity (Kamienny et al., 2022; Landajuela et al., 2022; Kamienny et al., 2023; Keren et al., 2023; Shojaee et al., 2023; Makke & Chawla, 2024). Despite its widespread use, SRBench primarily focuses on low-dimensional problems, which limits its applicability in the context of high-dimensional problems, a hallmark of the era of big data. In particular, the 130 ground-truth problems from the Feynman Symbolic Regression Database (Udrescu & Tegmark, 2020) and the ODE-Strogatz repository (Strogatz, 2015) contain only the oracle features $X_{S_0}$ with at most $p = 9$ features. This low and narrow dimensional scope leaves SRBench less suited for analyzing SR at extreme scales, underscoring the need for a high-dimensional SR database.

# 4. Method

Inspired by PAN, the PAN+SR framework utilizes a one-step nonparametric variable selection strategy to pre-screen a high-dimensional dataset $(\pmb{y},\pmb{X})$ and pass the reduced dataset $(\pmb{y},\pmb{X}_{\widehat{\mathcal{S}}})$ to SR methods for subsequent expression synthesis and selection. Unlike the traditional variable selection literature, where the primary focus is controlling the false discovery rate, the PAN criterion calls for minimizing the false negative rate (FNR), while controlling the false positive rate (FPR) is secondary. In other words, the selected set of features $\widehat{\mathcal{S}}$ should be a superset of $S_0$ and as small as possible.
When $\widehat{\mathcal{S}}$ fails to be a superset of $S_0$ (i.e., there is at least one FN), the reduced search space $\mathcal{F}(\pmb{X}_{\widehat{\mathcal{S}}},\mathcal{O})$ no longer contains $f_0(\cdot)$, rendering any subsequent discovery based on $\pmb{X}_{\widehat{\mathcal{S}}}$ false.

Nonparametric or model-free variable selection has been extensively studied in the literature. Lafferty and Wasserman (2008) propose the RODEO method for nonparametric variable selection through regularization of the derivative expectation operator. Candès et al. (2018) propose a model-free knockoff procedure controlling FDR with no assumptions on the conditional distribution of the response. Fan et al. (2011) propose a sure independence screening method for B-spline additive models. In the Bayesian literature, Bleich et al. (2014) design permutation tests for the variable inclusion proportions of Bayesian Additive Regression Trees (BART); Liu et al. (2021) deploy spike-and-slab priors directly on the nodes of Bayesian forests.

Despite this diverse array of methods, few meet the unique requirements of the PAN criterion. Among the recent methods investigated in Ye et al. (2024), BART-G.SE (Bleich et al., 2014), a BART-based permutation variable selection method, was found to be particularly suitable for PAN. However, our comprehensive simulation study in Appendix D.2 reveals that BART-G.SE, along with three other methods, exhibits insufficient TPR, particularly under noisy or low-sample-size conditions. This deficiency renders these methods unsuitable for the PAN+SR framework.

In this paper, we introduce a novel BART-based variable selection method and demonstrate its PAN criterion consistency through an extensive simulation study in Section 6.2.
The key idea behind BART is to model the regression function $f_0(\cdot)$ by a sum of regression trees,

$$
\boldsymbol{y} = \sum_{i=1}^{M} \mathcal{T}_i(\boldsymbol{x}_1, \dots, \boldsymbol{x}_p) + \boldsymbol{\varepsilon}, \quad \boldsymbol{\varepsilon} \sim \mathcal{N}_n(\boldsymbol{0}, \sigma^2 \boldsymbol{I}_n), \tag{3}
$$

where each regression tree $\mathcal{T}_i(\boldsymbol{x}_1,\ldots,\boldsymbol{x}_p)$ partitions the feature space based on the values of $\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{p}$. For each posterior sample, we calculate the proportion of splits in the ensemble (3) that use $\boldsymbol{x}_{j}$ as the splitting variable, for $j = 1,\dots,p$. The variable inclusion proportion (VIP) $q_{j}$ of $\boldsymbol{x}_{j}$ is then estimated as the posterior mean of these proportions across all posterior samples (Chipman et al., 2010). Intuitively, $q_{1},\ldots,q_{p}$ encode the relative importance of each feature: a large VIP $q_{j}$ suggests that $\boldsymbol{x}_{j}$ is an important driver of the response $\pmb{y}$. However, deciding how large a VIP must be to indicate relevance remains a challenge. For instance, BART-G.SE addresses this by running a permutation test on $q_{1},\ldots,q_{p}$ to identify significant features, thereby controlling the family-wise error rate.

Here, we propose an alternative approach that utilizes the rankings of the VIPs instead of their raw values. Specifically, let $r_j$ denote the ranking of the VIP $q_j$. Relevant features $X_{\mathcal{S}_0}$ are expected to occupy the top-ranking positions, namely $\{1, \ldots, p_0\}$, due to their strong associations with $\pmb{y}$. In contrast, irrelevant features $X_{\mathcal{S}_1}$, $\mathcal{S}_1 = [p] \setminus \mathcal{S}_0$, are expected to appear in the lower-ranking positions, namely $\{p_0 + 1, \ldots, p\}$, since they are selected only sporadically or by chance (Chipman et al., 2010; Bleich et al., 2014). Consequently, a natural decision rule is to select feature $\boldsymbol{x}_j$ if $r_j$ falls within $\{1, \ldots, p_0\}$.
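To make the ranking-based rule concrete, the following sketch ranks simulated VIPs over $K$ runs and, since $p_0$ is unknown in practice, applies the multi-run averaging and two-cluster AHC refinement developed in the remainder of this section. The simulated VIPs stand in for real BART fits; all numbers are illustrative:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import rankdata

rng = np.random.default_rng(0)
p, p0, K = 204, 4, 20  # total features, relevant features, independent runs

# Simulated VIPs for K runs: in a real pipeline these come from K independent
# BART fits. Relevant features receive clearly larger inclusion proportions.
vips = rng.uniform(0.001, 0.01, size=(K, p))
vips[:, :p0] += rng.uniform(0.1, 0.2, size=(K, p0))

# Rank features within each run (rank 1 = largest VIP), then average over runs.
ranks = rankdata(-vips, method="ordinal", axis=1)
r_bar = ranks.mean(axis=0)

# Two-cluster agglomerative hierarchical clustering (Euclidean, average
# linkage) on the average rankings; keep the cluster with the smaller mean.
labels = fcluster(linkage(r_bar[:, None], method="average"), t=2, criterion="maxclust")
means = {c: r_bar[labels == c].mean() for c in (1, 2)}
keep = min(means, key=means.get)
selected = np.flatnonzero(labels == keep)
print(selected.tolist())  # the four relevant features: [0, 1, 2, 3]
```

No knowledge of $p_0$ and no tunable threshold enters the selection: the cut into two clusters and the "smaller mean ranking" rule do all the work.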
However, this decision rule is impractical in real-world applications since the sparsity $p_0$ is unknown. To address this limitation, we propose a method that leverages multiple independent runs of BART to estimate the feature rankings more robustly. Let $r_{j,k}$ denote the VIP ranking of $\boldsymbol{x}_j$ in the $k$th run. Assume that the rankings of $\boldsymbol{x}_j$ are randomly distributed over the $K$ independent runs (see Appendix D.1 for empirical justification):

$$
r_{j,1}, \ldots, r_{j,K} \stackrel{\text{iid}}{\sim}
\begin{cases}
\operatorname{Unif}(\{1, \ldots, p_0\}), & \text{if } j \in \mathcal{S}_0, \\
\operatorname{Unif}(\{p_0 + 1, \ldots, p\}), & \text{if } j \notin \mathcal{S}_0.
\end{cases}
$$

Then the average rankings $\bar{r}_j = \sum_{k=1}^{K} r_{j,k} / K$ of $\boldsymbol{x}_j$ across the $K$ independent runs form two distinct clusters: $\mathcal{C}_0$ for $X_{\mathcal{S}_0}$ and $\mathcal{C}_1$ for $X_{\mathcal{S}_1}$. Specifically, the $\bar{r}_j$ for $X_{\mathcal{S}_0}$ are expected to cluster in $\mathcal{C}_0$ with mean $(1 + p_0)/2$, while those for $X_{\mathcal{S}_1}$ tend to cluster in $\mathcal{C}_1$ with mean $(p_0 + 1 + p)/2$. Although both cluster means are unknown due to the unknown sparsity $p_0$, their separation can be identified using clustering techniques.

To illustrate, consider the extended Feynman I-38-12 dataset (defined in Section 5.2) with $p = 204$ features, of which $p_0 = 4$ are relevant. Without loss of generality, we assume that the relevant features $X_{\mathcal{S}_0}$ are $\boldsymbol{x}_1,\boldsymbol{x}_2,\boldsymbol{x}_3,\boldsymbol{x}_4$, i.e., $\mathcal{S}_0 = \{1,2,3,4\}$ and $\mathcal{S}_{1} = \{5,\dots,204\}$. When $K = 20$ independent BART models are trained on the dataset, the rankings $r_{1,k},r_{2,k},r_{3,k},r_{4,k}$ frequently fall within $\{1,2,3,4\}$ across all $k = 1,\ldots,20$ runs.
This is because the relevant features are frequently selected for tree splits due to their strong associations with the response variable $\pmb{y}$, leading to high VIPs and consistently top rankings. In contrast, the irrelevant features $\boldsymbol{x}_5,\dots,\boldsymbol{x}_{204}$ are included only sporadically in BART, with $r_{5,k},\dots,r_{204,k}$ distributed randomly across $\{5,\dots,204\}$. As evident in Figure 5 in Appendix D.1, the average VIP rankings $\bar{r}_j$ of the relevant features form a low-mean cluster $\mathcal{C}_0$ with a cluster mean of $(1 + p_0)/2 = 2.5$, while those of the irrelevant features form a high-mean cluster $\mathcal{C}_1$, concentrating around $(p_0 + 1 + p)/2 = 104.5$.

However, the sparse regression setting naturally leads to a class imbalance problem, as $|\mathcal{C}_0| = p_0$ is much smaller than $|\mathcal{C}_1| = p - p_0$. To accommodate this imbalance, we propose to apply agglomerative hierarchical clustering (AHC) with Euclidean distance and average linkage to $(\bar{r}_1, \dots, \bar{r}_{p})$ and cut the dendrogram to form two clusters, $\widehat{\mathcal{C}}_0$ and $\widehat{\mathcal{C}}_1$. Features in $\widehat{\mathcal{C}}_0$ are then retained, while those in $\widehat{\mathcal{C}}_1$ are discarded. Notably, this data-driven selection criterion requires neither knowledge of the sparsity level $p_0$ nor a tunable selection threshold. An ablation study evaluating the effect of different clustering algorithms on selection accuracy is available in Appendix D.3. We herein refer to this variable selection method for SR pre-screening as PAN; see Appendix C.2 for implementation details.

# 5. Experiment Design

Using an open-source benchmarking platform, SRBench, we evaluate the PAN+SR framework on two separate tasks. First, we assess its ability to make accurate predictions on "black-box" regression problems, in which the underlying regression function remains unknown.
Second, we test PAN+SR's ability to find the correct data-generating function $f_{0}$ on synthetic datasets with known data-generating functions originating from the Feynman Lectures on Physics (Feynman et al., 2010; Udrescu & Tegmark, 2020).

The experiment settings are summarized in Table 1. All experiments were run on a heterogeneous cluster. Each algorithm was trained on each dataset in 10 repeated trials, with a different random state controlling both the train/test split and the seed of the algorithm. Each run was terminated after a 24-hour time limit or after 500,000 expression evaluations for black-box problems (1,000,000 for ground-truth problems), whichever came first. For ground-truth problems, we chose a few representative algorithms from the black-box experiments and investigated additional settings of sample size and signal-to-noise ratio. Datasets were split $75\%/25\%$ into training and testing sets. For black-box problems, hyperparameters were either set to the optimal values published by SRBench or to values recommended by the original authors of the respective methods. The best hyperparameter settings from the black-box regression problems were reused for the ground-truth problems. Instructions for reproducing the experiments are available in Appendix A, and detailed experimental settings are described in Appendix C.

# 5.1. Symbolic Regression Methods

Here we summarize the SR methods evaluated in this paper. A long strand of SR methods is based on genetic programming (GP), a technique for evolving executable data structures such as expression trees. The most vanilla version we test is gplearn (Stephens, 2020), which proposes random expressions and iterates through the steps of tournament selection, mutation, and crossover.
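To ground the GP terminology, here is a minimal, self-contained toy GP loop in the spirit of the vanilla approach (expression trees, tournament selection, subtree crossover and mutation). It is our illustrative sketch, not gplearn's implementation, and the target function, operator set, and size cap are toy choices:

```python
import copy
import random

# Toy vanilla-GP symbolic regression: trees over {add, sub, mul}, tournament
# selection, subtree crossover/mutation, elitism, and a crude size cap.
random.seed(0)

OPS = {"add": lambda a, b: a + b, "sub": lambda a, b: a - b, "mul": lambda a, b: a * b}
XS = [i / 10 for i in range(-10, 11)]
YS = [x * x + x for x in XS]  # toy ground truth f0(x) = x^2 + x

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.7 else round(random.uniform(-1, 1), 2)
    return [random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1)]

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mse(tree):
    return sum((evaluate(tree, x) - y) ** 2 for x, y in zip(XS, YS)) / len(XS)

def size(tree):
    return 1 if not isinstance(tree, list) else 1 + size(tree[1]) + size(tree[2])

def paths(tree, prefix=()):  # addresses of all subtrees
    yield prefix
    if isinstance(tree, list):
        yield from paths(tree[1], prefix + (1,))
        yield from paths(tree[2], prefix + (2,))

def get(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def graft(tree, path, subtree):  # replace the node at `path` with `subtree`
    if not path:
        return subtree
    get(tree, path[:-1])[path[-1]] = subtree
    return tree

def crossover(a, b):  # swap a random subtree of a copy of `a` for one of `b`
    child = copy.deepcopy(a)
    donor = copy.deepcopy(get(b, random.choice(list(paths(b)))))
    return graft(child, random.choice(list(paths(child))), donor)

def tournament(pop, fits, k=5):
    return pop[min(random.sample(range(len(pop)), k), key=lambda i: fits[i])]

pop = [random_tree() for _ in range(200)]
for _ in range(20):
    fits = [mse(t) for t in pop]
    elite = pop[min(range(len(pop)), key=lambda i: fits[i])]
    nxt = [copy.deepcopy(elite)]  # elitism: best-so-far loss never increases
    while len(nxt) < len(pop):
        a, b = tournament(pop, fits), tournament(pop, fits)
        child = crossover(a, b)
        if random.random() < 0.2:  # subtree mutation: graft a fresh random tree
            child = graft(child, random.choice(list(paths(child))), random_tree(2))
        nxt.append(child if size(child) <= 60 else copy.deepcopy(a))  # bloat cap
    pop = nxt

best_mse = min(mse(t) for t in pop)
print(round(best_mse, 4))
```

The advanced methods surveyed next replace pieces of this loop: the selection rule (e.g., $\varepsilon$-lexicase in EPLEX), the objective (e.g., Pareto trade-offs between error and size in AFP), or the variation operators (e.g., semantic back-propagation in SBP-GP).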
Advanced GP-based methods utilize different evolutionary strategies and optimization objectives, ranging from Pareto optimization for efficient trade-offs between accuracy and model complexity, to program-semantics optimization for increasing coherence in expressions. Here we test an array of advanced GP-based SR algorithms, including Age-Fitness Pareto optimization (AFP) (Schmidt & Lipson, 2010), AFP with co-evolved fitness estimates (AFP_FE) (Schmidt & Lipson, 2010), Epigenetic Hill Climber (EHC) (La Cava et al., 2014), $\varepsilon$-lexicase selection (EPLEX) (La Cava et al., 2019a), Feature Engineering Automation Tool (FEAT) (La Cava et al., 2019b), Fast Function Extraction (FFX) (McConaghy, 2011), the GP version of the Gene-pool Optimal Mixing Evolutionary Algorithm (GP-GOMEA) (Virgolin et al., 2021), Interaction-Transformation Evolutionary Algorithm (ITEA) (de Franca & Aldeia, 2021), Multiple Regression Genetic Programming (MRGP) (Arnaldo et al., 2014), Operon (Burlacu et al., 2020), PySR (Cranmer, 2023), and Semantic Back-propagation Genetic Programming (SBP-GP) (Virgolin et al., 2019).

Additional methods include Bayesian Symbolic Regression (BSR) (Jin et al., 2020), which places a prior on the expression tree; Deep Symbolic Regression (DSR) (Petersen et al., 2021), Unified Deep Symbolic Regression (uDSR) (Landajuela et al., 2022), and Dynamic Symbolic Network (DySymNet) (Li et al., 2024), which utilize recurrent neural networks to propose symbolic expressions; Transformer-based Planning for Symbolic Regression (TPSR) (Shojaee et al., 2023), which leverages pretrained transformer models; and AIFeynman 2.0 (Udrescu et al., 2020), which uses a divide-and-conquer technique to recursively decompose complex problems into lower-dimensional sub-problems.

Table 1: Settings used in the experiments.
| Setting | Black-box problems | Ground-truth problems |
| --- | --- | --- |
| # of datasets | 35 | 100 |
| # of algorithms | 19 | 19 |
| # of trials per dataset | 10 | 10 |
| Train/test split | .75/.25 | .75/.25 |
| Termination criteria | 500K evaluations or 24 hours | 1M evaluations or 24 hours |
| Sample size | All | 500, 1000, 1500, 2000 |
| Signal-to-noise ratio | None | 0.5, 1, 2, 5, 10, 15, 20, None |
| Total comparisons | 12,250 | 142,000 |
| Computation cost | 34K core hours | 104K core hours |
| Memory allocation | 16 GB | 16 GB |
+ +# 5.2. Datasets + +We curated a database of high-dimensional regression problems for testing the capability of PAN+SR. We selected 35 black-box regression problems available in PMLB v1.0 (Romano et al., 2021) using the following criteria: $n < 200$ and $p \geq 10$ or $n \geq 200$ and $p \geq 20$ . These problems were used in SRBench and overlap with various open-source repositories, including OpenML (Vanschoren et al., 2014) and the UCI Machine Learning Repository (Kelly et al., 2013). + +We also curated 100 high-dimensional ground-truth regression problems by modifying the Feynman Symbolic Regression Database (Udrescu & Tegmark, 2020) to include irrelevant features and white noise. For each equation $f_{0}(\cdot)$ in the Feynman Lectures on Physics, we generated the relevant features $X_{S_0}$ following Udrescu and Tegmark (2020): + +$$ +\left(x _ {1, j}, \dots , x _ {n, j}\right) \stackrel {\text {i d}} {\sim} \operatorname {U n i f} \left(a _ {j}, b _ {j}\right), \quad \text {f o r} 1 \leq j \leq p _ {0}, \tag {4} +$$ + +where $p_0 = |\mathcal{S}_0|$ is the number of relevant features, $n$ is the sample size, and $a_j$ and $b_j$ are the lower and upper bounds for feature $x_j$ described in Udrescu and Tegmark (2020). To study the effect of noise on PAN+SR, we tuned the signal-to-noise ratio (SNR) by adding a Gaussian error term when generating the response variable: + +$$ +y _ {i} = f _ {0} \left(x _ {i, 1}, \dots , x _ {i, p _ {0}}\right) + \varepsilon_ {i}, \quad \text {f o r} 1 \leq i \leq n, \tag {5} +$$ + +where $\varepsilon_{i}\stackrel {\mathrm{iid}}{\sim}N(0,\sigma_{\varepsilon}^{2})$ $\sigma_{\varepsilon}^{2} = \sigma_{f}^{2} / \mathrm{SNR}$ . When $\sigma_{\varepsilon}^{2} = 0$ or $\mathrm{SNR} = \infty$ (4) and (5) generate the original Feynman Symbolic Regression Database. 
In addition to the relevant features $X_{\mathcal{S}_0} = (\boldsymbol{x}_1,\dots,\boldsymbol{x}_{p_0})$, we included an array of irrelevant features $X_{\mathrm{irr}}$, reflecting the era of big data in which all reasonable features are included in the dataset. Specifically, for each relevant feature $\boldsymbol{x}_j$, $j \in \mathcal{S}_0$, we generate $(\boldsymbol{x}_{j,\mathrm{irr}}^{1},\ldots,\boldsymbol{x}_{j,\mathrm{irr}}^{s}) \stackrel{\mathrm{iid}}{\sim} \mathrm{Unif}(a_j,b_j)$, representing $s$ copies of independent and irrelevant features drawn from the same distribution as $\boldsymbol{x}_j$. The final feature matrix is then $\pmb{X} = [X_{\mathcal{S}_0}, \pmb{X}_{\mathrm{irr}}^1, \dots, \pmb{X}_{\mathrm{irr}}^{p_0}] \in \mathbb{R}^{n \times p}$, where $\pmb{X}_{\mathrm{irr}}^{j} = (\pmb{x}_{j,\mathrm{irr}}^{1},\dots,\pmb{x}_{j,\mathrm{irr}}^{s}) \in \mathbb{R}^{n \times s}$ is the irrelevant feature matrix induced by the $j$th relevant feature, for $j = 1,\ldots,p_0$, totaling $p = p_0(1 + s)$ features. In Section 6.2, we fix $s = 50$, so the total number of features is $p = 51p_{0}$. Additional dataset information and the sampling process are available in Appendix B.

Besides the 3,200 distinct simulation settings described in Table 1 (100 datasets, 8 SNRs, and 4 sample sizes), we include additional simulation settings in Appendix D.4 to further assess PAN+SR's behavior under alternative feature structures. These include (1) additive noise in features, (2) duplicated features, and (3) correlated features.

# 5.3. Metrics

Predictive Accuracy We assessed predictive accuracy using the coefficient of determination, defined as

$$
R^{2} = 1 - \frac{\sum_{i=1}^{n} (y_{i} - \widehat{y}_{i})^{2}}{\sum_{i=1}^{n} (y_{i} - \bar{y})^{2}}.
$$

Model Complexity In line with SRBench, we define model complexity as the total number of mathematical operators, features, and constants in the model.
To avoid redundancy, symbolic models are first simplified using SymPy (Meurer et al., 2017), a Python library for symbolic mathematics.

Solution Criteria For ground-truth regression problems, we follow SRBench's definition of a symbolic solution. A model $\widehat{f}(\mathbf{X})$ is considered a solution to the SR problem of $y = f_0(\mathbf{X}) + \varepsilon$ if $\widehat{f}(\mathbf{X})$ does not reduce to a constant and (1) $\widehat{f} - f_0 = a$ for some $a \in \mathbb{R}$ or (2) $\widehat{f} / f_0 = b$ for some $b \neq 0$. That is, the predicted model $\widehat{f}$ differs from the true model $f_0$ only by an additive or a multiplicative constant.

While predictive accuracy can be influenced by the simulation design, the symbolic solution criterion offers a more reliable metric for assessing whether an SR method can uncover the true data-generating process. However, since SymPy's simplification process is not always optimal, it is possible that some symbolic solutions are not identified in the process.

![](images/df19edc3dd5d2fa4e40af17968a474e2c660aa821715059a27220190d39d86fd.jpg)
Figure 1: Results on the black-box regression problems. Points indicate the mean test set performance and bars represent the $95\%$ confidence intervals. Training time for PAN+SR includes the runtime of PAN, which averages only 74.14 seconds.

Feature Usage Accuracy The irrelevant features present a unique challenge for SR methods in identifying the correct data-generating model $f_{0}$. When the predicted model $\widehat{f}$ includes irrelevant features (FPs), it cannot be considered a symbolic solution to $f_{0}$. Conversely, if $\widehat{f}$ excludes some relevant features (FNs), it also fails to meet the symbolic solution criteria. Although neither the FPR nor the FNR corresponds directly to the symbolic solution rate, they can provide insight into why $\widehat{f}$ does not qualify as a symbolic solution.

# 6. Results

# 6.1. Black-box Datasets

Figure 1 shows that PAN+SR consistently improves test set $R^2$ across 18 out of 19 SR algorithms, with the largest gains observed in lower-performing methods such as BSR, AIFeynman, and ITEA. For the top-performing SR algorithms, the improvements are more modest due to the natural upper limit of $R^2$, but the uplift remains significant. For instance, PAN boosted uDSR from 14th to 5th place in the overall ranking and to 2nd among the standalone SR methods. Furthermore, these $R^2$ improvements are not accompanied by increased model complexity. In some cases, PAN+SR even reduces model complexity, enhancing both parsimony and interpretability.

In addition to accuracy gains, PAN+SR significantly reduces training times for several SR algorithms, including SBP-GP, uDSR, AFP_FE, AIFeynman, and BSR. Notably, AIFeynman, the second-slowest SR algorithm, achieves a 5-fold speedup (from 71,250 seconds to 13,997 seconds), while uDSR benefits from nearly a 3-fold speedup (from 7,628 seconds to 2,612 seconds) with PAN pre-screening. The computational overhead introduced by PAN is minimal, averaging only 74.14 seconds on a single core. As PAN relies on independent MCMC chains, this overhead can be further reduced through parallel processing, making PAN+SR both efficient and scalable.

# 6.2. Ground-truth Datasets

Figure 2 summarizes performance on the ground-truth regression problems with $n = 1000$, $\mathrm{SNR} = \infty$, and $s = 50$. Methods are sorted by their standalone $R^2$ on the test set. PAN+SR consistently improves both $R^2$ and the solution rate across all 19 SR methods. Due to the high dimensionality of the ground-truth problems, standalone AIFeynman encountered out-of-memory errors and failed to complete any of the 1,000 runs. However, PAN significantly improves AIFeynman's performance, lifting it from last place to 2nd overall in symbolic solution rate.
Furthermore, PAN consistently outperforms all other nonparametric variable selection methods tested, achieving the highest TPR compared with four other methods and delivering the best $R^2$ when paired with SR, as detailed in Appendix D.2. This underscores the effectiveness and necessity of nonparametric pre-screening in high-dimensional SR problems.

![](images/c96dea3f493277694dfca69bb4fb92eb304fb9ea3b7fdb3cd8e0822766859dbe.jpg)
Figure 2: Results on the ground-truth regression problems with $n = 1000$, $\mathrm{SNR} = \infty$, and $s = 50$. Points indicate the mean test set performance and bars represent the $95\%$ confidence intervals. Training time for PAN+SR includes the runtime of PAN, which averages 325 seconds. AIFeynman fails to complete any run in the standalone setting.

Similar to our findings on the black-box regression problems, this performance gain is not driven by increased model size, and PAN's average computational overhead of 325 seconds remains insignificant relative to the runtime of many SR methods. Remarkably, uDSR benefited from nearly a 6-fold speedup with PAN (from 9,573 seconds to 1,596 seconds) while almost doubling its solution rate (from $36.6\%$ to $71.8\%$), making it the best performer in solution rate. Additionally, PAN elevated several mid-tier performers, such as Operon, AFP_FE, AFP, and EHC, enabling them to surpass the 4th-place method, GP-GOMEA, in the standalone SR solution rate ranking.

Beyond the specific simulation setting of $n = 1000$ and $\mathrm{SNR} = \infty$, we also investigated the sensitivity of PAN+SR across a range of sample sizes and SNRs. In particular, we evaluated PAN+SR with all combinations of sample size $n \in \{500, 1000, 1500, 2000\}$ and $\mathrm{SNR} \in \{0.5, 1, 2, 5, 10, 15, 20, \infty\}$.
Given the extreme computational burden, we select Operon, the best-performing algorithm on the black-box regression problems, as the SR module for the sensitivity analysis.

Figure 3a demonstrates that both Operon and PAN+Operon maintain consistently low FPRs across all settings of $n$ and SNR, with negligible differences between them. This low FPR reflects the rare inclusion of irrelevant features in the final symbolic models. In noisy settings, we notice a significant increase in PAN's FPR, from $0\%$ at $\mathrm{SNR} = \infty$ to over $30\%$ at $\mathrm{SNR} = 0.5$. While this noise sensitivity could be a concern for typical variable selection applications, it is crucial to emphasize that PAN's primary objective is to scale up SR methods by reliably identifying a superset of the relevant features $\mathcal{S}_0$. In this context, minimizing FNs during pre-screening is more critical than avoiding FPs.

Figure 3b illustrates that PAN achieves a near-$0\%$ FNR across most simulation settings, highlighting its ability to identify a superset of the true feature set $\mathcal{S}_0$. This is crucial to ensure that the pre-screened dataset $(\pmb{y}, \pmb{X}_{\widehat{\mathcal{S}}})$ used for subsequent SR modeling is comprehensive enough to generate the correct expression $f_0$. However, in the most extreme case, where $n = 500$ and $\mathrm{SNR} = 0.5$, PAN's FNR rises to over $5\%$, and caution is advised when relying on PAN in such cases. On the other hand, standalone Operon often fails to include all relevant features in its final models across all $n$ and SNR settings, while PAN consistently lowers Operon's FNR, enhancing its chance of identifying the true function $f_0$. Even with PAN, Operon fails to achieve the best-case FNR set by PAN, particularly under noisy conditions. This elevated FNR negatively impacts Operon's solution rate.
For example, when the SNR changes from $\infty$ to 10, PAN+Operon's average solution rate drops from $27.4\%$ to $0\%$, and Operon's solution rate falls from $18.1\%$ to $0\%$. As La Cava et al. (2021) noted, this limitation persists even when Operon is provided with only the relevant features $X_{\mathcal{S}_0}$ and under favorable conditions ($n = 100{,}000$ and $\mathrm{SNR} = 100$), indicating that the issue lies beyond PAN pre-screening. Other performance metrics from this sensitivity analysis are available in Appendix D.5.

![](images/1b7cfd6d459b96f4d0a782ec4c29ea6a47ea0be90ed506deb1b9af20e8067915.jpg)
(a) False positive rate (FPR).

![](images/a3de78410505074771e5fde7ec6d9f4af1e3fd09b68f54a6a2c2582ab6395248.jpg)
(b) False negative rate (FNR).

Figure 3: FPR and FNR of Operon, PAN+Operon, and PAN on the ground-truth datasets. PAN refers to the proposed selection method in Section 4. Points indicate the mean performance and bars represent the $95\%$ confidence intervals.

![](images/27515e8327076f5493f89a6536781c2b2ef150b5f60b3f04f8085eaf08a38936.jpg)
Figure 4: Results of selected methods on the ground-truth problems with $n = 1000$, $\mathrm{SNR} \in \{\infty, 10\}$, and $s = 50$. Points indicate the mean test set performance and bars represent the $95\%$ confidence intervals.

Beyond Operon, we also evaluated several top-performing SR methods on the ground-truth problems using $n = 1000$ and $\mathrm{SNR} \in \{\infty, 10\}$. As shown in Figure 4, PAN+SR consistently improves SR methods across all SNR levels, though all SR methods and their PAN-boosted variants become less accurate at $\mathrm{SNR} = 10$, indicating the challenge posed by noise. In particular, GP-GOMEA performs similarly to Operon, with its solution rate dropping to $0\%$ at $\mathrm{SNR} = 10$ for both the standalone and PAN-boosted variants.
The best-performing SR algorithm, uDSR, also exhibits vulnerability to noise, with its PAN-boosted solution rate falling from $71.8\%$ to $7.4\%$. Surprisingly, PAN significantly benefits DSR, the weakest SR algorithm in Figure 4, increasing its solution rate from $8.2\%$ to $14.9\%$ at $\mathrm{SNR} = 10$ and from $8.9\%$ to $25.8\%$ at $\mathrm{SNR} = \infty$. These findings highlight the fundamental challenges noise introduces to SR algorithms. To date, SR algorithms have been developed predominantly for noiseless or high-SNR settings, even for "small $p$" problems. We expect that iterative application of the proposed variable selection method, similar to Ye et al. (2024), along with careful consideration of the challenges in extreme-scale SR, could improve performance in low-SNR settings. This will be explored in future work.

# 7. Discussion

In this paper, we introduce PAN+SR, a novel framework designed to address the scalability challenges faced by SR methods when applied to high-dimensional datasets. The growing prevalence of big data necessitates tools capable of efficiently handling such complexity, and PAN+SR addresses this need by integrating a nonparametric pre-screening mechanism with SR. This integration enables the framework to focus the model search on a relevant subset of features, reducing the computational burden and improving accuracy.

The core innovation of PAN+SR lies in its nonparametric variable selection method, which filters the input dataset to reduce dimensionality before applying SR. A key challenge in this process is minimizing the risk of false negatives (FNs), where relevant features are mistakenly excluded. Such omissions can critically impair SR methods, as the success of SR depends on having access to the true feature set. To address this issue, we developed a variable selection method designed to ensure that the selected features
Our approach leverages the characteristics of VIP rankings derived BART, providing a tuning-free, data-driven variable selection criterion capable of retaining relevant features while excluding irrelevant ones. By preserving a comprehensive set of candidate features, PAN+SR maximizes the likelihood of identifying the true underlying model. + +We evaluated PAN+SR across a diverse set of datasets, including 35 high-dimensional real-world datasets from the PMLB database and 100 modified simulated datasets based on the Feynman Symbolic Regression Database. The results were highly promising: PAN+SR improved the performance of 18 out of 19 SR methods on real datasets and all 19 methods on simulated datasets when noise is absent. These findings underscore the framework's potential to enhance the robustness and scalability of SR methods across diverse datasets. + +In addition, we explored the sensitivity of $\mathrm{PAN} + \mathrm{SR}$ to varying sample sizes and SNR. Our analysis demonstrated that the performance gains achieved by $\mathrm{PAN} + \mathrm{SR}$ are consistent across different sample sizes and remain robust in the presence of noise. Like our extended Feynman database, SDSR (Matsubara et al., 2024) augments the original Feynman database with irrelevant features, bringing the synthetic benchmarks closer to real-world scientific process. However, SDSR adds only 1-3 irrelevant variables, while our setup introduces 100-450 irrelevant variables, posing a substantially more challenging test for both variable selection and symbolic regression. Nonetheless, SDSR rectifies several physical inconsistencies present in the original Feynman benchmark, such as a more realistic treatment of constants and integer-valued variables, and a more careful specification of sampling ranges. Our investigation extends beyond ground-truth datasets by incorporating black-box datasets, thereby mitigating, to some extent, the limitations inherent in purely simulated data. 
Still, we view SRSD as a valuable and complementary benchmark and plan to incorporate its refinements in future evaluations. In summary, PAN+SR provides a significant step forward in enabling SR methods to handle the complexities of modern datasets, offering improved performance and scalability across a wide range of applications.

# Impact Statement

This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.

# References

Arnaldo, I., Krawiec, K., and O'Reilly, U.-M. Multiple regression genetic programming. In Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, GECCO '14, pp. 879-886, New York, NY, USA, 2014. Association for Computing Machinery.

Bleich, J., Kapelner, A., George, E. I., and Jensen, S. T. Variable selection for BART: an application to gene regulation. Annals of Applied Statistics, 8(3):1750-1781, 09 2014.

Burlacu, B., Kronberger, G., and Kommenda, M. Operon C++: an efficient genetic programming framework for symbolic regression. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, GECCO '20, pp. 1562-1570, New York, NY, USA, 2020. Association for Computing Machinery.

Candès, E., Fan, Y., Janson, L., and Lv, J. Panning for Gold: 'Model-X' Knockoffs for High Dimensional Controlled Variable Selection. Journal of the Royal Statistical Society Series B: Statistical Methodology, 80(3):551-577, 01 2018.

Chipman, H. A., George, E. I., and McCulloch, R. E. BART: Bayesian additive regression trees. Annals of Applied Statistics, 4(1):266-298, 03 2010.

Cranmer, M. Interpretable Machine Learning for Science with PySR and SymbolicRegression.jl. arXiv:2305.01582, 2023.

de Franca, F. O. and Aldeia, G. S. I. Interaction-transformation evolutionary algorithm for symbolic regression.
Evolutionary Computation, 29(3):367-390, 09 2021.

Derner, E., Kubalík, J., Ancona, N., and Babuška, R. Constructing parsimonious analytic models for dynamic systems via symbolic regression. Applied Soft Computing, 94:106432, 2020.

Dick, G. Genetic programming, standardisation, and stochastic gradient descent revisited: initial findings on srbench. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO '22, pp. 2265-2273, New York, NY, USA, 2022. Association for Computing Machinery.

Fan, J., Feng, Y., and Song, R. Nonparametric independence screening in sparse ultra-high-dimensional additive models. Journal of the American Statistical Association, 106(494):544-557, 2011.

Feynman, R. P., Leighton, R. B., and Sands, M. The Feynman Lectures on Physics. Basic Books, New York, NY, 2010.

Friedman, J. H. Multivariate Adaptive Regression Splines. The Annals of Statistics, 19(1):1-67, 1991.

Hernandez, A., Balasubramanian, A., Yuan, F., Mason, S. A. M., and Mueller, T. Fast, accurate, and transferable many-body interatomic potentials by symbolic regression. npj Computational Materials, 5(1):112, November 2019.

Jin, Y., Fu, W., Kang, J., Guo, J., and Guo, J. Bayesian Symbolic Regression. arXiv:1910.08892, 2020.

Kamienny, P.-A., d'Ascoli, S., Lample, G., and Charton, F. End-to-end symbolic regression with transformers. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 10269-10281. Curran Associates, Inc., 2022.

Kamienny, P.-A., Lample, G., Lamprier, S., and Virgolin, M. Deep generative symbolic regression with Monte-Carlo-tree-search. In Proceedings of the 40th International Conference on Machine Learning, ICML'23, pp. 15655-15668. JMLR.org, 2023.

Kelly, M., Longjohn, R., and Nottingham, K. The UCI Machine Learning Repository, 2013.

Keren, L. S., Liberzon, A., and Lazebnik, T.
A computational framework for physics-informed symbolic regression with straightforward integration of domain knowledge. Scientific Reports, 13(1):1249, January 2023. +Kronberger, G., Kommenda, M., Promberger, A., and Nickel, F. Predicting friction system performance with symbolic regression and genetic programming with factor variables. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO '18, pp. 1278-1285, New York, NY, USA, 2018. Association for Computing Machinery. +La Cava, W., Spector, L., Danai, K., and Lackner, M. Evolving differential equations with developmental linear genetic programming and epigenetic hill climbing. In Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation, GECCO Comp '14, pp. 141-142, New York, NY, USA, 2014. Association for Computing Machinery. +La Cava, W., Helmuth, T., Spector, L., and Moore, J. H. A Probabilistic and Multi-Objective Analysis of Lexicase Selection and $\varepsilon$ -Lexicase Selection. Evolutionary Computation, 27(3):377-402, September 2019a. +La Cava, W., Singh, T. R., Taggart, J., Suri, S., and Moore, J. Learning concise representations for regression by evolving networks of trees. In International Conference on Learning Representations, 2019b. + +La Cava, W., Orzechowski, P., Burlacu, B., de Franca, F., Virgolin, M., Jin, Y., Kommenda, M., and Moore, J. Contemporary symbolic regression methods and their relative performance. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, volume 1, 2021. +Lafferty, J. and Wasserman, L. Rodeo: Sparse, greedy nonparametric regression. The Annals of Statistics, 36(1): 28-63, 2008. +Landajuela, M., Lee, C. S., Yang, J., Glatt, R., Santiago, C. P., Aravena, I., Mundhenk, T., Mulcahy, G., and Petersen, B. K. A unified framework for deep symbolic regression. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. 
(eds.), Advances in Neural Information Processing Systems, volume 35, pp. 33985-33998. Curran Associates, Inc., 2022. +Lemos, P., Jeffrey, N., Cranmer, M., Ho, S., and Battaglia, P. Rediscovering orbital mechanics with machine learning. Machine Learning: Science and Technology, 4(4):045002, October 2023. +Li, W., Li, W., Yu, L., Wu, M., Sun, L., Liu, J., Li, Y., Wei, S., Yusong, D., and Hao, M. A neural-guided dynamic symbolic network for exploring mathematical expressions from data. In Salakhutdinov, R., Kolter, Z., Heller, K., Weller, A., Oliver, N., Scarlett, J., and Berkenkamp, F. (eds.), Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pp. 28222-28242. PMLR, 21-27 Jul 2024. +Liu, C.-Y., Zhang, S., Martinez, D., Li, M., and Senftle, T. P. Using statistical learning to predict interactions between single metal atoms and modified MgO (100) supports. npj Computational Materials, 6(1):102, 2020. +Liu, C.-Y., Ye, S., Li, M., and Senftle, T. P. A rapid feature selection method for catalyst design: Iterative Bayesian additive regression trees (iBART). The Journal of Chemical Physics, 156(16), 2022. +Liu, Y., Ročková, V., and Wang, Y. Variable selection with ABC Bayesian forests. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 83(3):453-481, 04 2021. +Makke, N. and Chawla, S. Interpretable scientific discovery with symbolic regression: a review. Artificial Intelligence Review, 57(1):2, January 2024. +Märtens, M. and Izzo, D. Symbolic regression for space applications: Differentiable cartesian genetic programming powered by multi-objective memetic algorithms. arXiv:2206.06213, 2022. + +Matsubara, Y., Chiba, N., Igarashi, R., and Ushiku, Y. Rethinking symbolic regression datasets and benchmarks for scientific discovery. Journal of Data-centric Machine Learning Research, 2024. +McConaghy, T. FFX: Fast, Scalable, Deterministic Symbolic Regression Technology, pp. 
235-260. Springer New York, New York, NY, 2011.
Meurer, A., Smith, C. P., Paprocki, M., Čertík, O., Kirpichev, S. B., Rocklin, M., Kumar, A., Ivanov, S., Moore, J. K., Singh, S., Rathnayake, T., Vig, S., Granger, B. E., Muller, R. P., Bonazzi, F., Gupta, H., Vats, S., Johansson, F., Pedregosa, F., Curry, M. J., Terrel, A. R., Roučka, Š., Saboo, A., Fernando, I., Kulal, S., Cimrman, R., and Scopatz, A. SymPy: symbolic computing in Python. PeerJ Computer Science, 3:e103, January 2017.
Petersen, B. K., Larma, M. L., Mundhenk, T. N., Santiago, C. P., Kim, S. K., and Kim, J. T. Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients. In International Conference on Learning Representations, 2021.
Romano, J. D., Le, T. T., La Cava, W., Gregg, J. T., Goldberg, D. J., Chakraborty, P., Ray, N. L., Himmelstein, D., Fu, W., and Moore, J. H. PMLB v1.0: an open-source dataset collection for benchmarking machine learning methods. Bioinformatics, 38(3):878-880, October 2021.
Schmidt, M. D. and Lipson, H. Age-fitness pareto optimization. In Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation, GECCO '10, pp. 543-544, New York, NY, USA, 2010. Association for Computing Machinery.
Shojaee, P., Meidani, K., Barati Farimani, A., and Reddy, C. Transformer-based planning for symbolic regression. In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and Levine, S. (eds.), Advances in Neural Information Processing Systems, volume 36, pp. 45907-45919. Curran Associates, Inc., 2023.
Stephens, T. gplearn: Genetic Programming in Python. https://github.com/trevorstephens/gplearn, 2020.
Strogatz, S. H. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. CRC Press, 2015.
Tenachi, W., Ibata, R., and Diakogiannis, F. I. Deep symbolic regression for physics guided by units constraints: Toward the automated discovery of physical laws.
The Astrophysical Journal, 959(2):99, December 2023. +Udrescu, S.-M. and Tegmark, M. AI Feynman: A physics-inspired method for symbolic regression. Science Advances, 6(16):eaay2631, 2020. + +Udrescu, S.-M., Tan, A., Feng, J., Neto, O., Wu, T., and Tegmark, M. AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 4860-4871. Curran Associates, Inc., 2020. +Vanschoren, J., van Rijn, J. N., Bischl, B., and Torgo, L. Openml: networked science in machine learning. SIGKDD Explor. Newsl., 15(2):49-60, June 2014. +Verstyuk, S. and Douglas, M. R. Machine learning the gravity equation for international trade. Available at SSRN 4053795, 2022. +Virgolin, M. and Pissis, S. P. Symbolic regression is NP-hard. Transactions on Machine Learning Research, 2022. +Virgolin, M., Alderliesten, T., and Bosman, P. A. N. Linear scaling with and within semantic backpropagation-based genetic programming for symbolic regression. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO '19, pp. 1084-1092, New York, NY, USA, 2019. Association for Computing Machinery. +Virgolin, M., Wang, Z., Alderliesten, T., and Bosman, P. A. N. Machine learning for the prediction of pseudorealistic pediatric abdominal phantoms for radiation dose reconstruction. Journal of Medical Imaging, 7(4):046501, 2020. +Virgolin, M., Alderliesten, T., Witteveen, C., and Bosman, P. A. N. Improving Model-Based Genetic Programming for Symbolic Regression of Small Expressions. Evolutionary Computation, 29(2):211-237, 06 2021. +Wu, T. and Tegmark, M. Toward an artificial intelligence physicist for unsupervised learning. Phys. Rev. E, 100: 033311, September 2019. +Ye, S., Senftle, T. P., and Li, M. Operator-Induced Structural Variable Selection for Identifying Materials Genes. 
Journal of the American Statistical Association, 119(545):81-94, 2024.

# A. Reproducing the Experiment

The experiment made use of an existing symbolic regression (SR) benchmarking platform, SRBench (La Cava et al., 2021), with changes made to support additional functionality, including signal-to-noise ratio (SNR) tuning, feature pre-screening, and variable usage accuracy calculation. The README file in our GitHub repository https://github.com/mattsheng/PAN_SR details the complete set of commands for reproducing the experiment. Here, we provide a short summary of the experiment process. Experiments are launched from the experiments/ folder via the script analyze.py. After installing and configuring the conda environment provided by SRBench, the complete black-box experiment on standalone SR methods can be started via the following command:

```shell
python analyze.py /path/to/pmlb/ \
    -results ./results/blackbox/SR/ \
    -n_trials 10 \
    -time_limit 24:00 \
    -tuned -skip_tuning
```

To enable PAN pre-screening, users can either specify the path to a pre-run variable selection result or run the pre-screening in place. The first option is useful when comparing different SR methods on the same dataset:

```shell
python analyze.py /path/to/pmlb \
    -results ../results/blackbox/SR_BART_VIP \
    -n_trials 10 \
    -time_limit 24:00 \
    -vs_method BART_VIP \
    -vs_result_path ../results/blackbox/pmlb_BART_VIP_withidx.feather \
    -vs_idx_label idx_hclst \
    -tuned -skip_tuning
```

If no path is given to -vs_result_path, the PAN pre-screening will be run in place.
Similarly, the ground-truth experiment for the standalone SR methods on Feynman datasets with a sample size of $n = 1000$ and an SNR of 10 can be run by the following command:

```shell
python analyze.py /path/to/feynman \
    -results ../results_feynman/SR \
    -signal_to_noise 10 \
    -n 1000 \
    -sym_data \
    -n_trials 10 \
    -time_limit 24:00 \
    -tuned -skip_tuning
```

Note that -sym_data enables additional performance metric calculations only available for ground-truth problems. To run PAN pre-screening only on the Feynman datasets with a sample size of $n = 1000$ and an SNR of 10, we can use the following command:

```shell
python analyze.py /path/to/feynman \
    -script BART_selection \
    -ml BART_VIP \
    -results ./results_feynman/BART_VIP/n_1000/ \
    -signal_to_noise 10 \
    -n 1000 \
    -sym_data \
    -n_trials 10 \
    -rep 20 \
    -time_limit 24:00
```

The -rep 20 argument instructs the program to run $K = 20$ replications of BART for estimating the variable ranking $r_{j,k}$ of the $j$th feature at the $k$th run. Users can apply other variable selection methods by modifying the BART_selection.py script.

# B. Additional Dataset Information

PMLB datasets Black-box datasets and their metadata are available from PMLB under an MIT license and are described in detail in Romano et al. (2021). In this experiment, we focus only on high-dimensional regression datasets available from PMLB. Specifically, we use PMLB regression datasets satisfying the following criteria:

1. $n < 200$ and $p \geq 10$, or
2. $n \geq 200$ and $p \geq 20$

Furthermore, datasets that have categorical features (number of unique values $\leq 5$) or a non-continuous response variable (proportion of unique values $< 0.9$) are excluded, since they are incorrectly classified as regression tasks (Dick, 2022). Among the datasets meeting these criteria, we found that two datasets, 195-auto-price and 207_autoprice, are identical, and we only kept 195-auto-price in our analysis.
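As an illustrative reading of these selection rules, a minimal filter might look as follows (the helper names are ours, and the per-dataset statistics are assumed to be precomputed; this is not part of the repository):

```python
def is_high_dimensional(n, p):
    """Criteria 1-2 above: small n with p >= 10, or larger n with p >= 20."""
    return (n < 200 and p >= 10) or (n >= 200 and p >= 20)

def is_valid_regression(feature_unique_counts, response_unique_prop):
    """Exclude datasets with categorical features (<= 5 unique values)
    or a non-continuous response (< 90% unique values)."""
    no_categorical = all(u > 5 for u in feature_unique_counts)
    return no_categorical and response_unique_prop >= 0.9

def keep_dataset(n, p, feature_unique_counts, response_unique_prop):
    return (is_high_dimensional(n, p)
            and is_valid_regression(feature_unique_counts, response_unique_prop))

# e.g., a dataset with n = 100 samples and p = 15 continuous features is kept:
assert keep_dataset(100, 15, [80, 90, 100], 0.95)
```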
See Dick (2022) for a detailed analysis of the dataset duplication and incorrect problem classification issues of PMLB.

Feynman datasets The original Feynman database described in Udrescu and Tegmark (2020) consists of only the relevant features $X_{\mathcal{S}_0}$ and a large sample size of $n = 10^5$, and is available in the Feynman Symbolic Regression Database (https://space.mit.edu/home/tegmark/aifeynman.html). We extended the Feynman Symbolic Regression Database to include irrelevant features $X_{\mathrm{irr}}^{j} \in \mathbb{R}^{n \times s}$ for each relevant feature $x_j$, $j \in \mathcal{S}_0$. To take advantage of the SRBench platform, we standardized the Feynman equations to PMLB format and included metadata detailing the true model and the units of each variable. The extended Feynman datasets are generated using the Python script provided in feynman_dataset_code/generate_feynman_dataset.py. To avoid the need to generate different datasets for each sample size $n$ considered in the main paper, we set $s = 50$ and $n = 100{,}000$ for all Feynman equations with random state control; we refer to this as the full Feynman datasets. In the experiment, the full Feynman datasets are randomly split into $75\%/25\%$ train/test sets. If the train set contains more samples than the desired training sample size $n$, the train and test sets are further subsampled so that $X_{\mathrm{train}}$ has exactly $n$ samples and $X_{\mathrm{test}}$ has exactly $\lfloor n/3 \rfloor$ samples.

Users can also generate datasets using other data-generating functions $f_{0}$ by supplying a CSV file with the expression of $f_{0}(\cdot)$ and an additional CSV file describing the desired uniform distribution (i.e., the lower and upper bounds of the distribution) of each variable in $f_{0}(\cdot)$. See feynman_dataset_code/FeynmanEquations.csv and feynman_dataset_code/units.csv for more details.
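For concreteness, the generation scheme described in this section can be sketched in a few lines of NumPy. This is a simplified, illustrative stand-in for the generation script, not the script itself:

```python
import numpy as np

def generate_extended_dataset(f0, bounds, n=1000, s=50, snr=10.0, seed=0):
    """Draw relevant features x_j ~ Unif(a_j, b_j), add Gaussian noise
    tuned to the target SNR, and append s irrelevant Unif(a_j, b_j)
    copies per relevant feature, giving p = p0 * (1 + s) features."""
    rng = np.random.default_rng(seed)
    # Relevant features, iid over samples.
    X_rel = np.column_stack([rng.uniform(a, b, size=n) for a, b in bounds])
    f_vals = f0(X_rel)
    # Error variance sigma_eps^2 = Var(f0) / SNR; zero when SNR is infinite.
    sigma2 = np.var(f_vals, ddof=1) / snr if np.isfinite(snr) else 0.0
    y = f_vals + rng.normal(0.0, np.sqrt(sigma2), size=n)
    # s irrelevant copies per relevant feature, from the same Unif(a_j, b_j).
    X_irr = np.column_stack([rng.uniform(a, b, size=(n, s)) for a, b in bounds])
    return np.hstack([X_rel, X_irr]), y

# Toy "law" f0(x1, x2) = x1 * x2 with p0 = 2 relevant features:
X, y = generate_extended_dataset(lambda X: X[:, 0] * X[:, 1],
                                 bounds=[(1, 3), (1, 3)], n=200, s=5)
# X has p = 2 * (1 + 5) = 12 columns, of which only the first 2 are relevant.
```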
Sampling Process for Extended Feynman Datasets The sampling process for the extended Feynman datasets is described in the main text and is reproduced here for completeness of the data description in this section.

For each equation $f_0(\cdot)$ in the Feynman Lectures on Physics, we generated the relevant features $X_{\mathcal{S}_0}$ following Udrescu and Tegmark (2020):

$$
\left(x_{1,j}, \dots, x_{n,j}\right) \stackrel{\text{iid}}{\sim} \operatorname{Unif}\left(a_{j}, b_{j}\right), \quad \text{for } 1 \leq j \leq p_{0}, \tag{6}
$$

where $p_0 = |\mathcal{S}_0|$ is the number of relevant features, $n$ is the sample size, and $a_j$ and $b_j$ are the lower and upper bounds for feature $x_j$ described in https://space.mit.edu/home/tegmark/aifeynman/FeynmanEquations.csv. Then, the response variable is generated as follows:

$$
y_{i} = f_{0}\left(x_{i,1}, \dots, x_{i,p_{0}}\right) + \varepsilon_{i}, \quad \text{for } 1 \leq i \leq n, \tag{7}
$$

where $\varepsilon_{i} \stackrel{\text{iid}}{\sim} N(0, \sigma_{\varepsilon}^{2})$ is an additive Gaussian error, $\sigma_f^2$ denotes the sample variance of $f_0(\cdot)$, and $\sigma_{\varepsilon}^{2} = \sigma_{f}^{2} / \mathrm{SNR}$ is the error variance tuned to a prescribed signal-to-noise ratio (SNR). When $\sigma_{\varepsilon}^{2} = 0$ (i.e., $\mathrm{SNR} = \infty$), (6) and (7) generate the original Feynman Symbolic Regression Database.

For each relevant feature $\boldsymbol{x}_j$, $j = 1, \ldots, p_0$, we generate $s = 50$ copies of irrelevant features following the distribution of $\boldsymbol{x}_j$: $(\boldsymbol{x}_{j,\mathrm{irr}}^1, \ldots, \boldsymbol{x}_{j,\mathrm{irr}}^s) \stackrel{\text{iid}}{\sim} \mathrm{Unif}(a_j, b_j)$.
Then, the final feature matrix is $\boldsymbol{X} = [X_{\mathcal{S}_0}, X_{\mathrm{irr}}^1, \ldots, X_{\mathrm{irr}}^{p_0}] \in \mathbb{R}^{n \times p}$, where $\boldsymbol{X}_{\mathrm{irr}}^j = (\boldsymbol{x}_{j,\mathrm{irr}}^1, \ldots, \boldsymbol{x}_{j,\mathrm{irr}}^s) \in \mathbb{R}^{n \times s}$ is the irrelevant feature matrix induced by the $j$th relevant feature for $j = 1, \ldots, p_0$, totaling $p = p_0(1 + s)$ features.

In Appendix D.4, we consider sampling processes where features are not iid sampled from a uniform distribution.

# C. Additional Experiment Details

# C.1. General Experiment Settings

Experiments were run in a heterogeneous cluster composed of nodes with Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.60GHz, Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz, Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz, Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz, and AMD EPYC 7642 CPU @ 2.3GHz processors. The training of a single method on a single dataset for a fixed random seed was considered a job. Each job was managed by the SLURM Workload Manager and received one CPU core, 16GB of RAM, and a time limit of 24 hours. For the ground-truth problems, each final model was given an additional 5 minutes for each of the following steps: 1) cleaning the model for SymPy parsing, 2) simplifying the cleaned model using SymPy, 3) checking the difference solution criterion of the simplified model, 4) checking the ratio solution criterion of the simplified model, and 5) calculating model size (complexity). When the simplification of the cleaned model exceeded the 5-minute wall clock, steps 3-5 were run on the cleaned model instead.

# C.2. Implementation Details of the Proposed Variable Selection Method

The proposed method uses the bartMachine R package for its BART implementation.
For each dataset, we fit $K = 20$ independent BART models and record the ranking $r_{j,k}$ of variable $x_{j}$'s variable inclusion proportion (VIP) in the $k$th run; the hyperparameters for bartMachine are summarized in Table 2. To cluster the VIP rankings into 2 clusters, we use the hclust function in R to perform agglomerative clustering (unweighted pair group method with arithmetic mean) on the Euclidean dissimilarity matrix of the VIP rankings. Then, $x_{j}$ is selected if $\bar{r}_{j} = \sum_{k=1}^{K} r_{j,k} / K$ belongs to the low-mean cluster.

Table 2: Hyperparameters in bartMachine.
| Parameter | Value |
| --- | --- |
| # of trees | 20 |
| # of burn-in samples | 10,000 |
| # of posterior samples | 10,000 |
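The selection rule of Appendix C.2 can be sketched as follows. This Python/SciPy version is an illustrative stand-in for the R hclust implementation (SciPy's "average" linkage corresponds to hclust's UPGMA):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def vip_rank_select(rankings):
    """VIP Rank selection sketch: cluster the per-feature VIP ranking
    vectors into two groups and keep the low-mean group.
    rankings[j, k] = rank of feature j's VIP in the k-th BART run."""
    # UPGMA (average linkage) on Euclidean distances, cut into 2 clusters.
    Z = linkage(rankings, method="average", metric="euclidean")
    labels = fcluster(Z, t=2, criterion="maxclust")
    r_bar = rankings.mean(axis=1)  # average rank of each feature
    # The low-mean cluster is the one with the smaller mean average-rank.
    low = min(np.unique(labels), key=lambda c: r_bar[labels == c].mean())
    return np.flatnonzero(labels == low)

# Toy example: 2 relevant features consistently ranked near the top of
# the VIP list, 8 irrelevant features ranked far lower across K = 5 runs.
rng = np.random.default_rng(0)
rankings = np.vstack([rng.uniform(1, 2, size=(2, 5)),
                      rng.uniform(8, 10, size=(8, 5))])
print(vip_rank_select(rankings))  # prints [0 1]: the two relevant features
```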
# D. Additional Results

# D.1. Visualization of Average VIP Rankings $\bar{r}_j$

Figure 5 shows the average BART VIP rankings for Feynman equation I-38-12 with $n = 1000$. At high SNR, there is a clear separation between the low- and high-mean clusters, and the hypothesized cluster means closely match their actual values. As SNR decreases, irrelevant features tend to receive higher rankings, slightly shifting the cluster means and incurring more false positives (FPs). Despite this deviation, the cluster means remain far apart, ensuring separation between relevant and irrelevant features.

Figure 6 further demonstrates the clustering accuracy of the proposed method. Regardless of the SNR level, all true features are consistently assigned to the low-mean cluster, which is highly desirable in the PAN+SR framework. While decreasing SNR leads to some misclassification of the irrelevant features, the proposed method ensures that no true features are excluded. This robustness in retaining the true features under varying noise levels makes the proposed method well-suited for the PAN+SR framework and high-dimensional SR tasks.

![](images/54edec18f20ff776fd0676af93c3f2c23984fd6b64a4327cc20b718b117ee677.jpg)

![](images/37748ff6cdbf6e2c19004153210c05fa05a2187596e08dc0fa283f91b87fcf0d.jpg)

![](images/73b5020404a30e574d3a459ccfb8ff029fb6497e2e9984924090ff37ff35ccf9.jpg)

![](images/9eb0c022f80a127b0f2e6f74ab2a926039f106c88a078509c451084e9d4db96f.jpg)

![](images/31ed5f47d23fc29b41578a39fb6b0e3e7cb497fa0a331514f49cb60a156fb49a.jpg)

![](images/d8fc9c10544b35e0f9bc0fd6b4b37f43de33ef3ff03a9d264a4fdf69cf293102.jpg)

![](images/925836ecd4416233cd7a91bc28a7e04720bddb75e49eea89a434b7f39094821c.jpg)

![](images/05bcf283bb539db945aedd07a28be4ade843deb8860ca9b7f2c55355a5e16246.jpg)
Figure 5: Average BART VIP rankings $\bar{r}_j$
over $K = 20$ runs on Feynman equation I-38-12 with $n = 1000$, $p_0 = 4$, and $p = 204$. Black vertical dashed lines indicate the cluster means. Red solid vertical lines are the hypothesized cluster means: $(1 + p_0)/2 = 2.5$ and $(p_0 + 1 + p)/2 = 104.5$.

![](images/ce61967e9409f4381c5347a356257f89c58ede7184a26a96219a9126b4d13d90.jpg)

![](images/b91634df9123b027043e0f1a747073b575e1548c188e25609f45b3fc97d87ae4.jpg)

![](images/5b0a181f9301a8ae5781b41df5f01f838b85f32b412e56b65fd91fcb971e2a21.jpg)

![](images/312156070a1b8db1e69745f8cf733b3f85f349fd6383f8fab08cb7cfd43af3ad.jpg)

![](images/506ae968ee43411c5821a3c337527868cba709203dd57a56917238a03acce3e1.jpg)

![](images/c4d285631cd95030b307155cf2da6058bec78a44967a085f408baf7b58da423e.jpg)

![](images/014818a2920da1a65aef5cc0ef6539e0ec89132593de90f233f302b1040a6053.jpg)

![](images/2a58276a36f9c52c964e614be0364dd81f20b8b98f271c07f83b90008209a0f2.jpg)

![](images/916d7fd379cc4cc9778ac8d0acbad29086386939c0e00614e87d54cf2e2a504d.jpg)
Figure 6: Hierarchical clustering accuracy on Feynman equation I-38-12 with $n = 1000$, $p_0 = 4$, and $p = 204$. Red and teal represent the low- and high-mean clusters, respectively. Circles and triangles represent relevant and irrelevant features, respectively.

# D.2. Analysis of Different Nonparametric Variable Selection Methods

![](images/7e1040abdfac01c00fe48bf8f9746205e4c3fb64331ec4b4af7f06637d61657c.jpg)
Figure 7: True positive rate (TPR) on the Feynman datasets for $n = 500, 1000, 1500, 2000$ and $\mathrm{SNR} = \infty, 20, 15, 10, 5, 2, 1, 0.5$. Points indicate the mean performance, and bars show the $95\%$ confidence interval.
VIP Rank is the proposed method for PAN pre-screening. Local, G.SE, G.MAX, and RF are alternative nonparametric variable selection methods.

PAN pre-screening presents a unique challenge to nonparametric variable selection methods: any missed signal (false negative) eliminates the correct expression $f_0(\cdot)$ from the search space. That is, a true positive rate (TPR) near $100\%$ in the pre-screening phase is necessary to ensure successful SR tasks. Figure 7 compares the average TPR of five nonparametric variable selection methods across various configurations of $n$ and SNR on the Feynman datasets. VIP Rank, the proposed method, is compared with three BART permutation test-based methods (Local, G.SE, and G.MAX) (Bleich et al., 2014) and the Random Forest (RF) variable selection method in PySR (Cranmer, 2023). Of the three BART permutation test-based methods, BART-Local applies the least stringent selection criteria, while BART-G.MAX is the most stringent, with BART-G.SE offering a balance between the two. The RF implementation requires users to specify the number of selected variables $k$, which we tuned over $\{1, 2, \dots, 20\}$ using 5-fold cross-validation.

VIP Rank consistently achieves the highest TPR, nearing or reaching $100\%$ across all experimental settings. In noiseless conditions ($\mathrm{SNR} = \infty$), only VIP Rank attains a perfect TPR of $100\%$. Although there is a slight TPR decline for VIP Rank at $n = 500$ and $\mathrm{SNR} \leq 5$, it still outperforms the other methods, particularly at $n = 500$ and $\mathrm{SNR} = 0.5$. These results reinforce the need for a specialized variable selection method for PAN pre-screening. In addition to the four methods considered here, we point readers to Ye et al. (2024), which analyzed three additional nonparametric variable selection methods and showed that none outperform BART-G.SE in terms of TPR.
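For reference, the TPR and FPR are computed from the selected and relevant index sets in the usual way (a minimal sketch; the function name is ours):

```python
def tpr_fpr(selected, relevant, p):
    """True/false positive rates of a variable-selection result, where
    `selected` and `relevant` are collections of feature indices and
    `p` is the total number of features."""
    selected, relevant = set(selected), set(relevant)
    tp = len(selected & relevant)  # relevant features that were kept
    fp = len(selected - relevant)  # irrelevant features that were kept
    tpr = tp / len(relevant)
    fpr = fp / (p - len(relevant))
    return tpr, fpr

# Keeping all 4 relevant features plus 20 spurious ones out of p = 204
# still gives TPR = 1.0 (no signal missed) at the cost of FPR = 0.1.
tpr, fpr = tpr_fpr(selected=range(24), relevant=range(4), p=204)
```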
Figure 8 illustrates the false positive rate (FPR), a crucial metric for evaluating variable selection accuracy. As discussed in the main paper, VIP Rank produces higher FPR under low SNR conditions—a tradeoff made to maintain a near-perfect true positive rate (TPR). While this tradeoff may be undesirable for typical variable selection tasks, it is acceptable for PAN pre-screening, where minimizing false negatives (FNs) is the priority. The three BART permutation-based methods and RF consistently maintain low and robust FPRs across all settings of $n$ and SNR. However, as Figure 7 shows, this strict control of FPR comes at the cost of worse TPR performance.

![](images/739d7234aeef70e3a34eac81ae318002b17a5c31b6d02c96423fa5ae01ed186c.jpg)
Figure 8: False positive rate (FPR) on the Feynman datasets for $n = 500, 1000, 1500, 2000$ and $\mathrm{SNR} = \infty, 20, 15, 10, 5, 2, 1, 0.5$. Points indicate the mean performance, and bars show the $95\%$ confidence interval. VIP Rank is the proposed method for PAN pre-screening. Local, G.SE, G.MAX, and RF are alternative nonparametric variable selection methods.

To further evaluate the impact of variable selection methods in the PAN+SR framework, we replaced VIP Rank with BART-G.SE and compared their performance using Operon as the SR method. Operon was chosen for this analysis due to its strong $R^2$ performance in both the black-box and ground-truth experiments. Table 3 summarizes the average test set $R^2$ on the Feynman dataset. VIP+SR consistently achieves the highest $R^2$ across all experimental settings. For instance, at $n = 500$ and $\mathrm{SNR} = 20$, VIP+SR achieves an average $R^2$ of 0.892, compared to 0.860 for GSE+SR and 0.846 for standalone SR. Under high noise conditions, VIP+SR continues to demonstrate better robustness than GSE+SR. At $n = 500$ and $\mathrm{SNR} = 0.5$, VIP+SR scores 0.145, slightly outperforming GSE+SR (0.142) and standalone SR (0.142).
This trend is consistent across different sample sizes $n$.

# D.3. Effect of Different Clustering Algorithms

The proposed VIP Rank variable selection method can be implemented using various off-the-shelf clustering algorithms. However, due to the class-imbalanced nature of the variable selection problem, not all clustering algorithms are suitable. In this ablation study, we examine the effect of the clustering algorithm on the TPR and FPR performance of VIP Rank. We selected 10 clustering algorithms available in scikit-learn v1.5.7: agglomerative hierarchical clustering (AHC), k-means++, Gaussian mixture model (GMM), Birch, Mean Shift, Affinity Propagation, Spectral, OPTICS, HDBSCAN, and DBSCAN.

As illustrated in Figure 9, the first 5 clustering algorithms (AHC, k-means++, GMM, Birch, Mean Shift) achieve the highest TPR across all simulation settings, with indistinguishable differences. Affinity Propagation attains similar TPR to the top 5 algorithms but lags behind in noisy (e.g., $\mathrm{SNR} = 0.5$) and small-$n$ (e.g., $n = 500$) settings. The remaining algorithms have significantly worse TPR and are thus not suitable for VIP Rank.

Since the top 5 algorithms have indistinguishable TPR, we select the one with the lowest FPR. As shown in Figure 10, AHC has significantly lower FPR than the rest of the top 5 algorithms across most simulation settings. Combined with its near-$100\%$ TPR, AHC is capable of identifying a more compact feature set that has a high probability of containing all relevant features.

![](images/2d5b450200e5d747270a3086ec5230f5e46dc648ba1c4289a06634ea84084281.jpg)
Figure 9: True positive rate of various ablations of the clustering algorithm.

![](images/7689154e7beaf3027f731738b5d88364ea147fa0c9bc35e94a295133f6847f52.jpg)
Figure 10: False positive rate of various ablations of the clustering algorithm.

Table 3: Average test set $R^2$. The highest value in each experimental setting is in bold.
| | noiseless | 20 | 15 | 10 | 5 | 2 | 1 | 0.5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **n = 500** | | | | | | | | |
| VIP+SR | **0.974** | **0.892** | **0.870** | **0.837** | **0.730** | **0.525** | **0.335** | **0.145** |
| GSE+SR | 0.948 | 0.860 | 0.859 | 0.818 | 0.710 | 0.510 | 0.327 | 0.142 |
| SR | 0.915 | 0.846 | 0.840 | 0.792 | 0.702 | 0.506 | 0.322 | 0.142 |
| **n = 1000** | | | | | | | | |
| VIP+SR | **0.984** | **0.919** | **0.901** | **0.867** | **0.774** | **0.586** | **0.406** | **0.229** |
| GSE+SR | 0.971 | 0.914 | 0.897 | 0.851 | **0.774** | 0.574 | 0.405 | **0.229** |
| SR | 0.942 | 0.883 | 0.867 | 0.825 | 0.747 | 0.580 | 0.393 | 0.227 |
| **n = 1500** | | | | | | | | |
| VIP+SR | **0.990** | **0.928** | **0.909** | **0.874** | **0.792** | **0.612** | **0.433** | **0.260** |
| GSE+SR | 0.961 | 0.910 | 0.899 | 0.866 | 0.781 | 0.600 | 0.428 | 0.257 |
| SR | 0.956 | 0.895 | 0.878 | 0.856 | 0.761 | 0.592 | 0.426 | 0.255 |
| **n = 2000** | | | | | | | | |
| VIP+SR | **0.990** | **0.935** | **0.914** | **0.887** | **0.805** | **0.619** | **0.448** | **0.277** |
| GSE+SR | 0.963 | 0.918 | 0.905 | 0.872 | 0.787 | 0.617 | 0.445 | 0.272 |
| SR | 0.960 | 0.907 | 0.892 | 0.855 | 0.781 | 0.611 | 0.437 | 0.272 |
# D.4. Effect of Noisy, Duplicated, and Correlated Predictors

In addition to the extensive simulation settings described in Section 5.2, we further evaluate VIP Rank under alternative predictor structures that challenge common modeling assumptions:

- Baseline: $x_{1}, \ldots, x_{p} \stackrel{\text{iid}}{\sim} \mathrm{Unif}(0,1)$
- Noisy $X$: Independent Gaussian noise is added to each predictor, with variance equal to 1/5 of the signal variance
- Duplicated $X$: A redundant feature is added: $x_{6} = x_{1} + x_{2}$, where $x_{1}$ and $x_{2}$ are relevant predictors
- Correlated $X$: $x_1, \ldots, x_p \sim \mathrm{Unif}(0,1)$ with an autocorrelation structure: $\rho_{ij} = 0.9^{|i - j|}$

The response variable $y$ is generated according to the Friedman (1991) equation:

$$
y = 10\sin(\pi x_{1} x_{2}) + 20(x_{3} - 0.5)^{2} + 10 x_{4} + 5 x_{5} + \varepsilon, \quad \varepsilon \sim N(0, \sigma^{2}).
$$

We fix $n = 1000$, $p = 100$, $\mathrm{SNR} = 10$, and repeat each scenario for 100 trials. Table 4 reports the average TPR and FPR. VIP Rank consistently identifies all relevant features across all scenarios, demonstrating strong robustness to noise, redundancy, and correlation among predictors.

Table 4: Average performance in each scenario across 100 trials.
| Scenario | TPR | FPR |
| --- | --- | --- |
| Baseline | 100% | 10.58% |
| Noisy X | 100% | 26.42% |
| Duplicated X | 100% | 11.11% |
| Correlated X | 100% | 15.98% |
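The four predictor scenarios and the Friedman response can be sketched as follows. The Gaussian-copula construction for the correlated case is our assumption (it preserves Unif(0, 1) marginals while inducing approximately the stated autocorrelation), since the exact sampling mechanism is not spelled out; the noise scale here is a fixed argument rather than SNR-tuned:

```python
import numpy as np
from scipy.stats import norm

def friedman_response(X, sigma, rng):
    """Friedman (1991) test function with additive N(0, sigma^2) noise."""
    f = (10 * np.sin(np.pi * X[:, 0] * X[:, 1])
         + 20 * (X[:, 2] - 0.5) ** 2 + 10 * X[:, 3] + 5 * X[:, 4])
    return f + rng.normal(0.0, sigma, size=len(f))

def make_predictors(scenario, n=1000, p=100, seed=0):
    """Sketch of the four predictor structures evaluated in Table 4."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n, p))  # baseline: iid Unif(0, 1)
    if scenario == "noisy":
        # Add noise with variance 1/5 of each column's signal variance.
        X = X + rng.normal(0.0, np.sqrt(X.var(axis=0) / 5), size=(n, p))
    elif scenario == "duplicated":
        X[:, 5] = X[:, 0] + X[:, 1]  # redundant feature x6 = x1 + x2
    elif scenario == "correlated":
        cov = 0.9 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
        Z = rng.multivariate_normal(np.zeros(p), cov, size=n)
        X = norm.cdf(Z)  # map back to Unif(0, 1) marginals
    return X, rng

X, rng = make_predictors("correlated", n=200, p=20)
y = friedman_response(X, sigma=0.5, rng=rng)
```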
# D.5. Additional Performance Metrics for Operon vs PAN+Operon

Figures 11, 12, and 13 show additional metrics not discussed in the main paper. Although PAN+Operon's solution rate plummeted from $\sim 27\%$ at $\mathrm{SNR} = \infty$ to $0\%$ at $\mathrm{SNR} = 20$ across all $n$, Figure 11 shows that there is still improvement in $R^2$ on the test set across all $n$ and SNR, while model interpretability also improves, as evidenced by the uniformly lower model size in Figure 12.

![](images/0a5e007cc1feb917d9af1bfa2a0f499f29faddd73faee275af6cc6ee06b1ceed.jpg)
Figure 11: $R^2$ on the test set with Operon as the SR module. Points indicate the average $R^2$ on the test set and bars represent the $95\%$ confidence intervals.

![](images/263e25a4b686fb3e318c1b89d00efa7aebb8a2654b461053a8d992ef483c4363.jpg)
Figure 12: Model size with Operon as the SR module. Points indicate the average model size and bars represent the $95\%$ confidence intervals.

![](images/e48bd3a88070c47b155325f49612b70aa7d04efd44f64cefc30afd6e4bce68e0.jpg)
Figure 13: Solution rate with Operon as the SR module. Points indicate the average solution rate and bars represent the $95\%$ confidence intervals.
\ No newline at end of file diff --git a/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/images.zip b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..f1ac94283d1a5292de668b9bf46a2029ffa734c1 --- /dev/null +++ b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:00b10f0b9a45765371a80a0f45f2cbc22c827d65d17d61125641269bee3ae9fa +size 993963 diff --git a/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/layout.json b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d40b61ba650758a301a623128e36bf91e5229624 --- /dev/null +++ b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:51ede053ce49afcdf6118c21d249d780a8f1ae06f39522a1b379b5471ae0c2cf +size 800765 diff --git a/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/89ad1898-ec6d-430c-a0d7-cdc1fb5659fa_content_list.json b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/89ad1898-ec6d-430c-a0d7-cdc1fb5659fa_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1ea2c467e98b2e41702e68e5baec9ddeb5673d4e --- /dev/null +++ b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/89ad1898-ec6d-430c-a0d7-cdc1fb5659fa_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:33b207d7474c36e2244edb7e149418213d49043db35e2658a7eb824430b64c1a +size 350143 diff --git 
a/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/89ad1898-ec6d-430c-a0d7-cdc1fb5659fa_model.json b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/89ad1898-ec6d-430c-a0d7-cdc1fb5659fa_model.json new file mode 100644 index 0000000000000000000000000000000000000000..9806b3ff21151dfa7d7d33abe89e63db89253e76 --- /dev/null +++ b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/89ad1898-ec6d-430c-a0d7-cdc1fb5659fa_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c593f38714f8bc47b2771b1d1ab9a7bfa2849cd69d9e8915649232c13817f72b +size 424430 diff --git a/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/89ad1898-ec6d-430c-a0d7-cdc1fb5659fa_origin.pdf b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/89ad1898-ec6d-430c-a0d7-cdc1fb5659fa_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4b1c4d9ac0c8ab61f633b913c446b7143f6fc82f --- /dev/null +++ b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/89ad1898-ec6d-430c-a0d7-cdc1fb5659fa_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83225f30b01acae912510db48b518ab047b09e6d3b7325aefbbf76a8031f7273 +size 4525036 diff --git a/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/full.md b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/full.md new file mode 100644 index 0000000000000000000000000000000000000000..505042c5fa8b25c9c1eb7ec31f3bae4722faee3c --- /dev/null +++ b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/full.md @@ -0,0 +1,1897 @@ +# ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via $\alpha$ - $\beta$ -Divergence + +Guanghui Wang1 Zhiyong Yang1 Zitai Wang2 
Shi Wang2 Qianqian Xu2 Qingming Huang1

# Abstract

Knowledge Distillation (KD) transfers knowledge from a large teacher model to a smaller student model by minimizing the divergence between their output distributions, typically using forward Kullback-Leibler divergence (FKLD) or reverse KLD (RKLD). It has become an effective training paradigm due to the broader supervision information provided by the teacher distribution compared to one-hot labels. We identify that the core challenge in KD lies in balancing two mode-concentration effects: the Hardness-Concentration effect, which refers to focusing on modes with large errors, and the Confidence-Concentration effect, which refers to focusing on modes with high student confidence. Through an analysis of how probabilities are reassigned during gradient updates, we observe that these two effects are entangled in FKLD and RKLD, but in extreme forms. Specifically, both are too weak in FKLD, causing the student to fail to concentrate on the target class. In contrast, both are too strong in RKLD, causing the student to overly emphasize the target class while ignoring the broader distributional information from the teacher. To address this imbalance, we propose ABKD, a generic framework with $\alpha$-$\beta$-divergence. Our theoretical results show that ABKD offers a smooth interpolation between FKLD and RKLD, achieving an effective trade-off between these effects. Extensive experiments on 17 language/vision datasets with 12 teacher-student settings confirm

$^{1}$ School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, China $^{2}$ Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China $^{3}$ Key Laboratory of Big Data Mining and Knowledge Management (BDKM), University of Chinese Academy of Sciences, Beijing, China. Correspondence to: Zhiyong Yang, Qingming Huang.
its efficacy. The code is available at https://github.com/ghwang-s/abkd.

Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).

# 1. Introduction

Knowledge Distillation (KD) (Hinton, 2015) is a widely adopted technique for transferring knowledge from large models (teachers) to smaller models (students). In this setup, the student model, with a predictive distribution $q_{\theta}$, learns to mimic the predictive distribution $p$ of the teacher model. This imitation is typically achieved by minimizing a predefined divergence $\mathbb{D}$ between the teacher distribution $p$ and the student distribution $q_{\theta}$: $\ell_{\mathrm{KD}} \triangleq \mathbb{D}(p \| q_{\theta})$. In this way, KD allows the student to leverage richer soft-label information from $p$ than one-hot labels provide, often leading to better performance than traditional supervised fine-tuning. This has been shown in tasks like image classification (Dosovitskiy, 2020; Radford et al., 2021; Yang et al., 2023b; Wang et al., 2022b) and text generation (Vaswani, 2017; Touvron et al., 2023a).

A key step in KD is to choose a proper divergence $\mathbb{D}$ for distribution matching. One popular choice in previous works (Cho & Hariharan, 2019; Mirzadeh et al., 2020; Zhou et al., 2021; Zhao et al., 2022; Jin et al., 2023; Sun et al., 2024; Zheng & Yang, 2024) is the forward Kullback-Leibler divergence (FKLD). However, FKLD's asymmetry often results in a student distribution $q_{\theta}$ that is overly smooth, spreading across the entire support of $p$. To address this, recent studies (Lee et al., 2023; Gu et al., 2024a; Kim et al., 2024; Gu et al., 2024b) have explored the reverse KLD (RKLD), which allows $q_{\theta}$ to focus on a few prominent modes of $p$.
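The contrast between FKLD's mass-covering and RKLD's mode-seeking behavior can be made concrete with a small numerical sketch (the distributions below are invented for illustration and do not come from the paper): FKLD heavily penalizes a student that assigns near-zero mass where the teacher has mass, while RKLD penalizes a student that spreads mass onto classes the teacher deems unlikely.

```python
import math

def fkld(p, q):
    # Forward KL: sum_k p_k * log(p_k / q_k); blows up when q_k ~ 0 but p_k > 0.
    return sum(pk * math.log(pk / qk) for pk, qk in zip(p, q))

def rkld(p, q):
    # Reverse KL: sum_k q_k * log(q_k / p_k); blows up when q_k > 0 but p_k ~ 0.
    return sum(qk * math.log(qk / pk) for pk, qk in zip(p, q))

p       = [0.49, 0.49, 0.02]   # two-mode teacher distribution (toy numbers)
q_cover = [0.34, 0.33, 0.33]   # spread-out student covering the whole support
q_seek  = [0.98, 0.01, 0.01]   # student concentrated on a single mode

print("FKLD:", fkld(p, q_cover), "vs", fkld(p, q_seek))  # spread-out student preferred
print("RKLD:", rkld(p, q_cover), "vs", rkld(p, q_seek))  # single-mode student preferred
```

Under FKLD the spread-out student scores lower, and under RKLD the single-mode student scores lower, mirroring the over-smoothing and mode-focusing tendencies just described.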
Despite this effectiveness, empirical results (Wen et al., 2023; Wu et al., 2024; Ko et al., 2024) suggest that RKLD often yields suboptimal performance across a range of tasks. Worse still, there is no systematic approach to identifying the essential issues hidden behind these failures, which hinders the development of a more generic and effective KD framework. To escape this dilemma, we first pose the following question:

What underlying factors contribute to the suboptimal performance of FKLD and RKLD?

To answer this, we analyze how different divergence functions affect the allocation of probability mass in the student distribution during training by tracking the log mass ratio LogR.

![](images/6bdebde9811c3dfc6b4f3fa5be6cf279ed67eb39cd39e3f6fe40853c99e6174c.jpg)
(a)

![](images/389078ddf88d049d74fa515aa686d74a1231c488bacca8149633e58e749240e5.jpg)
(b)
(c)

![](images/d618b78b8d332109dfa820730d8d45e1a31c432cd25019a0fad89df94327634a.jpg)
(d)

![](images/0e78847e275140d865b351047ffca9ce28ecc6a5e3f1458036d7ff3f8472ae61.jpg)

![](images/d8d740d1a87e45c7d6ad6a5ed2cc546ca8f0d843909bbfc9ee77a44006aebdd7.jpg)
(f)

Figure 1. (a) Illustration of the unified search space for our proposed ABKD, where height (color) represents performance $(\uparrow)$. FKLD and RKLD are special cases of ABKD, obtained by selecting $(\alpha = 1, \beta = 0)$ and $(\alpha = 0, \beta = 1)$, respectively. The $\alpha$-divergence can only search along the submanifold $\alpha + \beta = 1$ in the ABKD space. (b)-(c) illustrate how adjusting $\alpha$ and $\beta$ affects hardness-concentration and confidence-concentration. (d)-(g) illustrate how different divergences learn a student distribution from the given teacher distribution. The $\alpha$-$\beta$-divergence, compared to others, can more effectively learn soft label information while maintaining focus on the target class.

![](images/b632ad1fefad33a8480c98e98f34aca3905e8d65058307596564f550f1dbc315.jpg)
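Since the analysis that follows turns on loss gradients with respect to the student's logits, it helps to recall the standard identity $\partial\, \mathrm{KL}(p \,\|\, \mathrm{softmax}(z)) / \partial z_k = q_k - p_k$, where $q = \mathrm{softmax}(z)$. This is a textbook result (not the paper's LogR derivation); the sketch below verifies it by finite differences on made-up numbers.

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def fkld_loss(p, z):
    # Forward KL between a fixed target p and the softmax of logits z.
    q = softmax(z)
    return sum(pk * math.log(pk / qk) for pk, qk in zip(p, q))

p = [0.7, 0.2, 0.1]    # hypothetical teacher distribution
z = [0.5, -0.3, 0.1]   # hypothetical student logits
q = softmax(z)

# Analytic gradient of the FKLD loss w.r.t. logit k is simply q_k - p_k.
analytic = [qk - pk for pk, qk in zip(p, q)]

# Central finite-difference check of the same gradient.
eps = 1e-6
numeric = []
for k in range(len(z)):
    z_hi = list(z); z_hi[k] += eps
    z_lo = list(z); z_lo[k] -= eps
    numeric.append((fkld_loss(p, z_hi) - fkld_loss(p, z_lo)) / (2 * eps))

print(analytic)
print(numeric)
```

Note that under FKLD every logit is updated by exactly its per-class mismatch $q_k - p_k$, with no extra weighting of any class; this is the sense in which FKLD treats mismatches uniformly.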
Notably, LogR is proportional to the gradient of the loss function w.r.t. the logits. This insight allows us to frame the problem as understanding how each divergence influences the reduction of LogR. Through this lens, we identify two key mode-concentration effects: Hardness-Concentration and Confidence-Concentration. Hardness-Concentration refers to focusing on modes of the loss where there is a large error between $p$ and $q_{\theta}$, while Confidence-Concentration refers to focusing on modes of the loss where $q_{\theta}$ has high confidence.

On top of this, we find that the limitations of FKLD and RKLD stem from the extreme ways they exploit these concentration effects: a) FKLD exhibits weak concentration effects, treating mismatches on all classes equally, which fails to guide the student to concentrate on the target class and causes incorrect predictions (Fig. 1d). b) RKLD exhibits strong concentration effects, focusing both on hard classes with large errors and on classes where the student has high confidence. This often leads to a trivial solution, where the well-trained student focuses exclusively on the target class and ignores the broader knowledge in $p$ (Fig. 1e). With these limitations revealed, we continue to seek an answer to the following question:

Can we find a generic, theoretically grounded method to balance hardness-concentration and confidence-concentration?

In pursuit of this, we introduce the $\alpha$-$\beta$-divergence, a general family of divergences that unifies FKLD and RKLD while also extending to previously unexplored divergences such as the Hellinger distance and the $\beta$-divergence. Our theoretical results demonstrate that the $\alpha$-$\beta$-divergence provides a flexible mechanism to smoothly interpolate between the extremes of FKLD and RKLD by controlling the trade-off between hardness-concentration (Fig. 1b) and confidence-concentration (Fig. 1c) via the hyperparameters $\alpha$ and $\beta$.
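For reference, one standard form of the $\alpha$-$\beta$-divergence is the Cichocki-Cruces-Amari AB-divergence (the paper's exact parameterization may differ; this sketch only checks the limiting behavior): for $\alpha, \beta, \alpha + \beta \neq 0$,

$$
\mathbb{D}_{AB}^{(\alpha,\beta)}(p \| q) = -\frac{1}{\alpha\beta} \sum_{k} \Big( p_k^{\alpha} q_k^{\beta} - \frac{\alpha}{\alpha+\beta}\, p_k^{\alpha+\beta} - \frac{\beta}{\alpha+\beta}\, q_k^{\alpha+\beta} \Big),
$$

with FKLD and RKLD recovered as the limits $(\alpha,\beta) \to (1,0)$ and $(\alpha,\beta) \to (0,1)$. A toy numerical check:

```python
import math

def ab_div(p, q, a, b):
    # Cichocki-Cruces-Amari alpha-beta-divergence, valid for a, b, a + b != 0;
    # the limits a -> 0 or b -> 0 are taken by continuity (approximated numerically here).
    s = a + b
    total = 0.0
    for pk, qk in zip(p, q):
        total += pk**a * qk**b - (a / s) * pk**s - (b / s) * qk**s
    return -total / (a * b)

def kl(p, q):
    return sum(pk * math.log(pk / qk) for pk, qk in zip(p, q))

p = [0.6, 0.3, 0.1]    # toy teacher distribution
q = [0.5, 0.25, 0.25]  # toy student distribution

print(ab_div(p, q, 1.0, 1e-6), kl(p, q))  # (alpha, beta) -> (1, 0): forward KL
print(ab_div(p, q, 1e-6, 1.0), kl(q, p))  # (alpha, beta) -> (0, 1): reverse KL
```

In the same family, $\alpha = \beta = \tfrac{1}{2}$ yields a scaled squared Hellinger distance, consistent with the Hellinger case mentioned above.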
This mechanism ensures a more proper allocation of probability mass (Fig. 1g). Motivated by these insights, we propose ABKD, a generic distillation framework based on the $\alpha$-$\beta$-divergence. Empirical results across a variety of tasks, including instruction-following and image classification, demonstrate ABKD's generality and effectiveness. For instance, by modifying only the loss function, ABKD achieves performance improvements of 0.81 to 3.31 over FKLD and RKLD on five instruction-response datasets when distilling GPT-2 XL (1.5B) into GPT-2 (0.1B).

In summary, the contributions of this work are three-fold:

- Theoretically: We analyze the limitations of FKLD and RKLD from the novel perspectives of hardness-concentration and confidence-concentration, and show that the $\alpha$-$\beta$-divergence offers a flexible approach to balancing these effects.
- Methodologically: We propose ABKD, a flexible distillation framework that unifies FKLD and RKLD and generalizes to several other divergences, offering greater versatility and applicability.
- Empirically: Extensive experiments on 17 language and vision datasets with 12 teacher-student configurations (0.85M-0.46M to 7B-3B) validate the theoretical insights. ABKD outperforms or matches state-of-the-art methods without extra trainable parameters and allows further gains by rectifying their loss functions.

Prior Arts. We briefly discuss related work and defer a detailed account to App. A.

# 2. Preliminaries

KD involves using a fixed teacher model $f_{T}$ to improve the performance of a parameterized student model $f_{S}$. Given an input $x$, the teacher $f_{T}$ and student $f_{S}$ produce probability distributions $p$ and $q_{\theta}$, respectively.

The goal of KD can be achieved by letting $q_{\theta}$ mimic $p$ for all samples in dataset $\mathcal{D}$.
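As a toy illustration of this mimicry (made-up numbers, optimizing a single logit vector directly rather than model parameters $\theta$), a few hundred gradient steps on the forward KL drive $q_{\theta} = \mathrm{softmax}(z)$ onto a fixed teacher $p$, using the standard identity that the gradient of $\mathrm{KL}(p \,\|\, \mathrm{softmax}(z))$ with respect to $z$ is $q - p$:

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def fkld(p, q):
    return sum(pk * math.log(pk / qk) for pk, qk in zip(p, q))

p = [0.6, 0.3, 0.1]   # toy teacher distribution
z = [0.0, 0.0, 0.0]   # student logits, uniform start
lr = 0.5

losses = []
for _ in range(200):
    q = softmax(z)
    losses.append(fkld(p, q))
    # Gradient of KL(p || softmax(z)) w.r.t. z_k is q_k - p_k.
    z = [zk - lr * (qk - pk) for zk, qk, pk in zip(z, q, p)]

q_final = softmax(z)
print(losses[0], losses[-1])  # the divergence shrinks as q approaches p
```

In practice the same objective is minimized over the student's parameters via backpropagation; this example only isolates the distribution-matching dynamics.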
A direct way to do this is minimizing:

$$
\ell_{\mathrm{KD}} \triangleq \mathbb{D}(p \| q_{\theta}), \tag{1}
$$

where $\mathbb{D}$ is a distribution measure. Optionally, practitioners can substitute $p$ with the one-hot vector $\boldsymbol{y}$, where $\boldsymbol{y} \triangleq [0, \dots, 1, \dots, 0]$ has a 1 at the ground-truth label $y$ and 0 elsewhere. In this case, the loss is $\ell_{\mathrm{CE}} \triangleq \mathbb{D}(\boldsymbol{y} \| q_{\theta})$, where $\mathbb{D}$ is typically the FKLD. The final training loss is:

$$
\ell = \ell_{\mathrm{CE}} + \lambda \ell_{\mathrm{KD}}, \tag{2}
$$

where $\lambda$ is a hyperparameter. Since $p$ provides richer information (i.e., soft labels) than the one-hot vector $\boldsymbol{y}$, KD outperforms traditional supervised fine-tuning on many downstream tasks, such as instruction-following and image classification. The KD settings for these tasks are as follows.

Instruction-following. Let $\mathbf{x}$ and $\mathbf{y}$ represent the input and output sequences, respectively. A token-level autoregressive model produces a $C$-dimensional probability distribution for the $n$-th token over the vocabulary $\mathbb{V}$, conditioned on $\mathbf{x}$ and $\mathbf{y}_{