Improve dataset card: Add description, links, metadata, and sample usage for MLRC-Bench

#2 by nielsr (HF Staff)

Files changed (1): README.md (+100 −3)
---
license: cc-by-4.0
task_categories:
- text-generation
- code-generation
tags:
- agents
- machine-learning
- benchmark
- code-generation
language:
- en
---

# MLRC-Bench: Can Language Agents Solve Machine Learning Research Challenges?

MLRC-Bench is a benchmark designed to quantify how effectively language agents can tackle challenging Machine Learning (ML) research competitions. It focuses on open research problems that demand novel methodologies, evaluating the key steps of proposing and implementing new research methods under rigorous protocols with objective metrics.

The benchmark features a curated suite of 7 competition tasks adapted from recent machine learning conference competitions. Our findings reveal significant challenges for LLM agents: even the best-performing tested agent (gemini-exp-1206 under MLAB) closes only 9.3% of the gap between baseline and top human participant scores. MLRC-Bench is a dynamic benchmark, designed to grow with new ML competitions and to encourage rigorous, objective evaluation of AI research capabilities.

* **Paper**: [MLRC-Bench: Can Language Agents Solve Machine Learning Research Challenges?](https://huggingface.co/papers/2504.09702)
* **Code**: [https://github.com/yunx-z/MLRC-Bench](https://github.com/yunx-z/MLRC-Bench)
* **Project Page / Leaderboard**: [https://huggingface.co/spaces/launch/MLRC_Bench](https://huggingface.co/spaces/launch/MLRC_Bench)

## Sample Usage

To get started with MLRC-Bench, follow these steps to set up the environment and launch an agent.

### Set Up the MLRC-Bench Core Environment

First, clone the MLRC-Bench repository and navigate into it:

```bash
git clone https://github.com/yunx-z/MLRC-Bench.git
cd MLRC-Bench
```

Next, create and activate a conda environment named `mlab` for the benchmark's core dependencies, then install the `MLAgentBench` package:

```bash
conda create -n mlab python=3.10
conda activate mlab

# Navigate into the MLAgentBench subdirectory to install it in editable mode
cd MLAgentBench
pip install -e .
pip install openai
cd ..  # back to the MLRC-Bench root directory

# Install additional system-level dependencies for the benchmark
bash install.sh
```

### Set Up a Task-Specific Environment

Each competition task within MLRC-Bench has its own environment. You'll need to set up a dedicated conda environment for each task you wish to run. Replace `${TASK_NAME}` with the specific task name (e.g., `llm-merging`):

```bash
# Navigate to the task's script directory
cd MLAgentBench/benchmarks_base/${TASK_NAME}/scripts

# Create and activate a dedicated conda environment for this task
conda env create -f environment.yml --name ${TASK_NAME}
conda activate ${TASK_NAME}

# Install the core benchmark components into the task environment,
# so that MLRC-Bench and MLAgentBench are available within it.
cd ../../../..  # from 'scripts/' up to the 'MLRC-Bench/' root directory
pip install -e .  # installs MLRC-Bench (and MLAgentBench) into the active task environment
pip install openai  # ensure the OpenAI client is available in the task environment
```

*(Optional)* Some competition tasks may require Kaggle API authentication (`~/.kaggle/kaggle.json`). Refer to the [Kaggle API documentation](https://www.kaggle.com/docs/api) and provide manual consent to the competition rules if prompted.
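
As a minimal sketch of that setup (the token values below are placeholders, not real credentials), the Kaggle API token is typically installed like this:

```shell
# Download your API token from your Kaggle account settings page,
# then place it at ~/.kaggle/kaggle.json.
mkdir -p ~/.kaggle

# Write a placeholder token only if none exists yet; the values below
# will NOT authenticate -- replace them with your real token.
[ -f ~/.kaggle/kaggle.json ] || printf '{"username":"YOUR_USERNAME","key":"YOUR_KEY"}\n' > ~/.kaggle/kaggle.json

# The Kaggle client refuses tokens readable by other users.
chmod 600 ~/.kaggle/kaggle.json
```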

### Launching an Agent

To launch an MLAB agent on a specific task within its activated environment:

```bash
# Run from the MLRC-Bench root directory with the task's conda environment active.
bash launch.sh ${TASK_NAME} ${MODEL} ${GPU_ID}
```

For OpenAI models, you will need to set the `MY_OPENAI_API_KEY` and `MY_AZURE_OPENAI_ENDPOINT` environment variables. Supported models are listed in `MLAgentBench/LLM.py` within the repository.
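
For example, the credentials can be exported in the shell before launching (the values below are placeholders; the launch command is shown commented out because it needs the cloned repository and a configured task environment):

```shell
# Placeholder credentials -- substitute your own before launching.
export MY_OPENAI_API_KEY="sk-your-key-here"
export MY_AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"

# Then, from the MLRC-Bench root with the task environment active, e.g.:
# bash launch.sh llm-merging ${MODEL} 0
```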

## Tasks

The first release of MLRC-Bench includes 7 tasks adapted from recent machine learning conference competitions. Each task is a folder under `MLAgentBench/benchmarks_base/` in the repository. Within each task folder, the `env/` directory contains the files the research agent sees at the start of a run, while the `scripts/` folder contains additional hidden files, such as `prepare.py` for downloading data.
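
As an illustration (using `llm-merging`, one of the example task names, and the directory roles described above), a task folder looks roughly like:

```
MLAgentBench/benchmarks_base/llm-merging/
├── env/       # files visible to the agent at the start of a run
└── scripts/   # hidden files, e.g. prepare.py for downloading data
```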

## Citation

If you use MLRC-Bench in your research, please cite the following paper:

```bibtex
@article{zhang2025mlrcbench,
  title={MLRC-Bench: Can Language Agents Solve Machine Learning Research Challenges?},
  author={Zhang, Yunxiang and Khalifa, Muhammad and Bhushan, Shitanshu and Murphy, Grant D and Logeswaran, Lajanugen and Kim, Jaekyeom and Lee, Moontae and Lee, Honglak and Wang, Lu},
  journal={arXiv preprint arXiv:2504.09702},
  year={2025}
}
```