Improve dataset card: Add paper/code links, align license, and update tags

#4
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +82 -25
README.md CHANGED
@@ -1,22 +1,25 @@
  ---
- license: apache-2.0
  task_categories:
  - text-generation
  tags:
  - code
  ---

  # FrontierCO: Benchmark Dataset for Frontier Combinatorial Optimization

  ## Overview

  **FrontierCO** is a curated benchmark suite for evaluating ML-based solvers on large-scale and real-world **Combinatorial Optimization (CO)** problems. The benchmark spans **8 classical CO problems** across **5 application domains**, providing both training and evaluation instances specifically designed to test the frontier of ML and LLM capabilities in solving NP-hard problems.

- code for evaluating agent https://github.com/sunnweiwei/CO-Bench?tab=readme-ov-file#evaluation-on-frontierco
-
- code for running classifical solver, generate training data, evaluating neural solver: https://github.com/sunnweiwei/FrontierCO

- Evaluation Code: https://github.com/sunnweiwei/FrontierCO

  ---
 
@@ -43,10 +46,10 @@ FrontierCO/

  Each task folder contains:

- * `easy_test_instances/`: Benchmark instances that are solvable by SOTA human-designed solvers.
- * `hard_test_instances/`: Instances that remain computationally intensive or lack known optimal solutions.
- * `valid_instances/` *(if applicable)*: Additional instances for validation or development.
- * `config.py`: Metadata about instance format, solver settings, and reference solutions.

  ---
 
@@ -54,20 +57,20 @@ Each task folder contains:

  The benchmark currently includes the following problems:

- * **MIS** – Maximum Independent Set
- * **MDS** – Minimum Dominating Set
- * **TSP** – Traveling Salesman Problem
- * **CVRP** – Capacitated Vehicle Routing Problem
- * **CFLP** – Capacitated Facility Location Problem
- * **CPMP** – Capacitated p-Median Problem
- * **FJSP** – Flexible Job-shop Scheduling Problem
- * **STP** – Steiner Tree Problem

  Each task includes:

- * Easy and hard test sets with varying difficulty and practical relevance
- * Training and validation instances where applicable, generated using problem-specific generators
- * Reference results for classical and ML-based solvers

  ---
 
@@ -75,10 +78,10 @@ Each task includes:

  Instances are sourced from a mix of:

- * Public repositories (e.g., [TSPLib](http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/), [CVRPLib](http://vrp.galgos.inf.puc-rio.br/))
- * DIMACS and PACE Challenges
- * Synthetic instance generators used in prior ML and optimization research
- * Manual curation from recent SOTA solver evaluation benchmarks

  For tasks lacking open benchmarks, we include high-quality synthetic instances aligned with real-world difficulty distributions.
 
@@ -114,6 +117,60 @@ score = eval_func(**instance, **solution)
  print("Evaluation score:", score)
  ```
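A self-contained toy version of the snippet above, with a stand-in `eval_func` (a simple TSP tour-length scorer; the benchmark's real `eval_func` is task-specific and comes with each task):

```python
import math

# Hypothetical stand-in for a task's eval_func (illustration only):
# scores a TSP solution by total Euclidean tour length, lower is better.
def eval_func(nodes, tour):
    return sum(
        math.dist(nodes[tour[i]], nodes[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

instance = {"nodes": [(0, 0), (3, 0), (0, 4)]}
solution = {"tour": [0, 1, 2]}

score = eval_func(**instance, **solution)
print("Evaluation score:", score)  # Evaluation score: 12.0
```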
  ---

  ## Citation
@@ -134,4 +191,4 @@ If you use **FrontierCO** in your research or applications, please cite the following paper:

  This dataset is released under the MIT License. Refer to the `LICENSE` file for details.

- ---
 
  ---
+ license: mit
  task_categories:
  - text-generation
  tags:
  - code
+ - benchmarks
+ - combinatorial-optimization
+ - llm-agents
  ---

  # FrontierCO: Benchmark Dataset for Frontier Combinatorial Optimization

+ [Paper: CO-Bench: Benchmarking Language Model Agents in Algorithm Search for Combinatorial Optimization](https://huggingface.co/papers/2504.04310) | [Code: CO-Bench GitHub Repository](https://github.com/sunnweiwei/CO-Bench)
+
  ## Overview

  **FrontierCO** is a curated benchmark suite for evaluating ML-based solvers on large-scale and real-world **Combinatorial Optimization (CO)** problems. The benchmark spans **8 classical CO problems** across **5 application domains**, providing both training and evaluation instances specifically designed to test the frontier of ML and LLM capabilities in solving NP-hard problems.

+ Code for evaluating agents: [https://github.com/sunnweiwei/CO-Bench?tab=readme-ov-file#evaluation-on-frontierco](https://github.com/sunnweiwei/CO-Bench?tab=readme-ov-file#evaluation-on-frontierco)

+ Code for running classical solvers, generating training data, and evaluating neural solvers: [https://github.com/sunnweiwei/FrontierCO](https://github.com/sunnweiwei/FrontierCO)

  ---
 
 
  Each task folder contains:

+ * `easy_test_instances/`: Benchmark instances that are solvable by SOTA human-designed solvers.
+ * `hard_test_instances/`: Instances that remain computationally intensive or lack known optimal solutions.
+ * `valid_instances/` *(if applicable)*: Additional instances for validation or development.
+ * `config.py`: Metadata about instance format, solver settings, and reference solutions.

  ---
 
 
  The benchmark currently includes the following problems:

+ * **MIS** – Maximum Independent Set
+ * **MDS** – Minimum Dominating Set
+ * **TSP** – Traveling Salesman Problem
+ * **CVRP** – Capacitated Vehicle Routing Problem
+ * **CFLP** – Capacitated Facility Location Problem
+ * **CPMP** – Capacitated p-Median Problem
+ * **FJSP** – Flexible Job-shop Scheduling Problem
+ * **STP** – Steiner Tree Problem

  Each task includes:

+ * Easy and hard test sets with varying difficulty and practical relevance
+ * Training and validation instances where applicable, generated using problem-specific generators
+ * Reference results for classical and ML-based solvers

  ---
 
 
  Instances are sourced from a mix of:

+ * Public repositories (e.g., [TSPLIB](http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/), [CVRPLib](http://vrp.galgos.inf.puc-rio.br/))
+ * DIMACS and PACE Challenges
+ * Synthetic instance generators used in prior ML and optimization research
+ * Manual curation from recent SOTA solver evaluation benchmarks

  For tasks lacking open benchmarks, we include high-quality synthetic instances aligned with real-world difficulty distributions.
 
 
  print("Evaluation score:", score)
  ```

+ <details>
+ <summary><strong>Using Agents on Custom Problems</strong></summary>
+
+ Step 1: Include a concise description and a solve template. For example:
+
+ ```python
+ problem_description = '''The Traveling Salesman Problem (TSP) is a classic combinatorial optimization problem where, given a set of cities with known pairwise distances, the objective is to find the shortest possible tour that visits each city exactly once and returns to the starting city. More formally, given a complete graph G = (V, E) with vertices V representing cities and edges E with weights representing distances, we seek to find a Hamiltonian cycle (a closed path visiting each vertex exactly once) of minimum total weight.
+
+ Implement in Solve Function
+
+ def solve(**kwargs):
+     """
+     Solve a TSP instance.
+
+     Args:
+         - nodes (list): List of (x, y) coordinates representing cities in the TSP problem
+                         Format: [(x1, y1), (x2, y2), ..., (xn, yn)]
+
+     Returns:
+         dict: Solution information with:
+             - 'tour' (list): List of node indices representing the solution path
+                              Format: [0, 3, 1, ...] where numbers are indices into the nodes list
+     """
+
+     return {
+         'tour': [],
+     }
+ '''
+ ```
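For illustration, the template can be filled in with a simple heuristic; here is a greedy nearest-neighbour sketch (illustrative only, not a reference solver for the benchmark):

```python
import math

# Illustrative only: a greedy nearest-neighbour heuristic plugged into the
# solve() template above (not the benchmark's reference solver).
def solve(**kwargs):
    nodes = kwargs["nodes"]
    unvisited = set(range(1, len(nodes)))
    tour = [0]  # start at the first city
    while unvisited:
        last = nodes[tour[-1]]
        # Pick the closest remaining city and append it to the tour.
        nearest = min(unvisited, key=lambda j: math.dist(last, nodes[j]))
        tour.append(nearest)
        unvisited.remove(nearest)
    return {"tour": tour}

solution = solve(nodes=[(0, 0), (2, 0), (0, 1), (2, 1)])
print(solution)  # {'tour': [0, 2, 3, 1]}
```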
149
+ Step 2: Define the agent
150
+ ```python
151
+ from agents import GreedyRefine
152
+ agent = GreedyRefine(
153
+ problem_description=problem_description,
154
+ timeout=10,
155
+ model='openai/o3-mini')
156
+ ```
157
+ Step 3: Define the `evaluate` function and run the loop. Use the evaluate function to get results on the data, and iteratively improve the solution based on feedback:
158
+ ```python
159
+ evaluate = ... # Define evaluate() to return score (float) and feedback (str)
160
+ # Run for 64 iterations
161
+ for it in range(64):
162
+ code = agent.step()
163
+ dev_score, dev_feedback = evaluate(code) # Define evaluate() to return score (float) and feedback (str)
164
+ agent.feedback(dev_score, dev_feedback)
165
+
166
+ # Get the final soltuion
167
+ code = agent.finalize()
168
+ print(code)
169
+ ```
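A minimal sketch of such an `evaluate` function, assuming a single toy TSP dev instance scored by tour length (the instance, scorer, and candidate code below are all hypothetical; real evaluators are task-specific):

```python
import math

def evaluate(code):
    """Hypothetical dev-set evaluator: exec the candidate code, run its
    solve() on one toy instance, and return (score, feedback)."""
    namespace = {}
    exec(code, namespace)
    nodes = [(0, 0), (0, 1), (1, 1), (1, 0)]
    tour = namespace["solve"](nodes=nodes)["tour"]
    # Score = total Euclidean length of the closed tour (lower is better).
    length = sum(
        math.dist(nodes[tour[i]], nodes[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )
    return length, f"total tour length {length:.2f} on 1 dev instance"

# A trivial candidate program, as the agent might produce it
candidate = (
    "def solve(**kwargs):\n"
    "    return {'tour': list(range(len(kwargs['nodes'])))}\n"
)
score, feedback = evaluate(candidate)
print(score)  # 4.0 (perimeter of the unit square)
```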
+ </details>
+
+ *Docker environment for sandboxed agent execution and solution evaluation: coming soon.*
+
  ---

  ## Citation

  This dataset is released under the MIT License. Refer to the `LICENSE` file for details.

+ ---