---
license: mit
task_categories:
- text-generation
tags:
- code
- benchmarks
- combinatorial-optimization
- llm-agents
---

# FrontierCO: Benchmark Dataset for Frontier Combinatorial Optimization

[Paper: CO-Bench: Benchmarking Language Model Agents in Algorithm Search for Combinatorial Optimization](https://huggingface.co/papers/2504.04310) | [Code: CO-Bench GitHub Repository](https://github.com/sunnweiwei/CO-Bench)

## Overview

**FrontierCO** is a curated benchmark suite for evaluating ML-based solvers on large-scale and real-world **Combinatorial Optimization (CO)** problems. The benchmark spans **8 classical CO problems** across **5 application domains**, providing both training and evaluation instances specifically designed to test the frontier of ML and LLM capabilities in solving NP-hard problems.

Code for evaluating agents: [https://github.com/sunnweiwei/CO-Bench?tab=readme-ov-file#evaluation-on-frontierco](https://github.com/sunnweiwei/CO-Bench?tab=readme-ov-file#evaluation-on-frontierco)

Code for running classical solvers, generating training data, and evaluating neural solvers: [https://github.com/sunnweiwei/FrontierCO](https://github.com/sunnweiwei/FrontierCO)

---

## Dataset Structure

Each subdirectory corresponds to a specific CO task:

```
FrontierCO/
├── CFLP/
│   ├── easy_test_instances/
│   ├── hard_test_instances/
│   ├── valid_instances/
│   └── config.py
├── CPMP/
├── CVRP/
├── FJSP/
├── MIS/
├── MDS/
├── STP/
├── TSP/
└── ...
```

Each task folder contains:

*   `easy_test_instances/`: Benchmark instances that are solvable by SOTA human-designed solvers.
*   `hard_test_instances/`: Instances that remain computationally intensive or lack known optimal solutions.
*   `valid_instances/` *(if applicable)*: Additional instances for validation or development.
*   `config.py`: Metadata about the instance format, solver settings, and reference solutions, exposing helpers such as `load_data` and `eval_func` (sketched below).
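
The exact contents of `config.py` differ per task, but based on the usage shown later in this README each file exposes at least `load_data` and `eval_func`. An illustrative sketch (not the real implementation, and with placeholder field names):

```python
# config.py -- illustrative sketch; the real files are task-specific

def load_data(path: str) -> dict:
    """Parse an instance file into a dict of keyword arguments for a solver."""
    with open(path) as f:
        raw = f.read()
    # ... task-specific parsing of `raw` ...
    return {'nodes': []}  # keys depend on the task

def eval_func(**kwargs) -> float:
    """Score a solution; receives the instance fields plus the solution fields."""
    # ... task-specific feasibility checks and objective computation ...
    return 0.0
```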

---

## Tasks Covered

The benchmark currently includes the following problems:

*   **MIS** – Maximum Independent Set
*   **MDS** – Minimum Dominating Set
*   **TSP** – Traveling Salesman Problem
*   **CVRP** – Capacitated Vehicle Routing Problem
*   **CFLP** – Capacitated Facility Location Problem
*   **CPMP** – Capacitated p-Median Problem
*   **FJSP** – Flexible Job-shop Scheduling Problem
*   **STP** – Steiner Tree Problem

Each task includes:

*   Easy and hard test sets with varying difficulty and practical relevance
*   Training and validation instances where applicable, generated using problem-specific generators
*   Reference results for classical and ML-based solvers

---

## Data Sources

Instances are sourced from a mix of:

*   Public repositories (e.g., [TSPLib](http://comopt.ifi.uni-heidelberg.de/software/TSPLib95/), [CVRPLib](http://vrp.galgos.inf.puc-rio.br/))
*   DIMACS and PACE Challenges
*   Synthetic instance generators used in prior ML and optimization research
*   Manual curation from recent SOTA solver evaluation benchmarks

For tasks lacking open benchmarks, we include high-quality synthetic instances aligned with real-world difficulty distributions.
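
As a flavor of what such generators do, here is a minimal sketch of a uniform random Euclidean TSP generator (an illustrative assumption on our part; the actual generators live in the [FrontierCO repository](https://github.com/sunnweiwei/FrontierCO)):

```python
import random

def generate_tsp_instance(n: int, seed: int = 0) -> dict:
    """Sample n cities uniformly in the unit square -- a common synthetic TSP scheme."""
    rng = random.Random(seed)
    nodes = [(rng.random(), rng.random()) for _ in range(n)]
    return {'nodes': nodes}  # matches the solve(**kwargs) convention used below

instance = generate_tsp_instance(1000, seed=42)
```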

---

## Usage

To use this dataset, clone the repository and select the task of interest. Each `config.py` file documents the format and how to parse or evaluate the instances.

```bash
git clone https://huggingface.co/datasets/CO-Bench/FrontierCO
cd FrontierCO/CFLP
```

### Load a data instance
```python
from config import load_data
instance = load_data('easy_test_instances/i1000_1.plc')
print(instance)
```

### Generate a solution
```python
# Your solution generation code goes here. For example, with a solver
# that accepts the instance fields as keyword arguments:
solution = my_solver_func(**instance)
```

### Evaluate a solution
```python
from config import eval_func
score = eval_func(**instance, **solution)
print("Evaluation score:", score)
```
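
Putting the three steps together, a complete pass over a test split might look like this (a sketch: `my_solver_func` stands in for your own solver, and the `.plc` extension follows the CFLP example above):

```python
import glob
from config import load_data, eval_func

scores = []
for path in sorted(glob.glob('easy_test_instances/*.plc')):
    instance = load_data(path)
    solution = my_solver_func(**instance)  # plug in your solver here
    scores.append(eval_func(**instance, **solution))

print('Mean score over', len(scores), 'instances:', sum(scores) / len(scores))
```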

<details>
<summary><strong>Using Agents on Custom Problems</strong></summary>

Step 1: Write a concise problem description together with a `solve` function template. For example:

```python
problem_description = '''The Traveling Salesman Problem (TSP) is a classic combinatorial optimization problem where, given a set of cities with known pairwise distances, the objective is to find the shortest possible tour that visits each city exactly once and returns to the starting city. More formally, given a complete graph G = (V, E) with vertices V representing cities and edges E with weights representing distances, we seek to find a Hamiltonian cycle (a closed path visiting each vertex exactly once) of minimum total weight.

Implement your algorithm in the following solve function:

def solve(**kwargs):
    """
    Solve a TSP instance.

    Args:
        - nodes (list): List of (x, y) coordinates representing cities in the TSP problem
                      Format: [(x1, y1), (x2, y2), ..., (xn, yn)]

    Returns:
        dict: Solution information with:
            - 'tour' (list): List of node indices representing the solution path
                           Format: [0, 3, 1, ...] where numbers are indices into the nodes list
    """

    return {
        'tour': [],
    }
'''
```
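To make the template concrete, here is a minimal nearest-neighbour baseline that fills in this `solve` signature (a sketch of the kind of code an agent might produce, not a competitive solver):

```python
import math

def solve(**kwargs):
    """Greedy nearest-neighbour construction for the TSP template above."""
    nodes = kwargs['nodes']
    unvisited = set(range(1, len(nodes)))
    tour = [0]  # start from city 0
    while unvisited:
        last = nodes[tour[-1]]
        nearest = min(unvisited, key=lambda i: math.dist(last, nodes[i]))
        tour.append(nearest)
        unvisited.remove(nearest)
    return {'tour': tour}
```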
Step 2: Define the agent
```python
from agents import GreedyRefine
agent = GreedyRefine(
    problem_description=problem_description,
    timeout=10,
    model='openai/o3-mini')
```
Step 3: Define an `evaluate` function and run the refinement loop. The evaluate function scores each candidate program on development data, and the agent iteratively improves its solution based on the feedback:
```python
evaluate = ...  # Define evaluate() to return a score (float) and feedback (str); a sketch is given below
# Run for 64 iterations
for it in range(64):
    code = agent.step()
    dev_score, dev_feedback = evaluate(code)
    agent.feedback(dev_score, dev_feedback)

# Get the final solution
code = agent.finalize()
print(code)
```
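
For reference, a minimal `evaluate` for the TSP example above could execute the candidate code on a development set and score it by mean tour length (a sketch only: it assumes a `dev_instances` list of `{'nodes': [...]}` dicts, skips feasibility checks, and does no sandboxing, unlike the Docker setup mentioned below):

```python
import math

def evaluate(code: str):
    """Run the candidate solve() on dev instances; higher score = shorter tours."""
    namespace = {}
    try:
        exec(code, namespace)
        lengths = []
        for instance in dev_instances:  # assumed: list of {'nodes': [...]} dicts
            tour = namespace['solve'](**instance)['tour']
            nodes = instance['nodes']
            length = sum(math.dist(nodes[tour[i]], nodes[tour[(i + 1) % len(tour)]])
                         for i in range(len(tour)))
            lengths.append(length)
        mean_len = sum(lengths) / len(lengths)
        return -mean_len, f'Mean tour length: {mean_len:.2f}'
    except Exception as e:
        return float('-inf'), f'Execution failed: {e}'
```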
</details>

*Docker environment for sandboxed agent execution and solution evaluation: coming soon.*

---

## Citation

If you use **FrontierCO** in your research or applications, please cite the following paper:

```bibtex
@misc{feng2025comprehensive,
  title={A Comprehensive Evaluation of Contemporary ML-Based Solvers for Combinatorial Optimization},
  author={Shengyu Feng and Weiwei Sun and Shanda Li and Ameet Talwalkar and Yiming Yang},
  year={2025},
}
```

---

## License

This dataset is released under the MIT License. Refer to the `LICENSE` file for details.

---