
Operations Research Benchmark

Overview

This benchmark is adapted from ReEvo[1], which originally consists of six types of combinatorial optimization problems (COPs). We have extended the benchmark by adding a cooperative driving problem that involves complex simulation environments using SUMO. Check OR-Agent[2] and its repository for more details.

Types of Functions to Evolve

The functions to evolve are categorized into three groups:

Classical Metaheuristics (ACO / GA / GLS)

  • Ant Colony Optimization (ACO)[3]: Evolve ACO heuristic components, such as the computation of desirability and pheromone guidance.
  • Guided Local Search (GLS)[4]: Evolve the penalty heuristic that guides perturbations during GLS.
  • Genetic Algorithm (GA)[5]: Evolve GA-related operators and heuristics (the domain-specific logic within the GA pipeline, as defined by the problem wrapper).
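As an illustration of what an evolved metaheuristic component can look like, here is a toy ACO desirability heuristic for TSP. The function name and signature are hypothetical; see the problem wrappers under problems/ for the actual interfaces.

```python
import numpy as np

def heuristics(distance_matrix: np.ndarray) -> np.ndarray:
    """Toy ACO desirability heuristic for TSP: favor short edges.

    Hypothetical signature for illustration only; the benchmark's
    real interface is defined by each problem wrapper.
    """
    d = distance_matrix.copy()
    # Avoid division by zero on the diagonal (no self-loops).
    np.fill_diagonal(d, np.inf)
    return 1.0 / d
```

An evolved variant might combine distance with other problem features (e.g., node degree or savings values) instead of plain inverse distance.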

Attention Reshaping in Neural Combinatorial Optimization (POMO / LEHD)

  • Policy Optimization with Multiple Optima (POMO)[6]: A reinforcement learning training and inference framework for neural constructive solvers that exploits symmetry and uses multiple rollouts from different starting conditions to stabilize and improve solution quality. We evolve attention reshaping heuristics inserted into the neural solver (not the model weights). For POMO settings, download checkpoints from the official repository and place them in the corresponding directories (e.g., place checkpoint-3100.pt for TSP at problems/tsp_pomo/checkpoints/checkpoint-3100.pt).

  • Neural Combinatorial Optimization with Light Encoder, Heavy Decoder (LEHD)[7]: This approach shifts modeling capacity into the decoder while keeping the encoder lightweight, aiming for better generalization and scaling in constructive routing solvers. We evolve attention reshaping heuristics. For LEHD settings, download checkpoints and data from the official repository and place them in the corresponding directories.

Direct Solution Construction Heuristics

We can evolve functions that directly construct solutions. For example:

  • For online bin packing problems, evolve the function that generates priority scores for each bin; the solver then selects the bin with the highest priority.
  • For cooperative driving problems, evolve the function that generates actions for all drivers.
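The online bin packing case above can be sketched as a best-fit-style priority function. The signature below is hypothetical, for illustration: given the incoming item size and each bin's remaining capacity, it returns a score per bin, and the solver picks the highest-scoring bin.

```python
import numpy as np

def priority(item: float, bins_remain_cap: np.ndarray) -> np.ndarray:
    """Toy best-fit priority heuristic for online bin packing.

    Hypothetical signature for illustration: bins that would be left
    with the least slack after receiving the item score highest;
    bins that cannot fit the item get -inf so they are never chosen.
    """
    slack = bins_remain_cap - item
    scores = -slack                 # tighter fit -> higher priority
    scores[slack < 0] = -np.inf    # infeasible bins are excluded
    return scores
```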

Problem Details

The benchmark problems are stored at [project_root]/problems. Detailed descriptions of each problem are provided below.

Traveling Salesman Problems (TSPs)

The Traveling Salesman Problem (TSP) is a classic optimization challenge that seeks the shortest possible route for a salesman to visit each city in a list exactly once and return to the origin city.

  • TSP via Ant Colony Optimization (tsp_aco): Find the shortest path that visits all given nodes and returns to the starting node. ACO implementations are adapted from DeepACO[3].
  • TSP via Guided Local Search (tsp_gls): Use Guided Local Search (GLS)[4] to find the shortest path.
  • TSP via LEHD (tsp_lehd): Use LEHD[7] to find the shortest path.
  • TSP via POMO (tsp_pomo): Use POMO[6] to find the shortest path.
  • TSP via Constructive Routing Solvers (tsp_constructive): Evolve functions that directly construct solutions for TSP.

Capacitated Vehicle Routing Problems (CVRPs)

The Capacitated Vehicle Routing Problem (CVRP) extends the TSP by adding constraints on vehicle capacity. Each vehicle can carry a limited load, and the objective is to minimize the total distance traveled while delivering goods to various locations.

  • CVRP via Ant Colony Optimization (cvrp_aco): Solve CVRP using Ant Colony Optimization (ACO)[3].
  • CVRP via LEHD (cvrp_lehd): Solve CVRP using LEHD[7].
  • CVRP via POMO (cvrp_pomo): Solve CVRP using POMO[6].

Bin Packing Problems (BPPs)

The Bin Packing Problem requires packing objects of different volumes into a finite number of bins or containers of fixed volume to minimize the number of bins used. This problem is widely applicable in manufacturing, shipping, and storage optimization.

  • BPP via Ant Colony Optimization (bpp_offline_aco)
  • Online BPP (bpp_online) via Priority Score Heuristics

Orienteering Problems (OPs)

In the Orienteering Problem (OP), the goal is to maximize the total score collected by visiting nodes, subject to a maximum tour length constraint.

  • OP for Routing Problems via Ant Colony Optimization (op_aco)

Multiple Knapsack Problems (MKPs)

The Multiple Knapsack Problem (MKP) involves distributing a set of items, each with a given weight and value, among multiple knapsacks to maximize the total value without exceeding the capacity of any knapsack.

  • MKP via Ant Colony Optimization (mkp_aco): Solve MKP using Ant Colony Optimization (ACO)[3].

Decap Placement Problems (DPPs)

The Decap Placement Problem (DPP) is a critical hardware design optimization issue that involves finding the optimal placement of decoupling capacitors (decap) within a power distribution network (PDN) to enhance power integrity (PI). Decoupling capacitors are hardware components that help reduce power noise and ensure a stable power supply to operating integrated circuits in hardware devices such as CPUs, GPUs, and AI accelerators.

  • Decap Placement Problem (DPP) for Electronic Design Automation (EDA) Problems via Genetic Algorithm (GA)[5] (dpp_ga)

Cooperative Driving Problems (CDPs)

The Cooperative Driving Problem (CDP) is a complex optimization challenge that involves optimizing the driving behavior of multiple vehicles on a road segment.

  • Cooperative Driving Problem (CDP) (driving): Evolve functions that directly construct driving actions for each time step.

Dataset Structure

The dataset is organized under:

tsp_aco/
tsp_gls/
tsp_pomo/
tsp_lehd/
tsp_constructive/
cvrp_aco/
…
driving/

tsp for Traveling Salesman Problems; cvrp for Capacitated Vehicle Routing Problems; bpp for Bin Packing Problems; dpp for Decap Placement Problems; op for Orienteering Problems; driving for the cooperative driving problem.

Each problem includes:

  • Problem-specific data
  • Evaluation scripts
  • Configuration files

Usage

You can run the eval.py script manually:

python eval.py \
    --root_dir=<path_to_project_root> \
    --file_output_prefix=<path_to_output_file> \
    --mode=val \
    --problem_size=50

You can also use an Evaluator class that runs the eval script, like this (see [OR-Agent](https://github.com/qiliuchn/OR-Agent/blob/main/src/oragent/evaluator.py) for more details):

subprocess.run(
    ['python', 'eval.py',
     '--root_dir', '/path/to/project',
     '--file_output_prefix', '/path/to/outputs/exp1_'],
    text=True,
    timeout=self.timeout_seconds,  # seconds before the run is killed
    cwd=os.getcwd(),
    env=env,       # Python environment for the subprocess
    stdout=f,
    stderr=f,
)

Note: the Evaluator does not specify mode or problem_size. Since the Evaluator is intended to be general purpose, it assumes no knowledge of problem details. This makes it easier to add new problems: you don't need to modify the Evaluator class.

Each evaluation produces:

  • metrics: detailed performance statistics (e.g., collisions, speed, etc.)
  • features: behavioral descriptors (used in MAP-Elites)
  • score: scalar fitness value

Example:

metrics = {...}
features = (2, 0, 1, 4)
score = 12.34

The evaluation script writes to stdout in this format:

<evaluation details>

__SANDBOX_RESULT__

__METRICS_START__
<print(repr(metrics))>
__METRICS_END__

__FEATURES_START__
<print(repr(features))>
__FEATURES_END__

__SCORE_START__
<print(repr(score))>
__SCORE_END__

__SANDBOX_SUCCESS__
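One way to recover the three values from this stdout format (a sketch only; the actual Evaluator implementation may differ) is to slice the text between the sentinel markers and parse each repr with ast.literal_eval:

```python
import ast

def parse_result(stdout: str):
    """Extract metrics, features, and score from sandbox stdout.

    Sketch only: assumes each value was printed with repr() between
    its __*_START__ / __*_END__ markers, as shown above.
    """
    def between(start: str, end: str):
        chunk = stdout.split(start, 1)[1].split(end, 1)[0].strip()
        return ast.literal_eval(chunk)

    metrics = between("__METRICS_START__", "__METRICS_END__")
    features = between("__FEATURES_START__", "__FEATURES_END__")
    score = between("__SCORE_START__", "__SCORE_END__")
    return metrics, features, score
```

ast.literal_eval is used instead of eval so that only Python literals (dicts, tuples, numbers) can be parsed from the untrusted subprocess output.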

Customization

You can customize your own research problems by following the requirements below:

Command line arguments requirements

Command line arguments:

  1. root_dir: the project root directory. The eval script needs this to load data, since the script may be generated and stored in a different location to support parallelism. Default: the current working directory (os.getcwd()).
  2. file_output_prefix: a prefix for output files saved during evaluation, for inspection purposes. A prefix is used rather than just a folder name because you may want to embed, say, a solution id in the output filename. Files are saved via with open(f"{file_output_prefix}<filename>", 'w'). An absolute path is recommended. Default: '', which saves to the current working directory.
  3. mode: train or val. Default: val.
  4. problem_size. Default: 50 (note: this value differs for each problem!).
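The four arguments above could be wired into a custom eval.py with argparse, as in this minimal sketch (defaults mirror the requirements; nothing here is mandated beyond the argument names):

```python
import argparse
import os

def build_parser() -> argparse.ArgumentParser:
    """Minimal sketch of the required command-line interface."""
    parser = argparse.ArgumentParser(description="Custom problem evaluation")
    parser.add_argument("--root_dir", default=os.getcwd(),
                        help="project root; used to locate data files")
    parser.add_argument("--file_output_prefix", default="",
                        help="prefix for files written during evaluation")
    parser.add_argument("--mode", choices=["train", "val"], default="val")
    parser.add_argument("--problem_size", type=int, default=50,
                        help="default varies per problem")
    return parser
```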

Output requirements

The eval script should print metrics, features, and score:

  1. metrics: a dict mapping test names (str) to metric dicts, or mapping performance index names to values. The metrics dict is used for user and AI agent inspection. It is optional, but we strongly recommend preparing detailed metrics for each problem, since they help the LLM better understand solution performance!
  2. features: a tuple of ints representing the features of the solution; it is used to index solutions in the solution database. Features are generally derived from metrics, possibly with some additions, but the Evaluator does not assume any conversion method; you need to specify it yourself. Features may be set to None if you don't want to specify them; in that case MAP-Elites is disabled.
  3. score: a float used as the fitness score. It is required. It is usually derived from metrics, but the Evaluator does not assume any conversion method; you need to specify it yourself.

Example: Assume the following variables are generated during eval script:

metrics = {
    "critical_ttc_count": 28, 
    "collisions": 0, 
    "emergencyStops": 0, 
    "emergencyBraking": 4, 
    "teleports": 0, 
    "avg_speed": 12.51, 
    "speed_variance": 16.22
}
features = (2, 0, 1, 4)
score = 12.34

Then stdout should be:

...
__SANDBOX_RESULT__

__METRICS_START__
<print(repr(metrics))>
__METRICS_END__

__FEATURES_START__
<print(repr(features))>
__FEATURES_END__

__SCORE_START__
<print(repr(score))>
__SCORE_END__

__SANDBOX_SUCCESS__
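The corresponding print calls at the end of the eval script could look like this sketch (the helper name is hypothetical; only the sentinel strings and repr() output matter):

```python
def emit_result(metrics, features, score) -> None:
    """Print the sandbox sentinels the Evaluator expects on stdout."""
    print("__SANDBOX_RESULT__")
    print("__METRICS_START__")
    print(repr(metrics))
    print("__METRICS_END__")
    print("__FEATURES_START__")
    print(repr(features))
    print("__FEATURES_END__")
    print("__SCORE_START__")
    print(repr(score))
    print("__SCORE_END__")
    print("__SANDBOX_SUCCESS__")
```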

Intended Use

This dataset is intended for:

  • Research in automated algorithm design
  • Benchmarking heuristic optimization methods
  • Studying LLM-based scientific discovery systems

Limitations

  • Some problems rely on external simulators (e.g., SUMO)
  • Performance depends on evaluation configuration
  • Not all problem domains are equally represented

References

[1] Ye, H., Wang, J., Cao, Z., Berto, F., Hua, C., Kim, H., Park, J., & Song, G. (2024). Reevo: Large language models as hyper-heuristics with reflective evolution. Advances in Neural Information Processing Systems, 37, 43571–43608.

[2] Liu, Q., Hao, R., Li, C., & Ma, W. (2026). OR-Agent: Bridging Evolutionary Search and Structured Research for Automated Algorithm Discovery. https://arxiv.org/abs/2602.13769.

[3] Ye, H., Wang, J., Cao, Z., Liang, H., & Li, Y. (2023). DeepACO: Neural-enhanced ant systems for combinatorial optimization. Advances in Neural Information Processing Systems, 36, 43706–43728.

[4] Voudouris, C., & Tsang, E. (1999). Guided local search and its application to the traveling salesman problem. European Journal of Operational Research, 113(2), 469–499.

[5] Park, H., Kim, H., Kim, H., Park, J., Choi, S., Kim, J., Son, K., Suh, H., Kim, T., Ahn, J., & Kim, J. (2023, October). Versatile genetic algorithm-bayesian optimization (GA-BO) bi-level optimization for decoupling capacitor placement. In 2023 IEEE 32nd Conference on Electrical Performance of Electronic Packaging and Systems (EPEPS) (pp. 1–3). IEEE.

[6] Kwon, Y. D., Choo, J., Kim, B., Yoon, I., Gwon, Y., & Min, S. (2020). POMO: Policy optimization with multiple optima for reinforcement learning. Advances in Neural Information Processing Systems, 33, 21188–21198.

[7] Luo, F., Lin, X., Liu, F., Zhang, Q., & Wang, Z. (2023). Neural combinatorial optimization with heavy decoder: Toward large scale generalization. Advances in Neural Information Processing Systems, 36, 8845–8864.
