# ContrArc-MILP: Learning-Oriented Binary Integer Programming Dataset

A dataset of 2,000 binary integer linear programming (BILP) instances derived from contract-based architecture selection problems (ContrArc). It is designed for training and evaluating GNN-based predict-and-search solvers under controlled distribution shifts.
## Dataset Structure

Two problem domains, each with four splits:

| Split | EPN | RPL | Description |
|---|---|---|---|
| `train` | 600 | 600 | Training set (default constraints, moderate size) |
| `test_id` | 150 | 150 | In-distribution test (same distribution as train) |
| `test_large` | 100 | 100 | Scale shift (~2x larger instances, same constraints) |
| `test_ood` | 150 | 150 | Structural shift (tighter coupling constraints) |
Cross-domain generalization can be evaluated by training on one domain and testing on the other (e.g., train on epn/train, evaluate on rpl/test_id).
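Instance files follow the `<domain>/<split>/<index>.npz` layout with zero-padded five-digit indices (e.g. `epn/train/00000.npz`), so paths can be built programmatically. A minimal helper sketch, assuming that naming scheme:

```python
def instance_path(domain: str, split: str, idx: int) -> str:
    """Build the relative path of one instance archive in the repository.

    Assumes the layout <domain>/<split>/<index>.npz with zero-padded
    five-digit indices, e.g. epn/train/00000.npz.
    """
    if domain not in {"epn", "rpl"}:
        raise ValueError(f"unknown domain: {domain}")
    if split not in {"train", "test_id", "test_large", "test_ood"}:
        raise ValueError(f"unknown split: {split}")
    return f"{domain}/{split}/{idx:05d}.npz"
```

For a cross-domain experiment, the same helper addresses both training and evaluation files, e.g. `instance_path("epn", "train", 0)` and `instance_path("rpl", "test_id", 0)`.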
## Domains
- EPN (Electric Power Network): Architecture selection for aircraft power distribution networks with generators, AC/DC buses, rectifiers, and loads.
- RPL (Reconfigurable Production Line): Configuration of reconfigurable manufacturing production lines with conveyors and machines.
## Instance Statistics
| Split | N | Median vars | Median constraints | Median SCIP time | Feasibility | Optimality |
|---|---|---|---|---|---|---|
| epn/train | 600 | 13,308 | 12,740 | 3.0s | 100% | 100% |
| epn/test_id | 150 | 13,328 | 12,884 | 4.7s | 100% | 100% |
| epn/test_large | 100 | 27,072 | 19,832 | 11.9s | 100% | 100% |
| epn/test_ood | 150 | 14,044 | 13,308 | 4.8s | 100% | 100% |
| rpl/train | 600 | 14,990 | 10,480 | 2.2s | 100% | 92% |
| rpl/test_id | 150 | 14,794 | 10,480 | 2.8s | 100% | 91% |
| rpl/test_large | 100 | 33,540 | 16,028 | 4.5s | 93% | 82% |
| rpl/test_ood | 150 | 16,656 | 11,580 | 3.2s | 100% | 97% |
SCIP times were measured with a 300-second time limit. Optimality means a final gap below 1e-6.
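The feasibility and optimality rates above can be recomputed from the per-split SCIP results. A sketch, assuming each record in `results.json` carries `scip_s` (solve time in seconds), `scip_gap` (final MIP gap), and `feasible` (bool) fields — field names are assumptions, not guaranteed by this card:

```python
from statistics import median


def summarize(records):
    """Aggregate SCIP result records into split-level statistics.

    Assumes each record has 'scip_s', 'scip_gap', and 'feasible' keys
    (an assumed schema for results.json).
    """
    return {
        "n": len(records),
        "median_scip_s": median(r["scip_s"] for r in records),
        "feasibility": sum(r["feasible"] for r in records) / len(records),
        # Optimality = final gap below 1e-6, matching the note above.
        "optimality": sum(r["scip_gap"] < 1e-6 for r in records) / len(records),
    }


# Tiny illustrative input (not real dataset values)
demo = [
    {"scip_s": 3.0, "scip_gap": 0.0, "feasible": True},
    {"scip_s": 5.0, "scip_gap": 0.02, "feasible": True},
    {"scip_s": 1.0, "scip_gap": 0.0, "feasible": True},
]
stats = summarize(demo)
```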
## File Format

Each instance is stored as a compressed NumPy archive (`.npz`):

```python
import gzip

import numpy as np

data = np.load("epn/train/00000.npz", allow_pickle=True)

# Gzip-compressed LP file content (CPLEX LP format)
lp_string = gzip.decompress(bytes(data["lp_gz"])).decode()

# Optimal binary solution vector from Gurobi
solution = data["solution"]  # shape: (n_vars,), dtype: float32, values in {0, 1}

# Optimal objective value
obj_val = float(data["obj_val"])
```
Each split directory also contains:

- `manifest.json`: Per-instance metadata (`n_vars`, `n_constrs`, `obj_val`)
- `results.json`: SCIP solve results (solve time, gap, feasibility)
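The two files describe the same instances, so they can be joined per instance. A hedged sketch, assuming both deserialize to lists of records sharing an `idx` key (the exact serialization and the SCIP field names are assumptions):

```python
import json


def merge_metadata(manifest, results):
    """Join per-instance metadata with SCIP results on a shared 'idx' key.

    Assumes both inputs are lists of dicts with an 'idx' field; the exact
    on-disk layout of manifest.json/results.json is an assumption.
    """
    by_idx = {r["idx"]: dict(r) for r in manifest}
    for r in results:
        by_idx[r["idx"]].update(r)
    return [by_idx[i] for i in sorted(by_idx)]


# Tiny illustrative records (values are not real dataset entries)
manifest = json.loads('[{"idx": 0, "file": "00000.npz", "n_vars": 12960}]')
results = json.loads('[{"idx": 0, "scip_s": 3.0, "scip_gap": 0.0, "feasible": true}]')
merged = merge_metadata(manifest, results)
```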
## Distribution Shift Design

The dataset tests three types of generalization:

1. **In-distribution** (`test_id`): Same component ranges and constraint rules as training. Baseline for model performance.
2. **Scale generalization** (`test_large`): ~1.5-2x larger instances with the same constraint structure. Tests whether learned heuristics transfer to bigger problems.
3. **Structural generalization** (`test_ood`): Same instance sizes but with tighter coupling constraints (cross-tag composition rules, implication constraints). Tests robustness to constraint distribution shift.
4. **Cross-domain generalization** (implicit): Train on EPN, evaluate on RPL (or vice versa). The domains have different component types and connectivity patterns.
## Problem Characteristics
All instances are pure binary (0/1) integer programs with:
- Linear objective (minimization)
- Linear constraints (equalities and inequalities)
- Derived from Gurobi models with AND, OR, and indicator general constraints, linearized via big-M formulation
The problems encode architecture selection: choosing components, their implementations, and connection mappings to minimize total cost while satisfying structural and contractual constraints.
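To make the linearization step concrete, here is the standard textbook encoding of a binary AND and a big-M indicator constraint — a generic sketch, not necessarily the exact constraints ContrArc emits:

```python
from itertools import product


def and_linearization_holds(x, y, z):
    """Standard linearization of z = x AND y over binaries:
    z <= x,  z <= y,  z >= x + y - 1."""
    return z <= x and z <= y and z >= x + y - 1


# The three inequalities admit exactly the assignments with z == x * y.
assert all(
    and_linearization_holds(x, y, z) == (z == x * y)
    for x, y, z in product((0, 1), repeat=3)
)


def indicator_bigM(y, lhs, rhs, M):
    """Big-M form of the indicator constraint  y = 1  ->  lhs <= rhs:
    lhs <= rhs + M * (1 - y), where M bounds the worst-case violation."""
    return lhs <= rhs + M * (1 - y)
```

When `y = 0` the big-M inequality is slack (satisfied for any `lhs` up to `rhs + M`); when `y = 1` it collapses to `lhs <= rhs`, enforcing the guarded constraint.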
## Intended Use
This dataset is designed for:
- Training GNN-based MILP solvers (e.g., predict-and-search, Neural Diving)
- Evaluating generalization of learned combinatorial optimization methods
- Benchmarking under controlled distribution shifts
## Generation
Instances are generated using ContrArc, a contract-based methodology for cyber-physical system architecture exploration that converts architecture selection into MILP using assume-guarantee contracts and subgraph isomorphism (Xiao et al., DATE 2024). Instances are generated with randomized component counts, library sizes, and constraint configurations. Optimal solutions are obtained via Gurobi. SCIP solve times are provided as reference solver baselines.
## Citation

If you use this dataset, please cite:

```bibtex
@inproceedings{xiao2024contrarc,
  title={Efficient Exploration of Cyber-Physical System Architectures Using Contracts and Subgraph Isomorphism},
  author={Xiao, Yifeng and Oh, Chanwook and Lora, Michele and Nuzzo, Pierluigi},
  booktitle={2024 Design, Automation \& Test in Europe Conference \& Exhibition (DATE)},
  pages={1--6},
  year={2024},
  doi={10.23919/DATE58400.2024.10546764}
}
```