# Introduction to Datasets

Presolved data is stored in `./instances`. After the datasets are set up, the folder structure looks as follows:

```bash
instances/
    MIPLIB/            -> 1065 instances
    set_cover/         -> 3994 instances
    independent_set/   -> 1604 instances
    nn_verification/   -> 3104 instances
    load_balancing/    -> 2286 instances
```
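After downloading, the layout can be sanity-checked by counting the instance files in each folder. A minimal sketch (the `count_instances` helper is illustrative; the `.mps.gz`/`.lp` extensions follow the problem formats described later in this document):

```python
# Sketch: verify the dataset setup by counting instance files per folder.
from pathlib import Path


def count_instances(root: str = "instances") -> dict:
    """Return {folder_name: number_of_instance_files} for each dataset."""
    counts = {}
    for sub in sorted(Path(root).iterdir()):
        if sub.is_dir():
            counts[sub.name] = sum(
                1 for f in sub.iterdir()
                if f.name.endswith((".mps.gz", ".lp"))
            )
    return counts


if __name__ == "__main__":
    for name, n in count_instances().items():
        print(f"{name}: {n} instances")
```

The printed counts should match the numbers in the tree above.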

### Dataset Description

#### MIPLIB

A heterogeneous dataset from [MIPLIB 2017](https://miplib.zib.de/), a well-established benchmark for evaluating MILP solvers. It includes a diverse set of mixed-integer programming (MIP) instances known for their computational difficulty.

#### Set Covering

This dataset consists of instances of the classic Set Covering Problem, taken from [learn2branch](https://github.com/ds4dm/learn2branch/tree/master). Each instance requires finding the minimum number of sets that cover all elements of a universe, formulated as a MIP.

#### Maximum Independent Set

This dataset addresses the Maximum Independent Set Problem, also taken from [learn2branch](https://github.com/ds4dm/learn2branch/tree/master). Each instance is modeled as a MIP with the objective of maximizing the size of the independent set.

#### NN Verification

Verifying whether a neural network is robust to input perturbations can be posed as a MIP; this “Neural Network Verification” dataset consists of such instances. The MIP formulation is described in [On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models (Gowal et al., 2018)](https://arxiv.org/abs/1810.12715). Each input on which to verify the network gives rise to a different MIP.

#### Load Balancing

This dataset is from the [NeurIPS 2021 ML4CO Competition](https://github.com/ds4dm/ml4co-competition). The problem deals with apportioning workloads such that the apportionment is robust to any single worker's failure. Each instance is modeled as a MILP using a bin-packing formulation with apportionment.

### Dataset Splitting

Each dataset was split into a training set $D_{\text{train}}$ and a testing set $D_{\text{test}}$ with an approximate 80-20 split. Moreover, the split is stratified by solving time and "optimality", so that the proportion of runs reaching optimality under each parameter setting is similar in the training and testing sets. This ensures a balanced representation of both temporal variation and the highest levels of parameter efficiency across the partitions.

To split the datasets and create the folds for cross-validation, run

```bash
python extract_feature/split_fold.py \
    --dataset_name "your_dataset_name" \
    --time_path "/your/path/to/solving times" \
    --feat_path "/your/path/to/features"
```

## Handcrafted Feature Extraction

**Folder:** `extract_feature`

**Problem format:** `your_file.mps.gz` or `your_file.lp`

**Log format:** `your_file.log`

1. Static feature extraction for MIP problems: run

```bash
python extract_feature/extract_problem.py \
    --problem_folder "/your/path/to/instances" \
    --dataset_name "your_dataset_name"
```
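For intuition, static features of this kind typically summarize a problem's size and sparsity. An illustrative sketch (the `static_features` helper and its feature names are assumptions; the script's actual feature set may differ):

```python
# Sketch: simple static features of a MIP, computed from a dense
# constraint matrix A and a mask marking which variables are integer.
def static_features(A, integer_mask):
    """Return basic size/sparsity statistics for a constraint matrix."""
    n_rows = len(A)
    n_cols = len(A[0]) if A else 0
    nnz = sum(1 for row in A for v in row if v != 0)
    return {
        "n_constraints": n_rows,
        "n_variables": n_cols,
        "density": nnz / (n_rows * n_cols) if n_rows and n_cols else 0.0,
        "frac_integer": sum(integer_mask) / n_cols if n_cols else 0.0,
    }
```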

2. Extract other features from COPT solution log: run

```bash
python extract_feature/extract_log_feature.py \
    --log_folder "/your/path/to/solving logs" \
    --dataset_name "your_dataset_name"
```

3. Feature combination and preprocessing: run

```bash
python extract_feature/combine.py \
    --dataset_name "your_dataset_name"
```

4. Label extraction from COPT solution log: run

```bash
python extract_feature/extract_time.py \
    --log_folder "/your/path/to/solving logs" \
    --dataset_name "your_dataset_name"
```
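The four steps above can be chained into a single driver script. A sketch using `subprocess` (the `run_pipeline` helper and its `dry_run` flag are illustrative additions, not part of the repository; the commands themselves mirror the ones documented above):

```python
# Sketch: run the four feature/label extraction steps in order.
import subprocess


def run_pipeline(dataset, problem_folder, log_folder, dry_run=False):
    """Run extract_problem, extract_log_feature, combine, extract_time.

    With dry_run=True, return the command lists instead of executing them.
    """
    steps = [
        ["python", "extract_feature/extract_problem.py",
         "--problem_folder", problem_folder, "--dataset_name", dataset],
        ["python", "extract_feature/extract_log_feature.py",
         "--log_folder", log_folder, "--dataset_name", dataset],
        ["python", "extract_feature/combine.py",
         "--dataset_name", dataset],
        ["python", "extract_feature/extract_time.py",
         "--log_folder", log_folder, "--dataset_name", dataset],
    ]
    if dry_run:
        return steps
    for cmd in steps:
        subprocess.run(cmd, check=True)  # stop on the first failing step
```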