CO-Bench committed
Commit 68beaf6 · verified · 1 Parent(s): 1ead477

Update README.md

Files changed (1): README.md (+22 -2)
README.md CHANGED
@@ -85,9 +85,29 @@ For tasks lacking open benchmarks, we include high-quality synthetic instances a
 To use this dataset, clone the repository and select the task of interest. Each `config.py` file documents the format and how to parse or evaluate the instances.
 
 ```bash
-git clone https://huggingface.co/datasets/<your-username>/FrontierCO
+git clone https://huggingface.co/datasets/CO-Bench/FrontierCO
 cd FrontierCO/CFLP
-python config.py # example: parse instances or evaluate solver
+```
+
+Load a data instance
+```python
+from config import load_data
+instance = load_data('easy_test_instances/i1000_1.plc')
+print(instance)
+```
+
+Generate a solution
+```python
+# Your solution generation code goes here.
+# For example:
+solution = my_solver(instance)
+```
+
+### Evaluate a solution
+```python
+from config import eval_func
+score = eval_func(**instance, **solution)
+print("Evaluation score:", score)
 ```
 
 ---
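The load → solve → evaluate loop added in this commit can be sketched end-to-end as follows. This is a minimal illustration only: it assumes `load_data` returns a dict of instance fields and that a solution is a dict whose keys `eval_func` accepts; the CFLP field names (`capacities`, `demands`) and the toy solver below are invented stand-ins, not FrontierCO's actual schema.

```python
# Hedged sketch of the README's workflow. `load_data` and `eval_func`
# come from each task's config.py; the instance shape below is an
# assumption made for illustration, not the real CFLP format.

def my_solver(instance):
    """Toy placeholder solver: open every facility and assign each
    customer to facility 0. A real solver would minimize cost while
    respecting capacity constraints."""
    n_facilities = len(instance["capacities"])
    n_customers = len(instance["demands"])
    return {
        "open_facilities": [1] * n_facilities,
        "assignment": [0] * n_customers,
    }

# Stand-in for an instance as returned by config.load_data (assumed dict).
instance = {"capacities": [10, 10], "demands": [3, 4, 2]}

solution = my_solver(instance)
print(solution)
```

Keeping both the instance and the solution as dicts is what makes the README's `eval_func(**instance, **solution)` call work: each dict is unpacked into keyword arguments, so the solver's output keys must match the parameter names `eval_func` expects for that task.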