Junbo Jacob Lian committed on
Commit e406ae1 · 1 Parent(s): f9db377

Add dual prompt formats: schema-based and data-embedded

- prompt_schema: Data loaded at runtime (scalable for large datasets)
- prompt_full: Full JSON embedded in prompt (compatible with other benchmarks)

This enables fair comparison with NL4Opt, MAMO, IndustryOR while maintaining
scalability for production scenarios.
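The schema-based/data-embedded split described above can be illustrated offline. The sketch below uses a hypothetical `mock_sample` dict: its field names mirror the commit, but the prompt and data values are invented for illustration, not taken from the dataset.

```python
import json

# Hypothetical sample mirroring the two new prompt fields; all values invented
mock_sample = {
    "scenario_id": "retail_f1_base_v0",
    "prompt_schema": "Minimize total cost. Instance data is provided at runtime as a dict named `data`.",
    "prompt_full": 'Minimize total cost. Instance data: {"periods": 4, "products": ["A", "B"]}',
    "data": '{"periods": 4, "products": ["A", "B"]}',
}

# Schema-based path: the prompt only describes the data shape;
# actual values are parsed separately and injected at execution time
data = json.loads(mock_sample["data"])
print(data["periods"])  # -> 4

# Data-embedded path: the same JSON already appears verbatim inside the prompt text
print(mock_sample["data"] in mock_sample["prompt_full"])  # -> True
```

Either path ends at the same comparison against `reference_status` and `reference_objective`, so scores from the two formats are directly comparable.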

Files changed (3)
  1. README.md +48 -18
  2. retailopt_190.jsonl +0 -0
  3. retailopt_190.parquet +2 -2
README.md CHANGED
@@ -19,7 +19,9 @@ dataset_info:
   features:
   - name: scenario_id
     dtype: string
-  - name: prompt
+  - name: prompt_schema
+    dtype: string
+  - name: prompt_full
     dtype: string
   - name: data
     dtype: string
@@ -63,6 +65,24 @@ The benchmark spans 8 scenario families and 38 archetypes covering core retail p
 
 English
 
+## Two Prompt Formats
+
+RetailOpt-190 provides **two prompt formats** for different evaluation scenarios:
+
+| Format | Field | Data Location | Use Case |
+|--------|-------|---------------|----------|
+| **Schema-based** | `prompt_schema` | External (runtime) | Large datasets, tests data access patterns |
+| **Data-embedded** | `prompt_full` | In prompt | Direct comparison with other benchmarks |
+
+### Why Two Formats?
+
+Most existing benchmarks (NL4Opt, MAMO, IndustryOR) embed data directly in prompts. RetailOpt-190 supports both approaches to enable:
+
+1. **Fair comparison**: Use `prompt_full` when comparing with other benchmarks in unified evaluation frameworks
+2. **Scalability**: Use `prompt_schema` for production scenarios with large datasets
+
+Both formats provide the **same semantic information**—only the data delivery method differs.
+
 ## Dataset Structure
 
 ### Data Fields
@@ -70,7 +90,8 @@ English
 | Field | Type | Description |
 |-------|------|-------------|
 | `scenario_id` | string | Unique scenario identifier (e.g., `retail_f1_base_v0`) |
-| `prompt` | string | Natural-language problem description with structure cues |
+| `prompt_schema` | string | Schema-based prompt (data loaded at runtime via `data` variable) |
+| `prompt_full` | string | Data-embedded prompt (full JSON data in prompt) |
 | `data` | string | JSON-formatted instance data (parse with `json.loads()`) |
 | `reference_status` | string | Ground truth solver status (`OPTIMAL`, `INFEASIBLE`, etc.) |
 | `reference_objective` | float | Ground truth objective value (null if infeasible) |
@@ -89,21 +110,17 @@ English
 from datasets import load_dataset
 import json
 
-# Load dataset
 dataset = load_dataset("Jacoblian/RetailOpt-190", split="test")
-
-# Access a sample
 sample = dataset[0]
-print(sample['scenario_id'])  # e.g., "retail_f1_base_v0"
-print(sample['prompt'][:200])  # First 200 chars of prompt
 
-# Parse JSON data
-data = json.loads(sample['data'])
-print(data['periods'])  # Number of time periods
-print(data['products'])  # List of products
+print(sample['scenario_id'])  # e.g., "retail_f1_base_v0"
+print(sample['prompt_schema'][:200])  # Schema-based prompt
+print(sample['prompt_full'][:200])  # Data-embedded prompt
 ```
 
-### Benchmarking Your Model
+### Option A: Schema-based Evaluation
+
+Use `prompt_schema` when you need external data loading (matches production scenarios):
 
 ```python
 from datasets import load_dataset
@@ -112,17 +129,30 @@ import json
 dataset = load_dataset("Jacoblian/RetailOpt-190", split="test")
 
 for sample in dataset:
-    # Get prompt and data
-    prompt = sample['prompt']
+    prompt = sample['prompt_schema']
     data = json.loads(sample['data'])
 
-    # Generate code with your LLM
     generated_code = your_llm(prompt)
+    exec(generated_code, {'data': data})  # Data pre-loaded
 
-    # Execute generated code
-    exec(generated_code, {'data': data})
-
-    # Compare with ground truth
+    print(f"Reference: {sample['reference_status']}, {sample['reference_objective']}")
+```
+
+### Option B: Data-embedded Evaluation
+
+Use `prompt_full` for direct text-to-solution evaluation (compatible with other benchmarks):
+
+```python
+from datasets import load_dataset
+
+dataset = load_dataset("Jacoblian/RetailOpt-190", split="test")
+
+for sample in dataset:
+    prompt = sample['prompt_full']  # Data is already in prompt
+
+    generated_code = your_llm(prompt)
+    exec(generated_code)  # Code parses JSON from prompt itself
 
     print(f"Reference: {sample['reference_status']}, {sample['reference_objective']}")
 ```
 
retailopt_190.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
retailopt_190.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3582dd2eadd7de55f6c5667835283f2781864b52effe17212123945180636a9b
-size 231673
+oid sha256:0106c6a440d4c39ade4ef1fcfefad1465fc5583e92b9475f246420f8275504e9
+size 466102
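The parquet entry in the diff above is a Git LFS pointer file, not the parquet data itself. A minimal sketch of reading such a pointer (the `parse_lfs_pointer` helper is illustrative, not part of any tooling in this repo):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", e.g. "size 466102"
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents as shown in the new (post-commit) side of the diff
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:0106c6a440d4c39ade4ef1fcfefad1465fc5583e92b9475f246420f8275504e9
size 466102
"""

info = parse_lfs_pointer(pointer)
print(int(info["size"]))  # -> 466102 (roughly double the pre-commit 231673,
                          #    consistent with embedding full JSON in prompt_full)
```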