---

language:
- en
license: cc-by-4.0
size_categories:
- n<1K
task_categories:
- text-generation
tags:
- optimization
- operations-research
- supply-chain
- retail
- milp
- code-generation
- benchmark
pretty_name: RetailOpt-190
dataset_info:
  features:
    - name: scenario_id
      dtype: string
    - name: prompt_schema
      dtype: string
    - name: prompt_full
      dtype: string
    - name: data
      dtype: string
    - name: reference_status
      dtype: string
    - name: reference_objective
      dtype: float64
  splits:
    - name: test
      num_examples: 190
---


# RetailOpt-190: A Retail Supply Chain Benchmark for Text-to-Optimization

**RetailOpt-190** is a solver-validated benchmark for evaluating semantic reliability in text-to-optimization. It tests whether LLM-based agents can reconstruct the intended optimization structure—not just produce runnable code.

## Dataset Description

- **Repository:** [https://github.com/Jacoblian/RetailOpt-190](https://github.com/Jacoblian/RetailOpt-190)
- **Paper:** ReLoop: Detecting Silent Failures in LLM-Generated Optimization Code via Behavioral Verification
- **Point of Contact:** Junbo Jacob Lian

### Dataset Summary

RetailOpt-190 contains 190 retail supply chain optimization instances designed to test compositional consistency in LLM-generated optimization code. Each instance includes a natural-language problem description, structured JSON data, and ground truth solutions from a validated MILP solver.

The benchmark spans 8 scenario families and 38 archetypes covering core retail planning mechanisms:

| Family | Name | Archetypes | Key Mechanisms |
|--------|------|------------|----------------|
| F1 | Core Operations | 4 | Multi-period inventory, seasonal demand, perishability |
| F2 | Assortment & Substitution | 6 | Product substitution, promotions, ultra-short shelf life |
| F3 | Resource Constraints | 4 | Storage bottleneck, supply bottleneck, volumetric limits |
| F4 | Demand Dynamics | 6 | Demand surge, supply risk, peak failure |
| F5 | Feasibility Stress | 4 | Impossible demand, storage overflow, strict service traps |
| F6 | Discrete Logistics | 4 | Lead time, MOQ, pack size, fixed order cost |
| F7 | Network & Multi-Echelon | 6 | Transshipment, hub-spoke, multi-sourcing |
| F8 | Omni-channel | 4 | Reverse logistics, labor constraints, sustainability |

### Languages

English

## Two Prompt Formats

RetailOpt-190 provides **two prompt formats** for different evaluation scenarios:

| Format | Field | Data Location | Use Case |
|--------|-------|---------------|----------|
| **Schema-based** | `prompt_schema` | External (runtime) | Large datasets, tests data access patterns |
| **Data-embedded** | `prompt_full` | In prompt | Direct comparison with other benchmarks |

### Why Two Formats?

Most existing benchmarks (NL4Opt, MAMO, IndustryOR) embed data directly in prompts. RetailOpt-190 supports both approaches to enable:

1. **Fair comparison**: Use `prompt_full` when comparing with other benchmarks in unified evaluation frameworks
2. **Scalability**: Use `prompt_schema` for production scenarios with large datasets

Both formats provide the **same semantic information**—only the data delivery method differs.

## Dataset Structure

### Data Fields

| Field | Type | Description |
|-------|------|-------------|
| `scenario_id` | string | Unique scenario identifier (e.g., `retail_f1_base_v0`) |
| `prompt_schema` | string | Schema-based prompt (data loaded at runtime via `data` variable) |
| `prompt_full` | string | Data-embedded prompt (full JSON data in prompt) |
| `data` | string | JSON-formatted instance data (parse with `json.loads()`) |
| `reference_status` | string | Ground truth solver status (`OPTIMAL`, `INFEASIBLE`, etc.) |
| `reference_objective` | float | Ground truth objective value (null if infeasible) |

### Data Splits

| Split | Examples |
|-------|----------|
| test | 190 |

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

dataset = load_dataset("Jacoblian/RetailOpt-190", split="test")
sample = dataset[0]

print(sample['scenario_id'])          # e.g., "retail_f1_base_v0"
print(sample['prompt_schema'][:200])  # Schema-based prompt
print(sample['prompt_full'][:200])    # Data-embedded prompt
```

### Option A: Schema-based Evaluation

Use `prompt_schema` when you need external data loading (matches production scenarios):

```python
from datasets import load_dataset
import json

dataset = load_dataset("Jacoblian/RetailOpt-190", split="test")

for sample in dataset:
    prompt = sample['prompt_schema']
    data = json.loads(sample['data'])

    generated_code = your_llm(prompt)
    exec(generated_code, {'data': data})  # Data pre-loaded into the execution namespace

    print(f"Reference: {sample['reference_status']}, {sample['reference_objective']}")
```

### Option B: Data-embedded Evaluation

Use `prompt_full` for direct text-to-solution evaluation (compatible with other benchmarks):

```python
from datasets import load_dataset

dataset = load_dataset("Jacoblian/RetailOpt-190", split="test")

for sample in dataset:
    prompt = sample['prompt_full']  # Data is already embedded in the prompt

    generated_code = your_llm(prompt)
    exec(generated_code)  # Generated code parses the JSON from the prompt itself

    print(f"Reference: {sample['reference_status']}, {sample['reference_objective']}")
```

### Evaluation Metrics

- **Execution Rate**: percentage of instances whose generated code runs without error
- **Accuracy**: percentage of instances matching the ground truth (solver status, plus objective within tolerance)
- **Silent Failure Rate**: percentage of instances whose code executes successfully but returns an incorrect answer
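The three metrics can be aggregated from per-instance outcomes. A minimal sketch, assuming each instance has been reduced to a hypothetical record with `executed` and `correct` flags (these names are illustrative, not part of the dataset):

```python
# Sketch: aggregating Execution Rate, Accuracy, and Silent Failure Rate.
# Each result record is a hypothetical dict: {'executed': bool, 'correct': bool}.

def summarize(results):
    n = len(results)
    executed = sum(r['executed'] for r in results)
    correct = sum(r['correct'] for r in results)
    # Silent failures: code ran to completion but gave the wrong answer.
    silent = sum(r['executed'] and not r['correct'] for r in results)
    return {
        'execution_rate': executed / n,
        'accuracy': correct / n,
        'silent_failure_rate': silent / n,
    }

stats = summarize([
    {'executed': True, 'correct': True},
    {'executed': True, 'correct': False},   # silent failure
    {'executed': False, 'correct': False},  # runtime error
])
```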

### Accuracy Tolerances

| Family | Problem Type | Tolerance |
|--------|--------------|-----------|
| F1-F5, F7-F8 | LP / easy MIP | 0.01% |
| F6 | Hard MIP (MOQ, pack-size) | 10% |
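A per-instance correctness check under these tolerances could be sketched as follows (the function name and the zero-objective guard are illustrative choices, not part of the benchmark):

```python
def matches_reference(pred_status, pred_obj, ref_status, ref_obj, rel_tol):
    """Check a prediction against the ground truth: the solver status must
    match exactly; for solved instances the objective must agree within a
    relative tolerance (0.0001 for LP/easy MIP, 0.10 for F6 hard MIP)."""
    if pred_status != ref_status:
        return False
    if ref_obj is None:  # infeasible instances: status match is sufficient
        return True
    denom = max(abs(ref_obj), 1e-9)  # guard against near-zero objectives
    return abs(pred_obj - ref_obj) / denom <= rel_tol

print(matches_reference("OPTIMAL", 100.004, "OPTIMAL", 100.0, 1e-4))  # True
print(matches_reference("OPTIMAL", 101.0, "OPTIMAL", 100.0, 1e-4))    # False
```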

## Dataset Creation

### Source Data

All instances are synthetically generated from 38 archetype specifications. Each archetype is instantiated with 5 numerical variants (v0-v4) via controlled parameter perturbations.

### Annotations

Ground truth solutions are computed using a validated MILP solver (Gurobi) with the following settings:
- TimeLimit: 60 seconds
- MIPGap: 1%
- Threads: 1
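In `gurobipy`, these settings correspond to the following parameter configuration (a sketch only; it assumes a local Gurobi installation and license, and the model name is illustrative):

```python
# Configuration fragment mirroring the reference-solution solver settings.
# Requires gurobipy and a Gurobi license; not runnable without them.
import gurobipy as gp

model = gp.Model("retailopt_reference")
model.Params.TimeLimit = 60   # seconds
model.Params.MIPGap = 0.01    # 1% relative MIP gap
model.Params.Threads = 1      # single-threaded, deterministic solve
```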

## Additional Information

### Citation

```bibtex
@article{lian2026reloop,
  author  = {Junbo Jacob Lian and Yujun Sun and Huiling Chen and Chaoyu Zhang and Chung-Piaw Teo},
  title   = {ReLoop: Detecting Silent Failures in LLM-Generated Optimization Code via Behavioral Verification},
  journal = {arXiv preprint},
  year    = {2026}
}
```

### License

- **Code**: MIT
- **Data**: CC BY 4.0

### Related Resources

- **ReLoop Framework**: [https://github.com/junbolian/ReLoop](https://github.com/junbolian/ReLoop) - Complete implementation of the ReLoop verification pipeline