---
language:
  - en
license: cc-by-4.0
size_categories:
  - n<1K
task_categories:
  - text-generation
tags:
  - optimization
  - operations-research
  - supply-chain
  - retail
  - milp
  - code-generation
  - benchmark
pretty_name: RetailOpt-190
dataset_info:
  features:
    - name: scenario_id
      dtype: string
    - name: prompt_schema
      dtype: string
    - name: prompt_full
      dtype: string
    - name: data
      dtype: string
    - name: reference_status
      dtype: string
    - name: reference_objective
      dtype: float64
  splits:
    - name: test
      num_examples: 190
---

RetailOpt-190: A Retail Supply Chain Benchmark for Text-to-Optimization

RetailOpt-190 is a solver-validated benchmark for evaluating semantic reliability in text-to-optimization. It tests whether LLM-based agents can reconstruct the intended optimization structure—not just produce runnable code.

Dataset Description

Dataset Summary

RetailOpt-190 contains 190 retail supply chain optimization instances designed to test compositional consistency in LLM-generated optimization code. Each instance includes a natural-language problem description, structured JSON data, and ground truth solutions from a validated MILP solver.

The benchmark spans 8 scenario families and 38 archetypes covering core retail planning mechanisms:

| Family | Name | Archetypes | Key Mechanisms |
|---|---|---|---|
| F1 | Core Operations | 4 | Multi-period inventory, seasonal demand, perishability |
| F2 | Assortment & Substitution | 6 | Product substitution, promotions, ultra-short shelf life |
| F3 | Resource Constraints | 4 | Storage bottleneck, supply bottleneck, volumetric limits |
| F4 | Demand Dynamics | 6 | Demand surge, supply risk, peak failure |
| F5 | Feasibility Stress | 4 | Impossible demand, storage overflow, strict service traps |
| F6 | Discrete Logistics | 4 | Lead time, MOQ, pack size, fixed order cost |
| F7 | Network & Multi-Echelon | 6 | Transshipment, hub-spoke, multi-sourcing |
| F8 | Omni-channel | 4 | Reverse logistics, labor constraints, sustainability |

Languages

English

Two Prompt Formats

RetailOpt-190 provides two prompt formats for different evaluation scenarios:

| Format | Field | Data Location | Use Case |
|---|---|---|---|
| Schema-based | prompt_schema | External (runtime) | Large datasets, tests data access patterns |
| Data-embedded | prompt_full | In prompt | Direct comparison with other benchmarks |

Why Two Formats?

Most existing benchmarks (NL4Opt, MAMO, IndustryOR) embed data directly in prompts. RetailOpt-190 supports both approaches to enable:

  1. Fair comparison: Use prompt_full when comparing with other benchmarks in unified evaluation frameworks
  2. Scalability: Use prompt_schema for production scenarios with large datasets

Both formats provide the same semantic information—only the data delivery method differs.

Dataset Structure

Data Fields

| Field | Type | Description |
|---|---|---|
| scenario_id | string | Unique scenario identifier (e.g., retail_f1_base_v0) |
| prompt_schema | string | Schema-based prompt (data loaded at runtime via data variable) |
| prompt_full | string | Data-embedded prompt (full JSON data in prompt) |
| data | string | JSON-formatted instance data (parse with json.loads()) |
| reference_status | string | Ground truth solver status (OPTIMAL, INFEASIBLE, etc.) |
| reference_objective | float | Ground truth objective value (null if infeasible) |
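
Note that data is stored as a JSON string and reference_objective is empty for infeasible instances; depending on how the null is materialized it may come back as None or NaN. A minimal inspection sketch:

from datasets import load_dataset
import json
import math

dataset = load_dataset("Jacoblian/RetailOpt-190", split="test")
sample = dataset[0]

instance = json.loads(sample['data'])   # 'data' is a JSON string, not a nested dict
print(sorted(instance.keys()))          # top-level keys vary by scenario

obj = sample['reference_objective']
if obj is None or (isinstance(obj, float) and math.isnan(obj)):
    print(sample['scenario_id'], sample['reference_status'], "(no objective)")
else:
    print(sample['scenario_id'], sample['reference_status'], round(obj, 2))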

Data Splits

| Split | Examples |
|---|---|
| test | 190 |

Usage

Loading the Dataset

from datasets import load_dataset
import json

dataset = load_dataset("Jacoblian/RetailOpt-190", split="test")
sample = dataset[0]

print(sample['scenario_id'])        # e.g., "retail_f1_base_v0"
print(sample['prompt_schema'][:200])  # Schema-based prompt
print(sample['prompt_full'][:200])    # Data-embedded prompt

Option A: Schema-based Evaluation

Use prompt_schema when you need external data loading (matches production scenarios):

from datasets import load_dataset
import json

dataset = load_dataset("Jacoblian/RetailOpt-190", split="test")

for sample in dataset:
    prompt = sample['prompt_schema']
    data = json.loads(sample['data'])

    generated_code = your_llm(prompt)
    exec(generated_code, {'data': data})  # Data pre-loaded

    print(f"Reference: {sample['reference_status']}, {sample['reference_objective']}")

Option B: Data-embedded Evaluation

Use prompt_full for direct text-to-solution evaluation (compatible with other benchmarks):

from datasets import load_dataset

dataset = load_dataset("Jacoblian/RetailOpt-190", split="test")

for sample in dataset:
    prompt = sample['prompt_full']  # Data is already in prompt

    generated_code = your_llm(prompt)
    exec(generated_code)  # Code parses JSON from prompt itself

    print(f"Reference: {sample['reference_status']}, {sample['reference_objective']}")

Evaluation Metrics

  • Execution Rate: Percentage of instances whose generated code runs without error
  • Accuracy: Percentage of instances matching the ground truth (status, plus objective within tolerance)
  • Silent Failure Rate: Percentage of instances whose code executes but returns an incorrect answer (a scoring sketch follows the tolerance table below)

Accuracy Tolerances

| Family | Problem Type | Tolerance |
|---|---|---|
| F1-F5, F7-F8 | LP / easy MIP | 0.01% |
| F6 | Hard MIP (MOQ, pack-size) | 10% |
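
Putting the metrics and tolerances together, a minimal scoring sketch could look like the following. It assumes you have already collected one record per instance during a run; the results structure below, and reading the family from the scenario_id prefix, are assumptions rather than part of the dataset.

import math

def tolerance_for(scenario_id):
    # 10% for the hard-MIP family F6, 0.01% otherwise (see the table above).
    # Assumes the family is encoded in the id, e.g. 'retail_f6_moq_v2'.
    return 0.10 if '_f6_' in scenario_id else 0.0001

def is_correct(ref_status, ref_obj, model_status, model_obj, rel_tol):
    if model_status != ref_status:
        return False
    if ref_obj is None or (isinstance(ref_obj, float) and math.isnan(ref_obj)):
        return True                     # infeasible case: matching status is enough
    if model_obj is None:
        return False
    return abs(model_obj - ref_obj) <= rel_tol * max(abs(ref_obj), 1e-9)

def score(results):
    # results: one dict per instance with keys 'scenario_id', 'executed',
    # 'model_status', 'model_objective', 'reference_status', 'reference_objective'.
    n = len(results)
    executed = [r for r in results if r['executed']]
    correct = [r for r in executed
               if is_correct(r['reference_status'], r['reference_objective'],
                             r['model_status'], r['model_objective'],
                             tolerance_for(r['scenario_id']))]
    return {
        'execution_rate': len(executed) / n,                        # ran without error
        'accuracy': len(correct) / n,                               # status + objective within tolerance
        'silent_failure_rate': (len(executed) - len(correct)) / n,  # ran but wrong
    }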

Dataset Creation

Source Data

All instances are synthetically generated from 38 archetype specifications. Each archetype is instantiated with 5 numerical variants (v0-v4) via controlled parameter perturbations.
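
Since identifiers follow the pattern shown earlier (e.g., retail_f1_base_v0), the archetype grouping can be recovered by stripping the trailing _v<k> suffix. This is a convenience sketch that assumes every scenario_id ends in such a suffix; the dataset does not expose the archetype as a separate field.

import re
from collections import defaultdict

from datasets import load_dataset

dataset = load_dataset("Jacoblian/RetailOpt-190", split="test")

by_archetype = defaultdict(list)
for sample in dataset:
    # Assumes ids look like 'retail_f1_base_v0': an archetype name plus a _v<k> suffix.
    archetype = re.sub(r'_v\d+$', '', sample['scenario_id'])
    by_archetype[archetype].append(sample['scenario_id'])

print(len(by_archetype))                                        # expected: 38 archetypes
print({k: len(v) for k, v in list(by_archetype.items())[:3]})   # 5 variants each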

Annotations

Ground truth solutions are computed using a validated MILP solver (Gurobi) with the following settings (a gurobipy configuration sketch follows the list):

  • TimeLimit: 60 seconds
  • MIPGap: 1%
  • Threads: 1
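
These settings map onto gurobipy parameters roughly as follows; this is only a configuration sketch, not the dataset's generation script, and the model-building code for each instance is omitted.

import gurobipy as gp

model = gp.Model("retailopt_reference")
model.Params.TimeLimit = 60     # seconds
model.Params.MIPGap = 0.01      # 1% relative MIP gap
model.Params.Threads = 1        # single-threaded for reproducibility

# ... build variables, constraints, and objective for the instance here ...
model.optimize()
print(model.Status, model.ObjVal if model.SolCount > 0 else None)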

Additional Information

Citation

@article{lian2026reloop,
  author    = {Junbo Jacob Lian and Yujun Sun and Huiling Chen and Chaoyu Zhang and Chung-Piaw Teo},
  title     = {ReLoop: Detecting Silent Failures in LLM-Generated Optimization Code via Behavioral Verification},
  journal   = {arXiv preprint},
  year      = {2026}
}

License

  • Code: MIT
  • Data: CC BY 4.0

Related Resources