[![SPIE Photonics West](https://img.shields.io/badge/SPIE%20Photonics%20West-Presentation-blue)](https://spie.org/photonics-west/presentation/Comprehensive-machine-learning-benchmarking-for-fringe-projection-profilometry-with-photorealistic/13904-1)
[![GitHub](https://img.shields.io/badge/GitHub-Code-green)](https://github.com/AnushLak/fpp-ml-bench)

The first open-source, photorealistic synthetic dataset for single-shot fringe projection profilometry (FPP), generated using [VIRTUS-FPP](https://arxiv.org/abs/2509.22685) in NVIDIA Isaac Sim. This dataset enables standardized benchmarking and systematic comparison of deep learning approaches for single-shot 3D depth reconstruction from fringe patterns.

## Dataset Summary

| Property | Value |
|----------|-------|
| Total fringe images | 15,600 (52 per viewpoint × 6 viewpoints × 50 objects) |
| Depth reconstructions | 300 (6 viewpoints × 50 objects) |
| Objects | 50 |
| Viewpoints per object | 6 (0°, 60°, 120°, 180°, 240°, 300°) |
| Resolution | 960 × 960 pixels |
| Measurement range | 1.5–1.8 m |
| Ground truth method | 18-step phase shifting + Gray-code unwrapping |
| Train / Val / Test split | 240 / 30 / 30 samples (object-level, 40/5/5 objects) |

## Repository Layout

The dataset is organized into two top-level directories serving different purposes:

```
fpp-ml-bench/
├── training_datasets/       # Pre-split, ready-to-train data (plug and play)
└── fpp_synthetic_dataset/   # Full raw dataset per object (complete scans + all metadata)
```

Use `training_datasets/` to train models directly; `fpp_synthetic_dataset/` is the complete raw dataset, with all phase, mesh, and reconstruction data per object.

---

## training_datasets/

Pre-split into train/val/test at the object level. It contains six dataset variants covering all combinations of normalization strategy and background handling, plus the normalization parameter files needed for individual normalization.

```
training_datasets/
├── training_data_depth_raw/
│   ├── train/
│   │   ├── fringe/          # 240 fringe images (full background)
│   │   └── depth/           # 240 raw depth .mat files
│   ├── val/
│   │   ├── fringe/          # 30 fringe images
│   │   └── depth/           # 30 raw depth .mat files
│   └── test/
│       ├── fringe/          # 30 fringe images
│       └── depth/           # 30 raw depth .mat files
├── training_data_depth_global_normalized/       # same structure, global normalized depth
├── training_data_depth_individual_normalized/   # same structure, [0, 1] normalized depth
├── training_data_bgremoved_depth_raw/           # background pixels zeroed in fringe input
├── training_data_bgremoved_depth_global_normalized/
├── training_data_bgremoved_depth_individual_normalized/
└── info_depth_params/       # per-sample min/max for individual normalization
    ├── train/depth/
    ├── val/depth/
    └── test/depth/
```

### Loading training data

```python
import scipy.io as sio
from PIL import Image
import numpy as np

# --- Pick a dataset variant ---
# Full background:
#   training_data_depth_raw
#   training_data_depth_global_normalized
#   training_data_depth_individual_normalized   <-- recommended
# Background removed (for ablation study):
#   training_data_bgremoved_depth_raw
#   training_data_bgremoved_depth_global_normalized
#   training_data_bgremoved_depth_individual_normalized
dataset_dir = "training_datasets/training_data_depth_individual_normalized"

# Load a fringe image and scale to [0, 1]
fringe = np.array(
    Image.open(f"{dataset_dir}/train/fringe/banana-a0.png").convert("L"),
    dtype=np.float32,
) / 255.0

# Load the corresponding depth map
depth = sio.loadmat(f"{dataset_dir}/train/depth/banana-a0.mat")["depthMap"]
```

### Denormalizing individual normalized depth

When using `training_data_depth_individual_normalized`, load the stored per-sample min/max to recover metric depth from model predictions:

```python
# Load normalization parameters (mirror the split and filename)
params = sio.loadmat(
    "training_datasets/info_depth_params/train/depth/banana-a0.mat"
)
depth_min = float(params["depth_min"])
depth_max = float(params["depth_max"])

# Recover depth in mm from a [0, 1] model prediction
depth_mm = prediction * (depth_max - depth_min) + depth_min
```
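Since individual normalization is a per-sample min-max affine map, the denormalization above is its exact inverse. A self-contained numpy sketch with synthetic values (`normalize_depth` and `denormalize_depth` are illustrative helpers, not dataset code) verifies the round trip:

```python
import numpy as np

def normalize_depth(depth_mm):
    """Per-sample min-max normalization of a metric depth map to [0, 1]."""
    d_min, d_max = float(depth_mm.min()), float(depth_mm.max())
    return (depth_mm - d_min) / (d_max - d_min), d_min, d_max

def denormalize_depth(pred, d_min, d_max):
    """Map a [0, 1] prediction back to millimeters."""
    return pred * (d_max - d_min) + d_min

# Synthetic stand-in for a depth map in the dataset's 1.5-1.8 m working range
rng = np.random.default_rng(0)
depth_mm = rng.uniform(1500.0, 1800.0, size=(64, 64))

norm, d_min, d_max = normalize_depth(depth_mm)
assert norm.min() == 0.0 and norm.max() == 1.0
assert np.allclose(denormalize_depth(norm, d_min, d_max), depth_mm)
```

The same inverse applies to network outputs at evaluation time, using the `depth_min`/`depth_max` stored for the sample being evaluated.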

### Dataset variants

The six `training_data_*` folders cover the full experimental matrix from the paper:

| Folder | Fringe input | Depth target | Object MAE (mm) |
|--------|-------------|--------------|-----------------|
| `training_data_depth_raw` | Full | Raw (mm) | 148.07 |
| `training_data_depth_global_normalized` | Full | Meters | 82.49 |
| `training_data_depth_individual_normalized` | Full | [0, 1] | **16.20** |
| `training_data_bgremoved_depth_raw` | BG zeroed | Raw (mm) | 437.40 |
| `training_data_bgremoved_depth_global_normalized` | BG zeroed | Meters | 598.40 |
| `training_data_bgremoved_depth_individual_normalized` | BG zeroed | [0, 1] | 45.01 |

Individual normalization decouples shape from scale, a 9.1× improvement in Object MAE over raw depth; background removal degrades performance across all normalizations. See the [paper](https://arxiv.org/abs/2601.08900) for full analysis.
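To iterate over a split, fringe and depth files can be paired by their shared sample stem. A minimal stdlib-only sketch (`list_pairs` is a hypothetical helper, assuming the directory layout shown above):

```python
from pathlib import Path

def list_pairs(dataset_dir: str, split: str) -> list[tuple[Path, Path]]:
    """Pair each fringe .png with its depth .mat by shared sample stem."""
    root = Path(dataset_dir) / split
    pairs = []
    for png in sorted((root / "fringe").glob("*.png")):
        mat = root / "depth" / f"{png.stem}.mat"
        if mat.exists():  # skip any sample missing its depth file
            pairs.append((png, mat))
    return pairs
```

On the real layout, `list_pairs("training_datasets/training_data_depth_individual_normalized", "train")` would be expected to yield 240 pairs.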

---

## fpp_synthetic_dataset/

The complete raw dataset. Each of the 50 objects has its full 6-viewpoint scan with all intermediate and final outputs from VIRTUS-FPP. All depth representations live in a single flat `depth_information/` folder.

```
fpp_synthetic_dataset/
├── depth_information/               # All depth data, flat (2100 files)
│   ├── banana-a0_raw_depth.mat
│   ├── banana-a0_raw_depth.png
│   ├── banana-a0_global_normalized_depth.mat
│   ├── banana-a0_global_normalized_depth.png
│   ├── banana-a0_individual_normalized_depth.mat
│   ├── banana-a0_individual_normalized_depth.png
│   ├── banana-a0_individual_normalized_depth_params.mat
│   ├── banana-a60_raw_depth.mat     # ... next viewpoint
│   └── ...                          # 7 files × 300 samples
├── banana/                          # object folder (50 total)
│   ├── A0/                          # viewpoint (6 per object)
│   │   ├── A_0.png                  # fringe images (52 per viewpoint)
│   │   ├── A_1.png
│   │   ├── ...
│   │   ├── A_51.png
│   │   ├── banana-a0.ply            # ground truth mesh
│   │   ├── wrapped_phase.mat        # wrapped phase map
│   │   ├── unwrapped_phase.mat      # unwrapped phase map
│   │   ├── reconstruction.fig       # MATLAB figure
│   │   ├── reconstruction.png       # rendered reconstruction
│   │   ├── mask.csv                 # object mask
│   │   ├── x.csv                    # point cloud X coordinates
│   │   ├── y.csv                    # point cloud Y coordinates
│   │   └── z.csv                    # point cloud Z coordinates
│   ├── A60/                         # next viewpoint, same structure
│   ├── A120/
│   ├── A180/
│   ├── A240/
│   └── A300/
├── battery/                         # next object, same structure
└── ...                              # 50 objects total
```

### Files per object-viewpoint

| File | Format | Description |
|------|--------|-------------|
| `A_0.png` – `A_51.png` | PNG (960×960, grayscale) | 52-frame fringe pattern sequence; `A_0.png` is the model input in the benchmarking study |
| `<object>-<angle>.ply` | PLY | Ground truth 3D mesh |
| `wrapped_phase.mat` | MAT | Wrapped phase map from the phase-shifting algorithm |
| `unwrapped_phase.mat` | MAT | Temporally unwrapped phase (Gray-code) |
| `mask.csv` | CSV | Binary object mask |
| `x.csv`, `y.csv`, `z.csv` | CSV | Point cloud coordinates (mm) |
| `reconstruction.png` | PNG | Rendered depth reconstruction |
| `reconstruction.fig` | FIG | MATLAB figure of the reconstruction |
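The per-pixel coordinate grids and mask can be combined into an N × 3 point cloud. A minimal numpy sketch with small synthetic stand-ins for the CSV contents (`to_point_cloud` is illustrative, not dataset code):

```python
import numpy as np

def to_point_cloud(x, y, z, mask):
    """Stack per-pixel coordinate grids into an (N, 3) cloud of masked points."""
    keep = mask.astype(bool)
    return np.stack([x[keep], y[keep], z[keep]], axis=1)

# Synthetic stand-ins shaped like the 960x960 grids (tiny here for illustration)
h, w = 4, 4
x, y = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
z = np.full((h, w), 1650.0)            # flat plane at 1.65 m, in mm
mask = np.zeros((h, w))
mask[1:3, 1:3] = 1                      # keep a 2x2 patch

cloud = to_point_cloud(x, y, z, mask)
print(cloud.shape)  # (4, 3)
```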

### Files in depth_information/

Seven files per object-viewpoint, named `<object>-<angle>_<type>`:

| Suffix | Format | Description |
|--------|--------|-------------|
| `_raw_depth.mat` | MAT | Depth in millimeters |
| `_raw_depth.png` | PNG | Visualization of raw depth |
| `_global_normalized_depth.mat` | MAT | Depth in meters (raw / 1000) |
| `_global_normalized_depth.png` | PNG | Visualization of global normalized depth |
| `_individual_normalized_depth.mat` | MAT | Depth normalized to [0, 1] per sample |
| `_individual_normalized_depth.png` | PNG | Visualization of individual normalized depth |
| `_individual_normalized_depth_params.mat` | MAT | `depth_min` and `depth_max` for denormalization |
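The naming convention can be captured in a tiny helper (hypothetical, but following the pattern above: object name, lowercase angle tag, then suffix):

```python
def depth_info_name(obj: str, angle_deg: int, suffix: str) -> str:
    """Build a depth_information filename from object name and viewpoint angle."""
    return f"{obj}-a{angle_deg}_{suffix}"

print(depth_info_name("banana", 0, "raw_depth.mat"))
# banana-a0_raw_depth.mat
```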

### Loading from fpp_synthetic_dataset

```python
import scipy.io as sio
from PIL import Image
import numpy as np
import pandas as pd
import trimesh  # or open3d, for the .ply mesh

object_name = "banana"
viewpoint = "A0"
angle_tag = "a0"  # lowercase, used in depth_information filenames

base = "fpp_synthetic_dataset"
view_dir = f"{base}/{object_name}/{viewpoint}"

# --- Full fringe sequence (52 frames) ---
fringes = [
    np.array(Image.open(f"{view_dir}/A_{i}.png").convert("L"))
    for i in range(52)
]

# --- Ground truth mesh ---
mesh = trimesh.load(f"{view_dir}/{object_name}-{angle_tag}.ply")

# --- Phase maps ---
wrapped = sio.loadmat(f"{view_dir}/wrapped_phase.mat")
unwrapped = sio.loadmat(f"{view_dir}/unwrapped_phase.mat")

# --- Point cloud ---
x = pd.read_csv(f"{view_dir}/x.csv").values
y = pd.read_csv(f"{view_dir}/y.csv").values
z = pd.read_csv(f"{view_dir}/z.csv").values

# --- Depth maps (from depth_information) ---
raw_depth = sio.loadmat(
    f"{base}/depth_information/{object_name}-{angle_tag}_raw_depth.mat"
)["depthMap"]

ind_depth = sio.loadmat(
    f"{base}/depth_information/{object_name}-{angle_tag}_individual_normalized_depth.mat"
)["depthMap"]

params = sio.loadmat(
    f"{base}/depth_information/{object_name}-{angle_tag}_individual_normalized_depth_params.mat"
)
depth_min, depth_max = float(params["depth_min"]), float(params["depth_max"])
```
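For context on the phase files: temporal unwrapping recovers the absolute phase from the wrapped phase by adding the integer fringe order k supplied by the Gray-code patterns, Φ = φ + 2πk. A minimal numpy illustration of that defining relation with synthetic values (not the dataset's actual Gray-code implementation):

```python
import numpy as np

def unwrap_with_order(wrapped: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Absolute phase from wrapped phase plus integer fringe order."""
    return wrapped + 2.0 * np.pi * k

# Synthetic absolute phase ramp spanning several fringe periods
phi = np.linspace(0.0, 10.0 * np.pi, 100)
wrapped = np.angle(np.exp(1j * phi))           # wrap into (-pi, pi]
k = np.round((phi - wrapped) / (2.0 * np.pi))  # fringe order (known here)
assert np.allclose(unwrap_with_order(wrapped, k), phi)
```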

---

## Data Acquisition

### Virtual FPP System

All data were generated using [VIRTUS-FPP](https://arxiv.org/abs/2509.22685), a physics-based virtual FPP system in NVIDIA Isaac Sim. The pipeline integrates OptiX ray tracing for photorealistic rendering, PhysX for physics, and USD for 3D scene composition. The projector is modeled via the inverse camera model, enabling accurate fringe projection at arbitrary distances without hardware constraints.

| Parameter | Value |
|-----------|-------|

### Objects

The 50 objects were sourced from the [YCB Object Dataset](https://ycb-objects.github.io/) and [NVIDIA Physical AI Warehouse](https://developer.nvidia.com/physical-ai). The collection spans cylindrical containers, rectangular boxes, complex shapes (power drills, sprayguns), and industrial components, providing diversity in surface characteristics and morphological complexity. All objects use consistent matte material properties: roughness = 0.95, specular = 0.15, AO-to-diffuse = 0.95.

### Multi-View Acquisition

Each object was rotated about the vertical axis in 60° increments, yielding 6 viewpoints (A0, A60, A120, A180, A240, A300) with approximately 50% overlap between adjacent views.
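Each step applies the standard rotation about the vertical (z) axis, R_z(θᵢ) with θᵢ = i · 60° for i = 0, …, 5. A small numpy sketch:

```python
import numpy as np

def rot_z(theta_deg: float) -> np.ndarray:
    """Rotation matrix about the vertical (z) axis."""
    t = np.deg2rad(theta_deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Six 60-degree steps return an object to its starting pose
R = np.linalg.multi_dot([rot_z(60.0)] * 6)
assert np.allclose(R, np.eye(3))
```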
 
 

### Ground Truth Generation

Ground truth depth maps were generated using an 18-step phase-shifting sequence with Gray-code temporal unwrapping.

## Train/Val/Test Split

The split is performed at the **object level**: no object appears in more than one split. This forces models to generalize to entirely unseen geometries rather than memorizing shapes seen during training.

| Split | Objects | Samples (objects × 6 viewpoints) |
|-------|---------|----------------------------------|
| Train | 40 | 240 |
| Val | 5 | 30 |
| Test | 5 | 30 |
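An object-level split like this can be reproduced deterministically. A sketch with a hypothetical helper (fixed seed, 40/5/5 partition; not the dataset's actual assignment):

```python
import random

def object_level_split(objects, n_train=40, n_val=5, seed=0):
    """Shuffle object names once, then partition so no object crosses splits."""
    objs = sorted(objects)
    random.Random(seed).shuffle(objs)
    return {
        "train": objs[:n_train],
        "val": objs[n_train:n_train + n_val],
        "test": objs[n_train + n_val:],
    }

splits = object_level_split([f"obj{i:02d}" for i in range(50)])
assert all(len(set(a) & set(b)) == 0
           for a in splits.values() for b in splits.values() if a is not b)
```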

## Intended Uses

- Benchmarking deep learning architectures for single-shot FPP depth estimation
- Evaluating data representation and loss function strategies for fringe-to-depth learning
- Research into phase unwrapping, depth refinement, and multi-view fusion
- Studying fundamental limitations of single-shot depth recovery from structured light

## Limitations

- **Synthetic only**: All data are rendered in simulation; the domain gap to real-world FPP systems has not been characterized. See the [paper](https://arxiv.org/abs/2601.08900) for discussion of sim-to-real transfer.
- **Material properties**: All objects use identical matte materials. Specular, translucent, or highly reflective surfaces are not represented.
- **Single-shot input**: Only `A_0.png` (the first fringe image) is used as model input in the benchmarking study. The remaining 51 patterns are available for alternative formulations (e.g., multi-frame input).
- **Fixed measurement range**: All objects are scanned at 1.5–1.8 m. Performance at other distances is unknown.

## Citation

If you use this dataset, please cite:

```bibtex
@article{lakshman2026comprehensive,
  ...
}
```
## Contact

For questions or issues, please open an issue on [GitHub](https://github.com/AnushLak/FPP-ML-Benchmarking) or contact [anushlak@iastate.edu](mailto:anushlak@iastate.edu) or [aharoon@iastate.edu](mailto:aharoon@iastate.edu).