---
license: cc-by-4.0
task_categories:
- text-generation
language:
- en
tags:
- CAD
- STEP
- text-to-CAD
- ABC-dataset
- rendering
pretty_name: STEP-LLM Dataset
size_categories:
- 10K<n<100K
---
# STEP-LLM Dataset
Rendered multi-view images of ABC Dataset STEP files, used in our DATE 2026 paper:
**"STEP-LLM: Generating CAD STEP Models from Natural Language with Large Language Models"**
[arXiv:2601.12641](https://arxiv.org/abs/2601.12641) · [GitHub](https://github.com/JasonShiii/STEP-LLM)
## Dataset Contents
Rendered JPEG images of CAD models from the [ABC Dataset](https://archive.nyu.edu/handle/2451/43778) (NYU), organized by entity count. These images were used with GPT-4o to generate natural language captions for training STEP-LLM.
| Folder | Entity range | # Models | Description |
|---|---|---|---|
| `step_under500_image/` | 0–500 entities | ~20k | Used in DATE 2026 paper |
| `step_500-1000_image/` | 500–1000 entities | ~17k | Ongoing journal extension |
Each folder contains ten zip files, one per ABC dataset chunk (`abc_0001`–`abc_0010`):
- `step_under500_image/abc_000N_step_v00_under500_image.zip` (~25 MB each)
- `step_500-1000_image/abc_000N_step_v00_500-1000.zip` (~75–85 MB each)
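The per-chunk paths above follow a fixed naming pattern, so they can be built programmatically. A minimal sketch (the helper name `chunk_path` is our own, not part of the repo):

```python
def chunk_path(n: int, bucket: str = "under500") -> str:
    """Repo-relative path of chunk n (1..10).

    `bucket` is "under500" or "500-1000", matching the two folders above.
    Note the under-500 filenames carry an `_image` suffix; the 500-1000 ones do not.
    """
    if bucket == "under500":
        return f"step_under500_image/abc_{n:04d}_step_v00_under500_image.zip"
    return f"step_500-1000_image/abc_{n:04d}_step_v00_500-1000.zip"
```

These paths can be passed directly as the `filename` argument of `hf_hub_download` in the snippet below.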
## Download
```python
from huggingface_hub import hf_hub_download
# Download a single chunk (under-500 entities, chunk 1)
hf_hub_download(
    repo_id="JasonShiii/STEP-LLM-dataset",
    filename="step_under500_image/abc_0001_step_v00_under500_image.zip",
    repo_type="dataset",
    local_dir="./data",
)
```
Or download all chunks:
```bash
huggingface-cli download JasonShiii/STEP-LLM-dataset --repo-type dataset --local-dir ./data
```
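The downloaded chunks are zip archives that still need unpacking. A minimal sketch for extracting them in place and collecting the JPEG paths (assuming the images are stored as plain `.jpg` files inside each archive):

```python
import zipfile
from pathlib import Path

def extract_chunks(data_dir: str) -> list[Path]:
    """Extract every chunk zip under data_dir in place; return sorted JPEG paths."""
    root = Path(data_dir)
    for zip_path in sorted(root.rglob("*.zip")):
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(zip_path.parent)
    return sorted(root.rglob("*.jpg"))
```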
## Usage
These images are used in `data_preparation/captioning.ipynb` (see [GitHub repo](https://github.com/JasonShiii/STEP-LLM)) to generate GPT-4o captions, which become the natural language training inputs for STEP-LLM.
The released captions are available directly in the GitHub repo (`cad_captions_0-500.csv`, `cad_captions_500-1000.csv`) — you do not need to download these images to train or run inference with STEP-LLM.
## License
Images are derived from the [ABC Dataset](https://archive.nyu.edu/handle/2451/43778) and released under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
## Citation
```bibtex
@article{shi2026step,
  title={STEP-LLM: Generating CAD STEP Models from Natural Language with Large Language Models},
  author={Shi, Xiangyu and Ding, Junyang and Zhao, Xu and Zhan, Sinong and Mohapatra, Payal
          and Quispe, Daniel and Welbeck, Kojo and Cao, Jian and Chen, Wei and Guo, Ping and others},
  journal={arXiv preprint arXiv:2601.12641},
  year={2026}
}
```