Update dataset card for SpatialEdit-500K
Hi, I'm Niels from the Hugging Face community science team.
This PR improves the dataset card for SpatialEdit-500K by:
- Adding `image-to-image` to the task categories in the YAML metadata.
- Linking the paper to its Hugging Face papers page.
- Adding a link to the GitHub repository.
- Including the citation information.
This helps make the dataset more discoverable on the Hub.
README.md
CHANGED

@@ -1,24 +1,35 @@
 ---
 license: apache-2.0
+task_categories:
+- image-to-image
 ---
 
 # SpatialEdit-500K
 
-SpatialEdit-500K is a synthetic training dataset for fine-grained image spatial editing. It is built for learning geometry-aware edits such as object moving, object rotation, camera viewpoint change.
+SpatialEdit-500K is a synthetic training dataset for fine-grained image spatial editing. It is built for learning geometry-aware edits such as object moving, object rotation, and camera viewpoint change.
 
-The dataset
+The dataset was introduced in the paper [SpatialEdit: Benchmarking Fine-Grained Image Spatial Editing](https://huggingface.co/papers/2604.04911). It is generated with a controllable rendering pipeline to provide structured spatial transformations at scale.
 
-##
-
-- Large-scale synthetic data for spatially grounded image editing
-- Covers both object-centric and camera-centric transformations
-- Used to train the SpatialEdit baseline model
+## Project Resources
 
-
+- **GitHub Repository:** [EasonXiao-888/SpatialEdit](https://github.com/EasonXiao-888/SpatialEdit)
+- **Model:** [SpatialEdit-16B](https://huggingface.co/EasonXiao-888/SpatialEdit-16B)
+- **Benchmark:** [SpatialEdit-Bench](https://huggingface.co/datasets/EasonXiao-888/SpatialEdit-Bench)
 
-
-- GitHub: https://github.com/EasonXiao-888/SpatialEdit
-- Model: https://huggingface.co/EasonXiao-888/SpatialEdit-16B
-- Benchmark: https://huggingface.co/datasets/EasonXiao-888/SpatialEdit-Bench
+## Highlights
 
-
+- **Large-scale synthetic data:** 500,000 samples for spatially grounded image editing.
+- **Comprehensive transformations:** Covers both object-centric (moving, rotation) and camera-centric transformations.
+- **High fidelity:** Generated with a controllable Blender pipeline rendering objects across diverse backgrounds with systematic camera trajectories.
+- **Precise labels:** Provides precise ground-truth transformations for spatial manipulation tasks.
+
+## Citation
+
+```bibtex
+@article{xiao2026spatialedit,
+  title   = {SpatialEdit: Benchmarking Fine-Grained Image Spatial Editing},
+  author  = {Xiao, Yicheng and Zhang, Wenhu and Song, Lin and Chen, Yukang and Li, Wenbo and Jiang, Nan and Ren, Tianhe and Lin, Haokun and Huang, Wei and Huang, Haoyang and Li, Xiu and Duan, Nan and Qi, Xiaojuan},
+  journal = {arXiv preprint arXiv:2604.04911},
+  year    = {2026}
+}
+```
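As a quick sanity check on the metadata change, the card's YAML front matter can be parsed and inspected before pushing. This is a minimal sketch: the card text is inlined here to mirror the diff, and the `front_matter` helper is illustrative, not part of the repository.

```python
# Sketch: confirm the dataset card's front matter declares the new task
# category. The CARD string mirrors the updated README.md from this PR.
CARD = """---
license: apache-2.0
task_categories:
- image-to-image
---

# SpatialEdit-500K
"""

def front_matter(text: str) -> str:
    """Return the YAML block between the leading '---' fences, or ''."""
    parts = text.split("---")
    if not text.startswith("---") or len(parts) < 3:
        return ""
    return parts[1].strip()

meta = front_matter(CARD)
assert "task_categories:" in meta
assert "- image-to-image" in meta
```

The Hub reads exactly this block to populate the task-category filters, so a malformed or missing front matter is what this kind of check would catch.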