Add paper and GitHub links, update citation and metadata (#1)
Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>
---
license: cc-by-nc-4.0
size_categories:
- 10K<n<100K
task_categories:
- image-segmentation
modalities:
- image
tags:
- synthetic
- robotics
---

# TransFrag27K: Transparent Fragment Dataset

[**Paper**](https://huggingface.co/papers/2603.20290) | [**Code**](https://github.com/Keithllin/Transparent-Fragments-Contour-Estimation)

**Authors:** Qihao Lin, Borui Chen, Yuping Zhou, Jianing Wu, Yulan Guo, Weishi Zheng, Chongkun Xia.

## Dataset Summary
TransFrag27K is the first large-scale transparent fragment dataset, containing **27,000 images and masks** at a resolution of 640×480. The dataset covers fragments of common everyday glassware and incorporates **more than 150 background textures** and **100 HDRI environment lighting maps**.

<p align="center">
  <img src="https://huggingface.co/datasets/chenbr7/TransFrag27K/resolve/main/demonstration.png" alt="Demonstration" width="1000"/>
</p>

Transparent objects are a special category: their refractive and transmissive material properties make their visual features highly sensitive to environmental lighting and background. In real-world scenarios, collecting data on transparent objects under diverse backgrounds and lighting conditions is challenging, and annotations are error-prone because the objects are difficult to recognize.

To address this, the authors designed an **automated dataset generation pipeline in Blender**:
- Objects are randomly fractured using the Cell Fracture add-on.
- Parametric scripts batch-adjust lighting, backgrounds, and camera poses.
- Rendering runs automatically and outputs paired RGB images and binary masks.
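
The batch-adjustment step can be sketched as a seeded parameter sampler that assigns each rendered frame a lighting environment, a background texture, and a camera pose. This is only an illustration of the idea; the asset counts match the dataset card, but the parameter names and camera ranges are assumptions, not the authors' actual script:

```python
import random

# Asset pool sizes from the dataset card; indices stand in for real assets.
N_BACKGROUNDS = 150
N_HDRIS = 100

def sample_scene_config(rng: random.Random) -> dict:
    """Sample one randomized scene: lighting, background, and camera pose.

    The camera ranges below are hypothetical, chosen only for illustration.
    """
    return {
        "hdri": rng.randrange(N_HDRIS),              # environment lighting index
        "background": rng.randrange(N_BACKGROUNDS),  # background texture index
        "camera": {
            "distance_m": rng.uniform(0.4, 1.2),
            "elevation_deg": rng.uniform(20.0, 80.0),
            "azimuth_deg": rng.uniform(0.0, 360.0),
        },
    }

def build_batch(n_images: int, seed: int = 0) -> list[dict]:
    """Generate a reproducible batch of per-image scene configurations."""
    rng = random.Random(seed)
    return [sample_scene_config(rng) for _ in range(n_images)]

configs = build_batch(4, seed=42)
```

Seeding the sampler makes a batch reproducible, so a rendered image can always be traced back to the exact lighting/background/camera combination that produced it.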

The Blender script used to generate TransFrag27K also supports batch dataset generation for any scene in which objects are placed on a horizontal plane. For implementation details, please refer to the [official GitHub repository](https://github.com/Keithllin/Transparent-Fragments-Contour-Estimation).

---

## Supported Tasks
- Semantic segmentation of various transparent fragments.
- Contour estimation for autonomous reassembly.
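
Segmentation quality on the binary masks is typically scored with intersection-over-union. A minimal pure-Python version, independent of any framework (the dataset card does not prescribe a metric implementation):

```python
def binary_iou(pred, gt):
    """Intersection-over-union for two equal-length binary (0/1) masks,
    given as flat sequences. Returns 1.0 when both masks are empty."""
    inter = sum(1 for p, g in zip(pred, gt) if p and g)
    union = sum(1 for p, g in zip(pred, gt) if p or g)
    return inter / union if union else 1.0

# Example: 2 overlapping pixels out of 4 in the union -> IoU = 0.5
pred = [1, 1, 1, 0, 0]
gt   = [0, 1, 1, 1, 0]
score = binary_iou(pred, gt)
```

In practice the same computation is run on flattened 640×480 mask arrays rather than toy lists.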

---

## Dataset Structure
To facilitate subsequent custom processing, we organize each object’s data in the released dataset as follows:

```
├─TransFrag27K
│ ├─Planar1
│ │ ├─anno_mask
│ │ └─rgb
│ ├─Planar2
│ │ ├─anno_mask
│ │ └─rgb
│ ├─Curved1
│ │ ├─anno_mask
│ │ └─rgb
│ ├─Curved2
│ │ ├─anno_mask
│ │ └─rgb
│ ├─...
│ │ └─rgb
```
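
Given the `anno_mask`/`rgb` layout above, image/mask pairs can be matched by shared filename stem. A small sketch under the assumption that files are PNGs with identical stems in both subfolders (the actual filename scheme is not stated on this card):

```python
from pathlib import Path

def list_pairs(object_dir):
    """Pair each RGB image with its annotation mask by filename stem.

    Assumes the anno_mask/rgb layout shown above; the *.png naming
    convention is an assumption for illustration.
    """
    object_dir = Path(object_dir)
    masks = {p.stem: p for p in (object_dir / "anno_mask").glob("*.png")}
    pairs = []
    for rgb in sorted((object_dir / "rgb").glob("*.png")):
        if rgb.stem in masks:  # keep only images that have an annotation
            pairs.append((rgb, masks[rgb.stem]))
    return pairs
```

Iterating the top-level shape-class folders (`Planar1`, `Curved1`, …) with the same helper yields the full image/mask index for the dataset.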

We mainly organize the dataset according to the **shape classes** of transparent fragments:
- **Planar**
  Mainly includes fragments from flat regions such as dish bottoms and glass bases.

## Citation
If you find this dataset or the associated work useful for your research, please cite the paper:

```bibtex
@misc{lin2026transparentfragmentscontourestimation,
      title={Transparent Fragments Contour Estimation via Visual-Tactile Fusion for Autonomous Reassembly},
      author={Qihao Lin and Borui Chen and Yuping Zhou and Jianing Wu and Yulan Guo and Weishi Zheng and Chongkun Xia},
      year={2026},
      eprint={2603.20290},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2603.20290},
}
```
|