# TransFrag27K: Transparent Fragment Dataset
## Dataset Summary

TransFrag27K is the first large-scale transparent fragment dataset, containing **27,000 images and masks** at a resolution of 640×480. The dataset covers fragments of common everyday glassware and incorporates **more than 150 background textures** and **100 HDRI lighting environments**.

Transparent objects are a special category: their refractive and transmissive material properties make their visual features highly sensitive to environmental lighting and background. In real-world scenarios, collecting data of transparent objects under diverse backgrounds and lighting conditions is challenging, and annotations are prone to errors because the objects are hard to recognize.

To address this, we designed an **automated dataset generation pipeline in Blender**:

- Objects are randomly fractured using the Cell Fracture add-on.
- Parametric scripts batch-adjust lighting, backgrounds, and camera poses.
- Rendering is performed automatically to output paired RGB images and binary masks.
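
The randomization step above can be sketched as follows. This is a hypothetical illustration only: the function name, the texture/HDRI naming scheme, and the camera-pose ranges are assumptions, not the actual script (which lives in the linked repository); only the asset counts (150+ backgrounds, 100 HDRIs) come from the dataset card.

```python
import random

def sample_scene_params(n_backgrounds=150, n_hdris=100, seed=None):
    """Draw one randomized scene configuration for a render.

    Illustrative stand-in for the parametric batch script: each sample
    picks a background texture, an HDRI environment light, and a camera
    pose before rendering the RGB image / binary mask pair.
    """
    rng = random.Random(seed)
    return {
        "background": f"texture_{rng.randrange(n_backgrounds):03d}",
        "hdri": f"env_{rng.randrange(n_hdris):03d}",
        # camera pose: distance (m), azimuth and elevation (degrees)
        "camera": {
            "distance": rng.uniform(0.4, 1.2),
            "azimuth": rng.uniform(0.0, 360.0),
            "elevation": rng.uniform(15.0, 75.0),
        },
    }

params = sample_scene_params(seed=0)
print(params["background"], params["hdri"])
```

Seeding makes each configuration reproducible, so a batch of renders can be regenerated exactly.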

The Blender script we used to generate TransFrag27K also supports batch dataset generation for any scene in which objects are placed on a horizontal plane. For implementation details, please refer to:
[GitHub Repository](https://github.com/Keithllin/Transparent-Fragments-Contour-Estimation)
---
## Related Paper
**Transparent Fragments Contour Estimation via Visual-Tactile Fusion for Autonomous Reassembly**
*(Lin et al., 2025)*
---

We mainly organize the dataset according to the **shape classes** of transparent fragments.

---

## Usage

```python
from datasets import load_dataset

# "your-username" is a placeholder; replace it with the dataset's
# actual Hugging Face namespace.
dataset = load_dataset("your-username/TransFrag27K")
print(dataset["train"][0])  # inspect the first training sample
```

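Each sample pairs a 640×480 RGB image with a binary fragment mask. A minimal sketch of consuming such a pair — using synthetic NumPy arrays instead of the real loader, so the shapes and dtypes here are assumptions taken from the dataset card, not a fixed API:

```python
import numpy as np

# Synthetic stand-ins for one sample: a 640x480 RGB image and its
# binary fragment mask (how you decode the real files is up to you).
rgb = np.zeros((480, 640, 3), dtype=np.uint8)
mask = np.zeros((480, 640), dtype=np.uint8)
mask[100:200, 150:350] = 1  # pretend fragment region

coverage = mask.mean()  # fraction of pixels covered by the fragment
masked_rgb = rgb * mask[..., None]  # zero out background pixels

print(f"coverage = {coverage:.4f}")  # prints "coverage = 0.0651"
```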
## Citation

If you use this dataset, please cite the following paper:
```
@article{lin2025transparent,
  title={Transparent Fragments Contour Estimation via Visual-Tactile Fusion for Autonomous Reassembly},
  author=
  year=
}
```