---
license: apache-2.0
tags:
- vision
- point cloud
- NYU Depth V2
- 3d
- deep learning
- computer vision
- RAY-AUTRA-TECHNOLOGY
language: en
pretty_name: RAY-tech img_pointV2
datasets:
- jagennath-hari/nyuv2
---
 
 
![CLOUD_POINTS _dAtAsEt_ (1)](https://cdn-uploads.huggingface.co/production/uploads/66de3482fd7d68a29319ecd9/3_vh0mRu_K-tdwrB6SmSU.png)

# img_pointV2 is available 🎉🎉🎉🥳🥳😀😀

This dataset is a collection of 3D point clouds generated from the `jagennath-hari/nyuv2` dataset.

**img_pointV2** is the second version of the `RAY-AUTRA-TECHNOLOGY/img_pointV` dataset. It is a spatialized version of the *NYU Depth V2* dataset, transforming classic indoor images into high-fidelity 3D point clouds (`.ply` files).

The main objective is to provide clean, ready-to-use 3D scenes for training 3D vision models, eliminating the need for users to manually handle RGB-D to point cloud conversion.
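For context, that conversion is standard pinhole back-projection: each pixel $(u, v)$ with depth $z$ maps to $X = (u - c_x)\,z / f_x$ and $Y = (v - c_y)\,z / f_y$. A minimal NumPy sketch; the intrinsic values below are illustrative Kinect-style placeholders, not the dataset's actual `camera_params.json` values:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into an (N, 3) point cloud
    with the pinhole model: X = (u - cx) * z / fx, Y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading

# Illustrative Kinect-style intrinsics; the dataset's real values
# live in camera_params.json.
depth = np.full((480, 640), 2.0)  # a flat surface 2 m from the camera
pts = backproject(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(pts.shape)  # (307200, 3)
```

This dataset ships the result of this step precomputed, so none of the above is required to use it.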

---

### Dataset Highlights

* **Point Clouds (.ply):** Complete 3D scenes featuring both geometry ($X, Y, Z$) and color ($R, G, B$).
* **Metric Precision:** Every point is positioned in meters, following the real-world Kinect camera intrinsic parameters.
* **Cleaned & Uniform:** Clouds have been filtered to remove capture noise and voxel-downsampled to a 1 cm resolution (voxel size: $0.01$ m).
* **Integrated Labels:** Metadata preserves all original semantic and instance segmentation information.
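The 1 cm voxelization can be sketched in NumPy: snap each point to a $0.01$ m grid and keep one representative per occupied voxel. (Open3D's `voxel_down_sample` averages the points inside each voxel; keeping the first point is a simpler stand-in used here for brevity.)

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.01):
    """Keep one representative point per occupied voxel of side
    voxel_size (meters). Open3D's voxel_down_sample averages the points
    in each voxel; keeping the first point is a simpler approximation."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]

rng = np.random.default_rng(0)
dense = rng.uniform(0.0, 0.05, size=(10_000, 3))  # 10k points in a 5 cm cube
sparse = voxel_downsample(dense)  # at most 5**3 = 125 voxels survive
```

Again, the published clouds are already downsampled; this is only to show what "voxel size 0.01" means.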

---

### File Structure

| File/Folder | Description |
| :--- | :--- |
| `data/` | Directory containing the `.ply` files. |
| `metadata.arrow` | Central index linking IDs, filenames, and point counts (train/val/test splits). |
| `camera_params.json` | Optical parameters (intrinsics) used for the 3D reconstruction. |
| `class_names.json` | Dictionary of semantic classes (e.g., chair, wall, table). |
| `config.yaml` | Dataset configuration (license, format, normalization). |

---

> **IMPORTANT:** These files are fully compatible with major 3D libraries such as **Open3D**, **PyTorch Geometric**, and **PointNet++**.
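With Open3D a scene loads in one line via `open3d.io.read_point_cloud(path)`. For readers without Open3D, here is a dependency-free sketch of parsing the ASCII PLY variant. It assumes a fixed `x y z red green blue` property order and is illustrative only; binary PLY files need a real parser.

```python
import io

def read_ascii_ply(stream):
    """Parse a minimal ASCII PLY stream into lists of XYZ and RGB tuples.
    Assumes the vertex properties are ordered x, y, z, red, green, blue;
    real files should be read with a full parser such as
    open3d.io.read_point_cloud."""
    assert stream.readline().strip() == "ply"
    n = 0
    for line in stream:  # scan the header
        line = line.strip()
        if line.startswith("element vertex"):
            n = int(line.split()[-1])
        if line == "end_header":
            break
    xyz, rgb = [], []
    for _ in range(n):
        x, y, z, r, g, b = stream.readline().split()
        xyz.append((float(x), float(y), float(z)))
        rgb.append((int(r), int(g), int(b)))
    return xyz, rgb

# Tiny in-memory example with two colored points.
sample = io.StringIO(
    "ply\nformat ascii 1.0\nelement vertex 2\n"
    "property float x\nproperty float y\nproperty float z\n"
    "property uchar red\nproperty uchar green\nproperty uchar blue\n"
    "end_header\n"
    "0.0 0.0 1.5 255 0 0\n"
    "0.1 0.2 1.6 0 255 0\n"
)
xyz, rgb = read_ascii_ply(sample)
```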

RAY AUTRA TECHNOLOGY 2025