Commit 3b26a1b (parent ed2947f): Update README.md

README.md CHANGED
@@ -25,4 +25,41 @@ If you find our datasets useful for your research, please cite the [AstroVision
}
```

Please make sure to like the repository to show support!

# Data format

Following the popular [COLMAP data format](https://colmap.github.io/format.html), each data segment contains the files `images.bin`, `cameras.bin`, and `points3D.bin`, which contain the camera extrinsics and keypoints, the camera intrinsics, and the 3D point cloud data, respectively.

- `cameras.bin` encodes a dictionary of `camera_id` and [`Camera`](third_party/colmap/scripts/python/read_write_model.py) pairs. `Camera` objects are structured as follows:
  - `Camera.id`: the unique (and possibly noncontiguous) identifier for the `Camera`.
  - `Camera.model`: the camera model. We use the "PINHOLE" camera model, as AstroVision contains undistorted images.
  - `Camera.width` & `Camera.height`: the width and height of the sensor in pixels.
  - `Camera.params`: a `List` of camera parameters (intrinsics). For the "PINHOLE" camera model, `params = [fx, fy, cx, cy]`, where `fx` and `fy` are the focal lengths in $x$ and $y$, respectively, and (`cx`, `cy`) is the principal point of the camera.
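Given these conventions, the intrinsic matrix for the "PINHOLE" model can be assembled directly from `Camera.params`. A minimal sketch (the commented loading lines assume the `read_cameras_binary` helper from the referenced `read_write_model.py`; the numeric values and paths are illustrative):

```python
import numpy as np

def pinhole_K(params):
    """Build the 3x3 intrinsic matrix from PINHOLE params [fx, fy, cx, cy]."""
    fx, fy, cx, cy = params
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

# Loading a real segment would look like (paths illustrative):
# from read_write_model import read_cameras_binary
# cameras = read_cameras_binary("segment/cameras.bin")
# K = pinhole_K(cameras[next(iter(cameras))].params)

K = pinhole_K([500.0, 500.0, 512.0, 512.0])  # made-up intrinsics
```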

- `images.bin` encodes a dictionary of `image_id` and [`Image`](third_party/colmap/scripts/python/read_write_model.py) pairs. `Image` objects are structured as follows:
  - `Image.id`: the unique (and possibly noncontiguous) identifier for the `Image`.
  - `Image.tvec`: $\mathbf{r}^\mathcal{C_ i}_ {\mathrm{BC}_ i}$, i.e., the relative position of the origin of the camera frame $\mathcal{C}_ i$ with respect to the origin of the body-fixed frame $\mathcal{B}$, expressed in the $\mathcal{C}_ i$ frame.
  - `Image.qvec`: $\mathbf{q}_ {\mathcal{C}_ i\mathcal{B}}$, i.e., the relative orientation of the camera frame $\mathcal{C}_ i$ with respect to the body-fixed frame $\mathcal{B}$. The user may call `Image.qvec2rotmat()` to get the corresponding rotation matrix $R_ {\mathcal{C}_ i\mathcal{B}}$.
  - `Image.camera_id`: the identifier of the camera that was used to capture the image.
  - `Image.name`: the name of the corresponding image file, e.g., `00000000.png`.
  - `Image.xys`: all of the keypoints $\mathbf{p}^{(i)} _k$ in image $i$, stored as an ($N$, 2) array. In our case, the keypoints are the forward-projected model vertices.
  - `Image.point3D_ids`: the `point3D_id` for each keypoint in `Image.xys`, which can be used to fetch the corresponding `point3D` from the `points3D` dictionary.
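Together, `Image.qvec` and `Image.tvec` map a body-fixed point into camera frame $\mathcal{C}_ i$ via the rotation $R_ {\mathcal{C}_ i\mathcal{B}}$ and the stored translation. A self-contained sketch of that transform (the quaternion helper mirrors the COLMAP `[qw, qx, qy, qz]` convention; the numeric values are illustrative):

```python
import numpy as np

def qvec2rotmat(qvec):
    """COLMAP-convention unit quaternion [qw, qx, qy, qz] -> 3x3 rotation matrix."""
    w, x, y, z = qvec
    return np.array([
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*w*z,     2*x*z + 2*w*y],
        [2*x*y + 2*w*z,     1 - 2*x*x - 2*z*z, 2*y*z - 2*w*x],
        [2*x*z - 2*w*y,     2*y*z + 2*w*x,     1 - 2*x*x - 2*y*y],
    ])

def body_to_camera(qvec, tvec, X_body):
    """Map a body-fixed point into the camera frame: X_c = R_CB @ X_b + t."""
    return qvec2rotmat(qvec) @ np.asarray(X_body) + np.asarray(tvec)

# Identity rotation, camera translation of 5 units along +z:
X_c = body_to_camera([1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 5.0], [1.0, 2.0, 3.0])
# X_c == [1.0, 2.0, 8.0]
```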

- `points3D.bin` encodes a dictionary of `point3D_id` and [`Point3D`](third_party/colmap/scripts/python/read_write_model.py) pairs. `Point3D` objects are structured as follows:
  - `Point3D.id`: the unique (and possibly noncontiguous) identifier for the `Point3D`.
  - `Point3D.xyz`: the 3D coordinates of the landmark in the body-fixed frame, i.e., $\mathbf{\ell} _{k}^\mathcal{B}$.
  - `Point3D.image_ids`: the IDs of the images in which the landmark was observed.
  - `Point3D.point2D_idxs`: the index into `Image.xys` that corresponds to each landmark observation, i.e., `xy = images[Point3D.image_ids[k]].xys[Point3D.point2D_idxs[k]]` for some index `k`.
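These cross-references let you walk from a landmark to every pixel where it was observed. A toy example using hypothetical stand-in records (only the fields discussed above are modeled; the real reader returns richer objects):

```python
import numpy as np
from collections import namedtuple

# Minimal stand-ins for the COLMAP record types (illustrative only).
Image = namedtuple("Image", ["id", "xys", "point3D_ids"])
Point3D = namedtuple("Point3D", ["id", "xyz", "image_ids", "point2D_idxs"])

def observations(point3D, images):
    """Yield (image_id, xy) for every image in which a landmark was observed."""
    for image_id, idx in zip(point3D.image_ids, point3D.point2D_idxs):
        yield image_id, images[image_id].xys[idx]

images = {
    7: Image(7, np.array([[10.0, 20.0], [30.0, 40.0]]), np.array([3, 99])),
    8: Image(8, np.array([[50.0, 60.0]]), np.array([3])),
}
pt = Point3D(3, np.array([0.1, 0.2, 0.3]), np.array([7, 8]), np.array([0, 0]))

obs = list(observations(pt, images))
# obs -> [(7, array([10., 20.])), (8, array([50., 60.]))]
```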

These three data containers, along with the ground-truth shape model, completely describe the scene.

In addition to the scene geometry, each image is annotated with a landmark map, a depth map, and a visibility mask.

<a href="https://imgur.com/DGUC0ef"><img src="https://i.imgur.com/DGUC0ef.png" title="source: imgur.com" /></a>

- The _landmark map_ provides a consistent, discrete set of reference points for sparse correspondence computation and is derived by forward-projecting vertices from a medium-resolution (i.e., $\sim$ 800k facets) shape model onto the image plane. We classify visible landmarks by tracing rays (via the [Trimesh library](https://trimsh.org/)) from the landmarks toward the camera origin and recording the landmarks whose line-of-sight ray does not intersect the 3D model.
- The _depth map_ provides a dense representation of the imaged surface and is computed by backward-projecting a ray at each pixel in the image and recording the depth of the intersection between the ray and a high-resolution (i.e., $\sim$ 3.2 million facets) shape model.
- The _visibility mask_ provides an estimate of the non-occluded portions of the imaged surface.

**Note:** Instead of the traditional $z$-depth parametrization used for depth maps, we use the _absolute depth_, similar to the inverse depth parametrization.
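Assuming _absolute depth_ means the Euclidean distance from the camera center along the pixel's ray (our reading of the note above), converting it to a conventional $z$-depth only requires the ray's norm. A sketch (`abs_to_z_depth` and the intrinsics are illustrative):

```python
import numpy as np

def abs_to_z_depth(abs_depth, u, v, K):
    """Convert absolute (along-ray Euclidean) depth at pixel (u, v) to z-depth."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray with unit z-component
    return abs_depth / np.linalg.norm(ray)

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
# At the principal point the ray is the optical axis, so both depths agree:
z = abs_to_z_depth(10.0, 320.0, 240.0, K)
# z == 10.0
```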