Data Organization
1. Overview
The PlanaReLoc dataset is curated for the task of camera relocalization with the plane-centric pipeline introduced in the paper "PlanaReLoc: Camera Relocalization in 3D Planar Primitives via Region-Based Structure Matching". The dataset consists of a collection of scenes, each represented as an untextured map formed by an arrangement of multiple 3D planar primitives. For each scene, a set of RGB images is provided as queries (i.e., images to be relocalized), each associated with ground-truth annotations such as the plane segmentation and the camera pose. The motivation behind this dataset is to place a premium on planar primitives and investigate the use of 3D planar maps for leaner camera relocalization in structured environments.
Note that the dataset is built upon the ScanNet and 12Scenes datasets. Users are required to agree to and comply with the terms of use of these datasets before using the PlanaReLoc dataset.
2. Dataset Resources
3. Dataset Structure
The dataset contains two parts:
- scannet_planareloc_dataset: built upon ScanNet; consists of 1210 scenes with 45802 query images for training, and 303 scenes with 7735 query images for testing/validation. The total size is around 17.2 GB.
- s12scenes_planareloc_dataset: built upon 12Scenes; consists of 12 scenes with 1023 query images, used ONLY for cross-dataset evaluation. The total size is around 350 MB.
Here is the directory structure of the scannet_planareloc_dataset:
scannet_planareloc_dataset/
├── caches/                                          # batched into Arrow chunks for efficient loading
│   ├── maps/
│   │   ├── train_scene0000_00-scene0564_02.arrow    # 1210 scenes
│   │   ├── test_scene0575_00-scene0706_00.arrow     # 303 scenes
│   │   └── val_scene0581_00-scene0698_00.arrow      # 3 scenes
│   └── queries/
│       ├── train_000_003999.arrow
│       ├── train_001_007999.arrow
│       ├── ...
│       ├── test_000_003999.arrow
│       ├── test_001_007734.arrow
│       └── val_000_000103.arrow                     # 104 queries from 3 scenes for validation during training
├── map_glbs/                                        # in glb format for visualization and optional use
│   ├── scenexxxx_xx.glb
│   └── ...
├── cache_set_val_split.json                         # JSON files recording identifiers of maps and queries within different splits
├── cache_set_test_split.json
└── cache_set_train_split.json
s12scenes_planareloc_dataset follows a similar structure.
4. Dataset Details
JSON Files for Dataset Splits
JSON files named cache_set_{split}_split.json (e.g., cache_set_train_split.json) record the identifiers of the maps and queries included in each dataset split (train, test, val); these identifiers are used to retrieve data from the Arrow files under ./caches/. Each JSON file contains the following fields:
- queries: a list of identifiers for the query samples in the split. Each identifier corresponds to a unique query image and is typically in the format "{map_id}_{view_id}" (e.g., "scene0575_00_000000").
- maps: a list of meta information for each scene in the split, containing the unique identifier of the scene (e.g., "scene0575_00") as well as the number and the list of indices of the query samples associated with that scene.
- meta: a dictionary of metadata about the split, including:
  - num_maps: the total number of unique scenes in the split.
  - num_queries: the total number of query images in the split.
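Based on the field descriptions above, a split file can be inspected along these lines. The toy summary below is illustrative only (in particular, the per-scene field names inside maps are assumptions, not confirmed by the dataset):

```python
import json

# A minimal, made-up example mirroring the described schema.
toy_summary = {
    "queries": ["scene0575_00_000000", "scene0575_00_000001"],
    "maps": [{"id": "scene0575_00", "num_queries": 2, "query_indices": [0, 1]}],
    "meta": {"num_maps": 1, "num_queries": 2},
}
text = json.dumps(toy_summary)

summary = json.loads(text)
# The meta counts should agree with the lists themselves.
assert summary["meta"]["num_queries"] == len(summary["queries"])
assert summary["meta"]["num_maps"] == len(summary["maps"])
# Each query identifier is "{map_id}_{view_id}", so the map id is everything
# before the final underscore-separated view index.
map_ids = {q.rsplit("_", 1)[0] for q in summary["queries"]}
```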
Map Data
Map data is stored in Arrow files under ./caches/maps/, which contain the following fields:
- id: a unique identifier for each scene (e.g., "scene0575_00").
- primitives: a list of the planar primitives in that scene, where each primitive includes:
  - params: the plane parameters in the world coordinate space, a list of four values $[a, b, c, d]$ corresponding to the plane equation $ax + by + cz = d$. These parameters are normalized such that the normal vector $\mathbf{n} = (a, b, c)^T$ is a unit vector.
  - verts_2d: the 3D vertices of the planar primitive projected to 2D using a projection matrix $\mathbf{J}$, stored as a nested list of 2D coordinates.
  - proj_mat: the projection matrix $\mathbf{J}$ used to project the coplanar 3D vertices to 2D. The 3D vertices can be restored by multiplying the 2D vertices with the transpose of the projection matrix: $\mathbf{p}_{3d} = \mathbf{p}_{2d}\,\mathbf{J}^T + \mathbf{n} \cdot d$.
  - faces: the mesh faces of the planar primitive, represented as a nested list of vertex indices.
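The reconstruction formula above can be sanity-checked on a toy primitive. The values below are made up for illustration (a fronto-parallel plane with the world x/y axes as its 2D basis), not taken from the dataset:

```python
import numpy as np

# Toy primitive mirroring the schema above: plane z = 2,
# i.e. params = [a, b, c, d] = [0, 0, 1, 2] with unit normal n = (0, 0, 1).
params = np.array([0.0, 0.0, 1.0, 2.0])
n, d = params[:3], params[3]

# proj_mat is J with shape (3, 2), so that p_3d = p_2d @ J.T + n * d;
# here the plane's 2D basis is simply the world x/y axes.
proj_mat = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [0.0, 0.0]])
verts_2d = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
verts_3d = verts_2d @ proj_mat.T + n * d  # (N, 3)

# Every reconstructed vertex should satisfy a*x + b*y + c*z = d.
residual = verts_3d @ n - d
```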
Query Data
Query data is stored in Arrow files under ./caches/queries/, which contain the following fields:
- id: a unique identifier for each query sample, typically in the format "{map_id}_{view_id}" (e.g., "scene0575_00_000000").
- map_id: the unique identifier of the scene to which the query view belongs (e.g., "scene0575_00").
- image: the RGB image of the query view, encoded as bytes (JPEG format), with a fixed resolution of 480×640 (H×W).
- depth: the raw depth map of the query view, encoded as bytes (PNG format). The depth values are stored as 16-bit unsigned integers; the actual depth in meters is obtained by dividing the stored value by 1000 (e.g., a stored value of 1500 corresponds to a depth of 1.5 meters). The depth map shares the same resolution (480×640, H×W) as the RGB image and is pre-aligned, so no additional geometric transformation is required. Not used in PlanaReLoc's default pipeline.
- depth_from_plane: the depth map derived from the plane parameters in the query space, encoded as bytes (PNG format). Like the raw depth map, the values are 16-bit unsigned integers and convert to meters by dividing by 1000. Used in the first training phase of PlanaReLoc.
- pose_c2w: the camera-to-world transformation matrix of the query view, represented as a 4×4 nested list, with the translation component in meters. Provided as ground truth and used only for training and evaluation, not for inference.
- K: the intrinsic matrix of the query view, represented as a 3×3 nested list in the form [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
- plane_annos: a list of plane annotations for the query view, provided as ground truth and used only for training and evaluation, not for inference. Each annotation corresponds to an observed plane in the query view and includes:
  - rle: the run-length encoding of the plane mask, which can be decoded with pycocotools.mask.decode() to obtain the binary mask of the plane in the query view.
  - params_c: the plane parameters in the camera coordinate system, a list of four values $[a, b, c, d]$ corresponding to the plane equation $ax + by + cz = d$. These parameters are normalized such that the normal vector $\mathbf{n} = (a, b, c)^T$ is a unit vector.
  - params_w: the plane parameters in the world coordinate system.
  - map_prim_id: the index of the corresponding planar primitive in the map, which is used to establish correspondences between query primitives and map primitives.
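Two of the conventions above (16-bit depth in millimeters, and pose_c2w as a 4×4 camera-to-world transform) can be sketched with made-up values:

```python
import numpy as np

# Made-up values for illustration; real data comes from the Arrow files.
depth_raw = np.array([[0, 1500], [2000, 500]], dtype=np.uint16)  # stored 16-bit values
depth_m = depth_raw.astype(np.float32) / 1000.0                  # depth in meters

# pose_c2w maps homogeneous camera-space points to world space.
pose_c2w = np.eye(4)
pose_c2w[:3, 3] = [0.1, 0.0, 2.0]          # translation in meters (illustrative)
p_cam = np.array([0.0, 0.0, 1.5, 1.0])     # a point 1.5 m in front of the camera
p_world = pose_c2w @ p_cam                 # the same point in world coordinates
```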
5. Uses
How to download?
# change to the directory where you want to store the dataset, e.g.,
mkdir datasets && cd datasets
# download the dataset with the Hugging Face CLI
hf download hanchiao/PlanaReLoc --repo-type dataset --local-dir .
How to use?
from typing import Literal
from datasets import load_dataset
# specify the dataset and the split, e.g., if you want to load the test split of the scannet dataset:
dataset: Literal["scannet", "s12scenes"] = "scannet" # or "s12scenes"
split: Literal["train", "test", "val"] = "test" # or "train", "val"
cache_path = f"datasets/{dataset}_planareloc_dataset/caches/"
queries = load_dataset(
    "arrow",
    data_files={
        split: cache_path + f"queries/{split}_*.arrow"
    },
    # cache_dir=".cache/huggingface/datasets"  # specify the cache directory if needed
)[split]
maps = load_dataset(
    "arrow",
    data_files={
        split: cache_path + f"maps/{split}_*.arrow"
    },
    # cache_dir=".cache/huggingface/datasets"  # specify the cache directory if needed
)[split]
# build dicts mapping identifiers to indices in the loaded datasets for retrieval
q_key2idx = {k: i for i, k in enumerate(queries["id"])}
m_key2idx = {k: i for i, k in enumerate(maps["id"])}
You can retrieve any query sample that is recorded in the JSON file for that split, e.g., cache_set_test_split.json for the test split:
import json
# load the JSON file for the specified dataset and split
json_file = f"datasets/{dataset}_planareloc_dataset/cache_set_{split}_split.json"
with open(json_file, "r") as f:
    summary = json.load(f)
for d in summary["queries"]:
    query = queries[q_key2idx[d]]
    scene_map = maps[m_key2idx[query["map_id"]]]
    ...  # use the retrieved data for training or evaluation
Then, to use the retrieved query and map data, refer to the field descriptions in the Map Data and Query Data sections above. For example, you can decode the RGB image, depth map and plane masks of a query sample as follows:
import cv2
import numpy as np
from pycocotools import mask as cocomask
image = cv2.imdecode(np.frombuffer(query['image'], dtype=np.uint8), cv2.IMREAD_COLOR)
depth = cv2.imdecode(np.frombuffer(query['depth'], dtype=np.uint8), cv2.IMREAD_UNCHANGED).astype(np.float32) / 1000.0
pan_seg_gt = np.full(image.shape[:2], -1, dtype=np.int32) # (H, W), -1 for non-plane pixels
for i, anno in enumerate(query['plane_annos']):
    pan_seg_gt[cocomask.decode(anno["rle"]) != 0] = i
Moreover, you can recover the 3D vertices of each planar primitive in the map by:
primitives = []
for p in scene_map["primitives"]:
    params = np.array(p['params'])
    verts_3d = np.array(p['verts_2d']) @ np.array(p['proj_mat']).T + params[:3] * params[3]  # (N, 3)
    new_p = {
        "params": params,
        "verts_3d": verts_3d,
        "faces": np.array(p['faces'])
    }
    primitives.append(new_p)
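Relatedly, the depth_from_plane field described in the Query Data section can in principle be reproduced from the camera-frame plane parameters params_c and the intrinsics K. A minimal pinhole-geometry sketch with made-up values (this is not the dataset's actual generation code, and it ignores per-pixel plane masks by filling the whole image from one plane):

```python
import numpy as np

# Illustrative intrinsics and camera-frame plane; values are made up.
fx, fy, cx, cy = 500.0, 500.0, 320.0, 240.0
params_c = np.array([0.0, 0.0, 1.0, 2.0])  # fronto-parallel plane 2 m away
n, d = params_c[:3], params_c[3]

H, W = 480, 640
u, v = np.meshgrid(np.arange(W), np.arange(H))
# Back-project each pixel to a ray with unit z-component, then intersect with
# the plane: for p = t * r with n . p = d, depth = p_z = d / (n . r) since r_z = 1.
rays = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones_like(u, dtype=float)], axis=-1)
depth_plane = d / (rays @ n)  # (H, W) depth in meters
```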
6. Annotations
Annotation process
- To be updated