GelSLAM Dataset: Tactile Object Tracking and Tactile Reconstruction

License: MIT

We release two tactile perception datasets collected using the GelSight Mini (without markers) sensor:

  • Tactile-based Long-Horizon Object Tracking Dataset
  • Tactile-based Object Reconstruction Dataset

For real-time long-horizon 6DoF object pose estimation and high-fidelity object reconstruction using only touch, please refer to our work GelSLAM. For short-horizon object tracking using only touch, see NormalFlow and the corresponding NormalFlow Dataset.


Long-Horizon Object Tracking Dataset

This benchmark dataset is designed to evaluate the performance of tactile-based 6DoF object tracking algorithms over long horizons.

Collection Setup

The dataset includes 20 objects and 140 tracking episodes, with 7 episodes per object corresponding to seven distinct initial contact locations. The 20 objects consist of 14 everyday objects, 3 small textured objects, and 3 geometric shapes. The objects and their associated initial contact locations are shown below.

Tracking objects and initial contact locations

In each episode, the tactile sensor moves against a fixed object on the workbench under continuous contact, without contact breaks. A motion capture system records the sensor pose during contact, providing precise 6DoF ground-truth poses for every tactile frame. The data collection setup is illustrated in the figure below.

Tracking data collection setup

Dataset Structure

The tracking dataset is located in dataset/tracking_dataset/. Each episode directory contains:

  • gelsight.avi: Tactile video with N frames.
  • true_start_T_currs.npy: An (N, 4, 4) array representing the sensor’s 6DoF pose for each tactile frame in gelsight.avi, formatted as homogeneous transformation matrices.
  • contact_masks.npy: An (N, H, W) array of contact masks computed from the tactile images in gelsight.avi.
  • gradient_maps.npy: An (N, H, W, 2) array of gradient maps computed from the tactile images in gelsight.avi.
  • background.png: Reference tactile image without contact.

Contact masks and gradient maps are generated from the tactile video using gs_sdk, with the calibration file located in dataset/gelsight_calibrations/gelsight3. All episodes share the same calibration file.
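The per-episode arrays can be loaded directly with NumPy. Below is a minimal loading sketch assuming the file layout described above; the function name and the returned dictionary keys are illustrative, not part of the dataset release.

```python
import os

import numpy as np


def load_episode(episode_dir):
    """Load the pose and per-frame arrays of one tracking episode."""
    poses = np.load(os.path.join(episode_dir, "true_start_T_currs.npy"))
    masks = np.load(os.path.join(episode_dir, "contact_masks.npy"))
    grads = np.load(os.path.join(episode_dir, "gradient_maps.npy"))
    # Sanity checks: the frame count N must agree across arrays, each pose
    # must be a 4x4 homogeneous transform, and gradient maps carry two
    # channels (surface gradients along the two image axes).
    n = poses.shape[0]
    assert poses.shape == (n, 4, 4)
    assert masks.shape[0] == n and grads.shape[0] == n
    assert grads.shape[-1] == 2
    return {"poses": poses, "contact_masks": masks, "gradient_maps": grads}
```

The tactile video itself (gelsight.avi) can be read frame-by-frame with any standard video reader; its frame count matches the first dimension of the arrays above.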

Dataset Statistics

On average, each episode lasts approximately 21 seconds and contains 523 frames. The dataset exhibits substantial accumulated 6DoF motion per episode, enabling rigorous evaluation of long-horizon tracking performance. The average accumulated 6DoF motion for all episodes is shown below.

Tracking dataset statistics


Object Reconstruction Dataset

This dataset is designed to qualitatively evaluate the performance of tactile-based object reconstruction algorithms.


Data Collection Setup

The dataset contains 15 objects: 3 tool handles, 7 food items, 4 rocks and fossils, and 1 textured object. Ground-truth meshes are not provided. Data is collected in the in-the-wild setup shown below, where both the object and the GelSight Mini sensor are held by hand during scanning. Contact breaks and re-initializations are frequent and may exceed 100 occurrences for certain objects. Scanning trajectories are guided by visualizing the real-time reconstruction results from GelSLAM, allowing the operator to adaptively explore the object surface for full coverage.

Reconstruction data collection setup

Dataset Structure

The reconstruction dataset is located in dataset/reconstruction_dataset/. For each object, the dataset contains a single tactile scanning episode consisting of:

  • gelsight.avi: Full tactile video.
  • config.yaml: Configuration file specifying sensor calibration and device information.

The calibration file path and device information are specified under device_config in config.yaml. To evaluate your own reconstruction algorithm, only the tactile video, calibration file, and device information are required. Reconstruction results obtained using GelSLAM are provided in dataset/gelslam_reconstruction_results/ for qualitative comparison with your own method; they can be reproduced by running the GelSLAM code.
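For orientation, a config.yaml might look roughly like the fragment below. Only the device_config key and its role (pointing to calibration and device information) are stated above; the nested key names and values here are placeholders, so inspect your copy of the file for the exact schema.

```yaml
# Hypothetical sketch of config.yaml; field names are illustrative.
device_config:
  calibration_path: dataset/gelsight_calibrations/...  # placeholder path
  device: GelSight Mini                                # placeholder value
```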

Dataset Statistics

The smallest object in the dataset is the Seed (8 × 8 × 8 mm), and the largest is the Avocado (85 × 61 × 58 mm). Tactile video durations range from 1 to 30 minutes per object. Detailed statistics for each object, along with object images and the corresponding GelSLAM reconstruction results, are shown in the figure below.

Reconstruction dataset statistics


Cite Us

If you find this dataset useful, please consider citing our arXiv paper:

@misc{huang2025gelslam,
      author={Hung-Jui Huang and Mohammad Amin Mirzaee and Michael Kaess and Wenzhen Yuan},
      title={GelSLAM: A Real-time, High-Fidelity, and Robust 3D Tactile SLAM System}, 
      year={2025},
      eprint={2508.15990},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2508.15990}, 
}
