---
license: cc-by-4.0
tags:
- Adverse weather
- camera
- LiDAR sensor
- deep learning
- local rain amount
- Object detection
size_categories:
- 100K<n<1M
---
# Dataset Card for the InsideRain Dataset
Rain affects the sensors of highly automated vehicles. Current research analyses these influences on individual sensor domains; however, a quantitative analysis of rain effects on 3D object detection that spans multiple deep-learning models and sensor domains has not yet been performed. This work therefore presents a measurement setup and a labeled dataset to quantitatively analyze and compare sensor domains using various deep-learning methods.
## Dataset Details
As described in the corresponding paper, the recorded data from all sensors, including the sensors of the rain facility, is matched to the lowest sensor frame rate based on timestamps and is labeled for 3D object detection using the SUSTech POINTS tool. A figure in the corresponding paper shows the average number of detections at the point-cloud level for each sensor and rain amount, where the different sensor resolutions and the sensor-specific effect of the rain amount can be observed.

Overall, the dataset consists of 39969 samples. To obtain a soft-balanced dataset, the original measurements were performed for each rain amount with approximately equal driven scenarios and numbers of rounds. The captured dataset therefore has an almost equal distribution of both the rain amount and the coordinates of the VoI; only the rain amount of 0 mm/h has a low number of samples.

To estimate the effect of an almost completely balanced dataset, a hard-balanced dataset is additionally introduced. It is generated by matching each point from the rain amount with the largest number of samples to a similar point in every other rain amount using a nearest-neighbor search on the x- and y-position, with a maximum Euclidean distance of 1.5 m. Each point is selected at most once (sampling without replacement). If no nearest neighbor is found within the maximum Euclidean distance across all rain amounts, the sample is not included in the hard-balanced dataset, resulting in 12348 samples.

Afterward, the hard- and soft-balanced datasets are split into training (approx. 70 %), validation (approx. 15 %), and testing (approx. 15 %) subsets while sequences are maintained. Sequences are maintained by randomly drawing sequences of 13 samples, corresponding to 4.25 m, approximately the length of the car.
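The nearest-neighbor matching used for hard-balancing can be sketched as follows. This is a minimal illustration under the stated rules (x/y features, 1.5 m maximum Euclidean distance, sampling without replacement), not the authors' implementation; the function and array names are hypothetical.

```python
import numpy as np

def hard_balance_match(reference_xy, candidate_xy, max_dist=1.5):
    """Match each reference point (from the rain amount with the most
    samples) to at most one candidate point from another rain amount.

    Greedy nearest-neighbor search without replacement: once a candidate
    is matched, it cannot be matched again. Returns an index into
    candidate_xy for each reference point, or -1 if no candidate lies
    within max_dist (such samples are dropped from the hard-balanced set).
    """
    reference_xy = np.asarray(reference_xy, dtype=float)
    candidate_xy = np.asarray(candidate_xy, dtype=float)
    available = np.ones(len(candidate_xy), dtype=bool)
    matches = np.full(len(reference_xy), -1, dtype=int)

    for i, point in enumerate(reference_xy):
        # Euclidean distance to every still-available candidate
        dists = np.linalg.norm(candidate_xy - point, axis=1)
        dists[~available] = np.inf
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches[i] = j
            available[j] = False  # sampling without replacement
    return matches
```

In the full procedure this matching would be repeated against each of the other rain amounts, and a reference sample is kept only if a match is found for every rain amount.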
- Curated by: Lukas Haas
- License: CC-BY-4.0
## Dataset Structure
For each LiDAR sensor, the point clouds are stored in folders as .pcd files in the x, y, z, intensity format. For each camera, the matching frames are stored in folders as .png files at the recorded resolution of 1280 x 620 px. The InsideRain dataset, with its balancing and data splits, is stored as a .parquet file.
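A point cloud in the x, y, z, intensity format could be read as sketched below. This is an assumption-laden example: it handles only ASCII-encoded .pcd files (binary .pcd files require a different reader, e.g. Open3D), and the file path is hypothetical.

```python
import numpy as np

def load_pcd_ascii(path):
    """Parse an ASCII .pcd file with FIELDS x y z intensity.

    Minimal sketch: skips the header up to the DATA line and reads the
    remaining rows as an (N, 4) float array. Assumes DATA ascii.
    """
    with open(path) as f:
        lines = f.readlines()
    # header ends at the line starting with "DATA"
    data_start = next(
        i for i, line in enumerate(lines) if line.startswith("DATA")
    ) + 1
    points = np.loadtxt(lines[data_start:], dtype=np.float32)
    return points.reshape(-1, 4)  # columns: x, y, z, intensity
```

The .parquet file with the balancing and split assignments can then be read with standard tooling, e.g. `pandas.read_parquet(...)`.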
## Dataset Tasks
The InsideRain Dataset is an annotated sequential multi-modal rain dataset recorded from various camera and LiDAR sensors. The dataset supports 3D object detection, tracking, and rain amount prediction, as well as sensor data fusion and generation tasks.
## Ethical and Responsible Use, Limitations
Since the dataset was recorded on a test track and contains no third-party vehicles or persons, the anonymization of faces and license plates can be dispensed with. However, timestamp information is anonymized; temporal consistency is nevertheless guaranteed for all samples. It should be noted that the operational design domain is limited to the conditions depicted in the dataset, i.e., the presented scenarios, rain amounts, and light conditions.
## Citation
BibTeX:
TODO
APA:
TODO
