---
license: mit
task_categories:
- question-answering
language:
- en
pretty_name: TLV Dataset
---
# Temporal Logic Video (TLV) Dataset
<!-- PROJECT LOGO -->
<br />
<div align="center">
<h3 align="center">Temporal Logic Video (TLV) Dataset</h3>
<p align="center">
Synthetic and real video dataset with temporal logic annotation
<br />
<a href="https://github.com/UTAustin-SwarmLab/temporal-logic-video-dataset"><strong>Explore the GitHub »</strong></a>
<br />
<br />
<a href="https://anoymousu1.github.io/nsvs-anonymous.github.io/">NSVS-TL Project Webpage</a>
·
<a href="https://github.com/UTAustin-SwarmLab/Neuro-Symbolic-Video-Search-Temploral-Logic">NSVS-TL Source Code</a>
</p>
</div>
## Overview
The Temporal Logic Video (TLV) Dataset addresses the scarcity of state-of-the-art video datasets for long-horizon, temporally extended activity and object detection. It comprises two main components:
1. Synthetic datasets: Generated by concatenating static images from established computer vision datasets (COCO and ImageNet), allowing for the introduction of a wide range of Temporal Logic (TL) specifications.
2. Real-world datasets: Based on open-source autonomous vehicle (AV) driving datasets, specifically NuScenes and Waymo.
## Table of Contents
- [Dataset Composition](#dataset-composition)
- [Dataset](#dataset)
- [License](#license)
## Dataset Composition
### Synthetic Datasets
- Source: COCO and ImageNet
- Purpose: Introduce artificial Temporal Logic specifications
- Generation Method: Image stitching from static datasets
### Real-world Datasets
- Sources: NuScenes and Waymo
- Purpose: Provide real-world autonomous vehicle scenarios
- Annotation: Temporal Logic specifications added to existing data
## Dataset
Although we provide source code to generate datasets from different data sources, we release dataset v1 as a proof of concept.
### Dataset Structure
The data is offered as serialized objects, each containing a set of frames with annotations.
#### File Naming Convention
`<tlv_data_type>:source:<datasource>-number_of_frames:<number_of_frames>-<uuid>.pkl`
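As an illustration, a filename following this convention can be parsed with a regular expression. This is a sketch; the example filename below is hypothetical, not an actual file from the release:

```python
import re

# Pattern mirroring the naming convention:
# <tlv_data_type>:source:<datasource>-number_of_frames:<number_of_frames>-<uuid>.pkl
FILENAME_PATTERN = re.compile(
    r"^(?P<tlv_data_type>[^:]+)"
    r":source:(?P<datasource>.+?)"
    r"-number_of_frames:(?P<number_of_frames>\d+)"
    r"-(?P<uuid>[0-9a-f\-]+)\.pkl$"
)

def parse_tlv_filename(name: str) -> dict:
    """Extract the metadata fields encoded in a TLV dataset filename."""
    match = FILENAME_PATTERN.match(name)
    if match is None:
        raise ValueError(f"Not a TLV dataset filename: {name}")
    fields = match.groupdict()
    fields["number_of_frames"] = int(fields["number_of_frames"])
    return fields

# Hypothetical example filename (field values for illustration only):
info = parse_tlv_filename(
    "tlv_synthetic:source:coco-number_of_frames:250"
    "-123e4567-e89b-12d3-a456-426614174000.pkl"
)
```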
#### Object Attributes
Each serialized object contains the following attributes:
- `ground_truth`: Boolean indicating whether the dataset contains ground truth labels
- `ltl_formula`: Temporal logic formula applied to the dataset
- `proposition`: The set of propositions used in `ltl_formula`
- `number_of_frame`: Total number of frames in the dataset
- `frames_of_interest`: Frames of interest that satisfy the `ltl_formula`
- `labels_of_frames`: Labels for each frame
- `images_of_frames`: Image data for each frame
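A serialized object can be loaded with Python's `pickle` module and its attributes accessed directly. The sketch below uses a stand-in `SimpleNamespace` object with the attributes listed above so it is runnable without the released files; the actual `.pkl` files deserialize to the project's own class, and the example field values here are hypothetical:

```python
import pickle
from types import SimpleNamespace

# Stand-in record mirroring the attributes listed above (illustration only;
# the released .pkl files contain the project's own serialized objects).
record = SimpleNamespace(
    ground_truth=True,
    ltl_formula="F prop1",
    proposition={"prop1"},
    number_of_frame=3,
    frames_of_interest=[1, 2],            # hypothetical indices
    labels_of_frames=["", "prop1", "prop1"],
    images_of_frames=[None, None, None],  # image arrays in the real data
)

# Round-trip through pickle, as a released .pkl file would be loaded:
blob = pickle.dumps(record)
data = pickle.loads(blob)
print(data.ltl_formula)      # temporal logic formula applied to the dataset
print(data.number_of_frame)  # total number of frames
```

To load a real file, replace the round-trip with `data = pickle.load(open(path, "rb"))` for a path inside the release.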
You can download the dataset from this repository. The directory structure is as follows:
```
tlv-dataset-v1/
├── tlv_real_dataset/
│   ├── prop1Uprop2/
│   └── (prop1&prop2)Uprop3/
└── tlv_synthetic_dataset/
    ├── Fprop1/
    ├── Gprop1/
    ├── prop1&prop2/
    ├── prop1Uprop2/
    └── (prop1&prop2)Uprop3/
```
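To enumerate the released files, the directory tree above can be indexed by split and TL-specification folder. A minimal sketch, assuming only that `.pkl` files sit two levels below the root as shown:

```python
from collections import defaultdict
from pathlib import Path

def index_tlv_dataset(root):
    """Group .pkl files under tlv-dataset-v1/ by (split, TL-spec directory)."""
    index = defaultdict(list)
    for pkl in Path(root).rglob("*.pkl"):
        # e.g. .../tlv_real_dataset/prop1Uprop2/<file>.pkl
        split = pkl.parent.parts[-2]
        spec = pkl.parent.name
        index[(split, spec)].append(pkl.name)
    return dict(index)
```

Calling `index_tlv_dataset("tlv-dataset-v1")` then yields, for each split and formula directory, the list of serialized dataset files it contains.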
#### Dataset Statistics
1. Total number of frames

| Ground Truth TL Specifications | Synthetic (COCO) | Synthetic (ImageNet) | Real (Waymo) | Real (NuScenes) |
| --- | ---: | ---: | ---: | ---: |
| Eventually Event A | - | 15,750 | - | - |
| Always Event A | - | 15,750 | - | - |
| Event A And Event B | 31,500 | - | - | - |
| Event A Until Event B | 15,750 | 15,750 | 8,736 | 19,808 |
| (Event A And Event B) Until Event C | 5,789 | - | 7,459 | 7,459 |
2. Total number of datasets

| Ground Truth TL Specifications | Synthetic (COCO) | Synthetic (ImageNet) | Real (Waymo) | Real (NuScenes) |
| --- | ---: | ---: | ---: | ---: |
| Eventually Event A | - | 60 | - | - |
| Always Event A | - | 60 | - | - |
| Event A And Event B | 120 | - | - | - |
| Event A Until Event B | 60 | 60 | 45 | 494 |
| (Event A And Event B) Until Event C | 97 | - | 30 | 186 |
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
## Connect with Me
<p align="center">
<em>Feel free to connect with me through these professional channels:</em>
</p>
<div style="display: flex; justify-content: center; align-items: center; flex-wrap: nowrap;">
<a href="https://www.linkedin.com/in/mchoi07/" target="_blank"><img src="https://img.shields.io/badge/LinkedIn-0077B5?style=flat-square&logo=linkedin&logoColor=white" alt="LinkedIn" style="margin: 0 5px;"/></a>
<a href="mailto:minkyu.choi@utexas.edu"><img src="https://img.shields.io/badge/Email-D14836?style=flat-square&logo=gmail&logoColor=white" alt="Email" style="margin: 0 5px;"/></a>
<a href="https://scholar.google.com/citations?user=ai4daB8AAAAJ&hl" target="_blank"><img src="https://img.shields.io/badge/Scholar-4285F4?style=flat-square&logo=google-scholar&logoColor=white" alt="Google Scholar" style="margin: 0 5px;"/></a>
<a href="https://minkyuchoi-07.github.io" target="_blank"><img src="https://img.shields.io/badge/Website-00C7B7?style=flat-square&logo=internet-explorer&logoColor=white" alt="Website" style="margin: 0 5px;"/></a>
<a href="https://x.com/MinkyuChoi7" target="_blank"><img src="https://img.shields.io/badge/Twitter-1DA1F2?style=flat-square&logo=twitter&logoColor=white" alt="Twitter" style="margin: 0 5px;"/></a>
</div>
## Citation
If you find this repo useful, please cite our paper:
```bibtex
@inproceedings{Choi_2024_ECCV,
author={Choi, Minkyu and Goel, Harsh and Omama, Mohammad and Yang, Yunhao and Shah, Sahil and Chinchali, Sandeep},
title={Towards Neuro-Symbolic Video Understanding},
booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
month={September},
year={2024}
}
``` |