task_categories:
- time-series-forecasting
size_categories:
- 1K<n<10K
---
# REAL-V-TSFM Dataset

REAL-V-TSFM is a novel time series dataset derived entirely from real-world video data using optical flow methods. It was created to evaluate the generalization capabilities of Time Series Foundation Models (TSFMs) on realistic temporal dynamics, bridging the gap between synthetic benchmarks and real data.

<p align="center">
  <img src="logo.png" alt="REAL-V-TSFM logo" width="300"/>
</p>

## Dataset Overview

- **Extraction Method:** Uses the Lucas-Kanade optical flow algorithm to track pixel trajectories at detected keypoints in videos. The x and y coordinates of each track form separate univariate time series.
- **Source Videos:** Mainly sourced from the LaSOT dataset, which contains long videos primarily featuring humans and animals.
- **Number of Time Series:** 6,130 time series from 609 distinct objects, providing substantial categorical diversity.
- **Sequence Length:** Average length of 2,043 time steps; lengths vary from 1,000 to 8,000 time steps.
- **Data Characteristics:** Approximately 44% of series are stationary according to the Augmented Dickey-Fuller test; average information entropy is 3.88 bits, indicating complexity and diversity.
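
The card does not specify how the per-series entropy figure was computed; the following is a minimal numpy sketch of one plausible estimator (histogram-based Shannon entropy), where the bin count is an assumption, not the dataset authors' choice:

```python
import numpy as np

def series_entropy(values, bins=32):
    """Estimate the Shannon entropy (in bits) of a univariate series
    by histogramming its values. The discretization (bin count) is a
    free choice; the dataset card does not specify the one used."""
    counts, _ = np.histogram(np.asarray(values, dtype=float), bins=bins)
    probs = counts[counts > 0] / counts.sum()
    return float(-(probs * np.log2(probs)).sum())
```

A series whose values spread uniformly over the bins yields the maximum entropy of `log2(bins)` bits, which gives a feel for where the reported 3.88-bit average sits.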

## Dataset Construction Pipeline

1. Select videos and extract frame-by-frame images.
2. Perform foreground detection with the Mixture of Gaussians 2 (MOG2) background subtractor to mask background pixels.
3. Detect corners on foreground objects with the Shi–Tomasi corner detection algorithm.
4. Track keypoints across frames using pyramidal Lucas-Kanade optical flow, applying forward-backward consistency checks to filter unstable tracks.
5. Interpolate tracks to the longest sequence length per video; keep the five least correlated tracks to ensure diversity and reduce noise.
6. Store the x and y coordinates of each track as separate time series in the dataset.
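
The forward-backward filtering in step 4 can be sketched as below. The array shapes and error threshold are assumptions; in practice the forward- and backward-tracked points would come from OpenCV's `cv2.calcOpticalFlowPyrLK`:

```python
import numpy as np

def filter_tracks(p0, p1, p0_back, max_fb_error=1.0):
    """Forward-backward consistency check for optical flow tracks.

    p0:      (N, 2) keypoint positions in frame t
    p1:      (N, 2) positions tracked forward to frame t+1
    p0_back: (N, 2) positions obtained by tracking p1 backward to frame t

    A track is kept only if the round-trip error ||p0 - p0_back|| is
    below the threshold; the threshold value here is illustrative.
    """
    fb_error = np.linalg.norm(p0 - p0_back, axis=1)
    keep = fb_error < max_fb_error
    return p1[keep], keep
```

Tracks whose backward estimate lands far from the starting point are treated as unstable and discarded.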

## Evaluation and Benchmarking

- State-of-the-art TSFMs were evaluated in a zero-shot forecasting setting on REAL-V-TSFM and compared against the M4 dataset.
- Metrics used: Mean Absolute Percentage Error (MAPE), Symmetric MAPE (sMAPE), Aggregate Relative Weighted Quantile Loss (WQL), and Aggregate Relative Mean Absolute Scaled Error (MASE).
- Results show performance degradation on REAL-V-TSFM compared to M4, indicating current TSFMs have limited generalizability to real-world video-derived time series.
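
For reference, the textbook per-series definitions of two of these metrics can be sketched as follows; the aggregate/relative variants reported above may pool scores across series differently:

```python
import numpy as np

def smape(y_true, y_pred):
    """Symmetric MAPE in percent: mean of 2|y - yhat| / (|y| + |yhat|)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(2.0 * np.abs(y_pred - y_true)
                         / (np.abs(y_true) + np.abs(y_pred))) * 100.0)

def mase(y_train, y_true, y_pred, m=1):
    """MASE: forecast MAE scaled by the in-sample naive (lag-m) MAE."""
    y_train = np.asarray(y_train, dtype=float)
    scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    mae = np.mean(np.abs(np.asarray(y_pred, float) - np.asarray(y_true, float)))
    return float(mae / scale)
```

A MASE of 1.0 means the forecast is no better than the naive baseline on the training history, which makes cross-dataset degradation easy to interpret.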

## Dataset Columns

The dataset contains six primary columns:

- **t**: temporal index of the time series,
- **target**: the value of the time series at time `t`,
- **axis**: the spatial axis (x or y) represented by the time series,
- **track_id**: the identifier assigned during optical flow tracking, which is not guaranteed to be unique,
- **timestamp**: a field introduced to satisfy the input requirements of some models (e.g., Google's TimesFM (Time Series Foundation Model)), with no intrinsic semantic meaning,
- **prefix**: denotes the source video from the LaSOT dataset from which the data is derived.
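
Under this schema, one univariate series corresponds to one `(prefix, track_id, axis)` group (grouping by `prefix` as well, since `track_id` alone is not guaranteed to be unique). A minimal pandas sketch with made-up rows illustrating the reconstruction:

```python
import pandas as pd

# Toy rows mimicking the six-column schema; all values are illustrative.
rows = [
    {"t": 0, "target": 12.0, "axis": "x", "track_id": 3, "timestamp": "2000-01-01", "prefix": "dog-1"},
    {"t": 1, "target": 12.4, "axis": "x", "track_id": 3, "timestamp": "2000-01-02", "prefix": "dog-1"},
    {"t": 0, "target": 40.0, "axis": "y", "track_id": 3, "timestamp": "2000-01-01", "prefix": "dog-1"},
    {"t": 1, "target": 39.5, "axis": "y", "track_id": 3, "timestamp": "2000-01-02", "prefix": "dog-1"},
]
df = pd.DataFrame(rows)

# One univariate series per (prefix, track_id, axis) group, ordered by t.
series = {key: group.sort_values("t")["target"].to_numpy()
          for key, group in df.groupby(["prefix", "track_id", "axis"])}
```

The `timestamp` column is deliberately ignored here, since (as noted above) it exists only to satisfy model input formats.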