SoccerNetPro Localization (Tennis)
This repository provides a tennis action spotting / localization dataset in an OpenSportsLab / SoccerNet-style format.
The dataset is organized by split (train/, valid/, test/) with video clips and corresponding localization annotations in JSON.
## Task
- Task type: `action_spotting` (a.k.a. temporal action localization / event spotting)
- Annotation granularity: clip-relative timestamps in milliseconds (`position_ms`)
- Label format: single-label events (one label per event)
## Main branch structure
Current structure on main:
```
main/
├── annotations-localization-train.json
├── annotations-localization-valid.json
├── annotations-localization-test.json
├── train/
│   ├── <clip>.mp4
│   └── ...
├── valid/
│   ├── <clip>.mp4
│   └── ...
└── test/
    ├── <clip>.mp4
    └── ...
```
- The three folders `train/`, `valid/`, `test/` contain thousands of short video clips (`.mp4`).
- The three JSON files contain the localization labels for the corresponding split.
## Annotation format
Each annotation file follows a SoccerNet-like schema:
Top-level keys:
- `version`: format version (e.g., `"2.0"`)
- `task`: `"action_spotting"`
- `dataset_name`: dataset identifier
- `labels`: list of valid event classes under a given `head_name`
- `data`: list of items (each item corresponds to one clip)
### `data[]` item fields
Each item contains:
- `id`: stable item identifier
- `inputs`: list containing a video descriptor
- `events`: list of labeled events in that clip
- `metadata`: optional extra info such as `fps`, `width`, `height`, etc.
Example (simplified):
```json
{
  "id": "Tennis_some_clip_name",
  "inputs": [
    {
      "type": "video",
      "path": "test/some_clip_name.mp4",
      "fps": 25.0
    }
  ],
  "events": [
    {
      "head": "tennis_action",
      "label": "near_court_serve",
      "position_ms": "4240",
      "comment": "serve"
    }
  ]
}
```
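In code, an item with this structure can be parsed like any other JSON. A minimal Python sketch, using the simplified example inline; in practice you would `json.load()` one of the `annotations-localization-*.json` files and iterate over its `data` list:

```python
import json

# Inline copy of the simplified example item above; a real workflow would
# read the file instead, e.g. json.load(open("annotations-localization-test.json")).
item_json = """
{
  "id": "Tennis_some_clip_name",
  "inputs": [{"type": "video", "path": "test/some_clip_name.mp4", "fps": 25.0}],
  "events": [
    {"head": "tennis_action", "label": "near_court_serve",
     "position_ms": "4240", "comment": "serve"}
  ]
}
"""

item = json.loads(item_json)
video = item["inputs"][0]            # the clip's video descriptor
for event in item["events"]:
    ms = int(event["position_ms"])   # quoted as a string in the example, so cast
    print(video["path"], event["label"], ms)
```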
## Timestamp ↔ video position relationship (IMPORTANT)
For each event:
- `position_ms` is the clip-relative time in milliseconds.
- It is computed from the clip-relative frame index using:

```
position_ms = round(frame / fps * 1000)
```
So:

- `position_ms = 0` corresponds to the first frame of the clip.
- `position_ms = 4240` means the event happens around 4.240 seconds after the clip start.
If you need the approximate frame index back:
```
frame ≈ round(position_ms / 1000 * fps)
```
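These two conversions can be wrapped in small helper functions. A sketch of the formulas above; at 25 fps each frame spans exactly 40 ms, so the round trip is lossless, while non-integer frame rates may shift by a frame:

```python
def frame_to_position_ms(frame: int, fps: float) -> int:
    # Dataset convention: position_ms = round(frame / fps * 1000)
    return round(frame / fps * 1000)

def position_ms_to_frame(position_ms: int, fps: float) -> int:
    # Approximate inverse: frame ≈ round(position_ms / 1000 * fps)
    return round(position_ms / 1000 * fps)

# At 25 fps, frame 106 maps to the 4240 ms of the example event.
print(frame_to_position_ms(106, 25.0))   # → 4240
print(position_ms_to_frame(4240, 25.0))  # → 106
```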
## Labels
Labels are stored under:
`labels.<head_name>.labels`
where `<head_name>` is typically `tennis_action`.
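A sketch of reading the class list, assuming the nesting implied by the `labels.<head_name>.labels` path; the inline skeleton here is hypothetical, and the real files carry the full class list:

```python
# Hypothetical annotation skeleton: only "near_court_serve" is taken from the
# example above; the actual class names come from the JSON files themselves.
ann = {
    "labels": {
        "tennis_action": {"labels": ["near_court_serve"]}
    }
}

head_name = next(iter(ann["labels"]))           # typically "tennis_action"
classes = ann["labels"][head_name]["labels"]    # list of valid event classes
print(head_name, classes)
```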
## Notes
- Paths in `inputs[].path` are relative paths pointing to the split folder: `train/<clip>.mp4`, `valid/<clip>.mp4`, `test/<clip>.mp4`.
- The repository includes `.gitattributes` for Git LFS handling of large files.
## Quick sanity check
Pick one entry in `annotations-localization-test.json`:

- Open the clip video located at `test/<clip>.mp4`.
- Jump to `position_ms / 1000` seconds.
- You should observe the corresponding tennis event near that timestamp.
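This check can be scripted. A sketch that builds an `ffplay` seek command for the simplified example above; `ffplay` and its `-ss` (seek to time before playback) flag are part of FFmpeg, and the clip name is the one from the example:

```python
position_ms = 4240                      # from the chosen annotation entry
clip_path = "test/some_clip_name.mp4"   # inputs[0]["path"] of that entry

seek_seconds = position_ms / 1000       # milliseconds → seconds
cmd = f"ffplay -ss {seek_seconds} {clip_path}"
print(cmd)                              # → ffplay -ss 4.24 test/some_clip_name.mp4
```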
## Data Source & Attribution
The tennis clips and raw annotations in this dataset are derived from the tennis data released in the official repository of the paper:
Spotting Temporally Precise, Fine-Grained Events in Video (ECCV 2022)
James Hong, Haotian Zhang, Michaël Gharbi, Matthew Fisher, Kayvon Fatahalian
Source repository (tennis data):
https://github.com/jhong93/spot/tree/main/data/tennis
If you use this dataset, please cite the original paper:
```bibtex
@inproceedings{precisespotting_eccv22,
  author={Hong, James and Zhang, Haotian and Gharbi, Micha\"{e}l and Fisher, Matthew and Fatahalian, Kayvon},
  title={Spotting Temporally Precise, Fine-Grained Events in Video},
  booktitle={ECCV},
  year={2022}
}
```