SoccerNetPro Localization (Tennis)

This repository provides a tennis action spotting / localization dataset in an OpenSportsLab / SoccerNet-style format.

The dataset is organized by split (train/, valid/, test/) with video clips and corresponding localization annotations in JSON.


πŸ“Œ Task

  • Task type: action_spotting (a.k.a. temporal action localization / event spotting)
  • Annotation granularity: clip-relative timestamps in milliseconds (position_ms)
  • Label format: single-label events (one label per event)

πŸ“ Main branch structure

Current structure on main:

main/
β”œβ”€β”€ annotations-localization-train.json
β”œβ”€β”€ annotations-localization-valid.json
β”œβ”€β”€ annotations-localization-test.json
β”œβ”€β”€ train/
β”‚   β”œβ”€β”€ <clip>.mp4
β”‚   └── ...
β”œβ”€β”€ valid/
β”‚   β”œβ”€β”€ <clip>.mp4
β”‚   └── ...
└── test/
    β”œβ”€β”€ <clip>.mp4
    └── ...
  • The three folders train/, valid/, test/ contain thousands of short video clips (.mp4).
  • The three JSON files contain the localization labels for the corresponding split.

🧾 Annotation format

Each annotation file follows a SoccerNet-like schema:

Top-level keys:

  • version: format version (e.g., "2.0")
  • task: "action_spotting"
  • dataset_name: dataset identifier
  • labels: list of valid event classes under a given head_name
  • data: list of items (each item corresponds to one clip)

data[] item fields

Each item contains:

  • id: stable item identifier
  • inputs: list containing a video descriptor
  • events: list of labeled events in that clip
  • metadata: optional extra info such as fps, width, height, etc.

Example (simplified):

{
  "id": "Tennis_some_clip_name",
  "inputs": [
    {
      "type": "video",
      "path": "test/some_clip_name.mp4",
      "fps": 25.0
    }
  ],
  "events": [
    {
      "head": "tennis_action",
      "label": "near_court_serve",
      "position_ms": "4240",
      "comment": "serve"
    }
  ]
}
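The schema above can be consumed with nothing more than the standard `json` module. A minimal sketch, using the simplified example item from this card (the file and field values are illustrative, not guaranteed contents of the real annotation files):

```python
import json

# One data[] item in the schema described above, embedded as a string
# so the sketch is self-contained (values mirror the simplified example).
sample = json.loads("""
{
  "id": "Tennis_some_clip_name",
  "inputs": [{"type": "video", "path": "test/some_clip_name.mp4", "fps": 25.0}],
  "events": [{"head": "tennis_action", "label": "near_court_serve",
              "position_ms": 4240, "comment": "serve"}]
}
""")

def events_of(item):
    """Return (clip_path, label, position_ms) tuples for one data[] item."""
    clip_path = item["inputs"][0]["path"]
    return [(clip_path, e["label"], e["position_ms"]) for e in item["events"]]

print(events_of(sample))
```

For a real split you would instead `json.load()` e.g. `annotations-localization-test.json` and loop `events_of(item)` over its `data` list.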

⏱️ Timestamp ↔ video position relationship (IMPORTANT)

For each event:

  • position_ms is clip-relative time in milliseconds

  • It is computed from the clip-relative frame index using:

    position_ms = round(frame / fps * 1000)

So:

  • position_ms = 0 corresponds to the first frame of the clip
  • position_ms = 4240 means the event happens around 4.240 seconds after the clip start

If you need the approximate frame index back:

  • frame β‰ˆ round(position_ms / 1000 * fps)
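The two formulas above are direct to implement; a small sketch (the 25 fps value is taken from the example clip):

```python
def frame_to_ms(frame: int, fps: float) -> int:
    # position_ms = round(frame / fps * 1000)
    return round(frame / fps * 1000)

def ms_to_frame(position_ms: int, fps: float) -> int:
    # frame ~= round(position_ms / 1000 * fps)  (approximate inverse)
    return round(position_ms / 1000 * fps)

# At 25 fps, frame 106 maps to 4240 ms and back again.
print(frame_to_ms(106, 25.0))   # 4240
print(ms_to_frame(4240, 25.0))  # 106
```

Note the round trip is only approximate in general: rounding to whole milliseconds can shift the recovered index by one frame at high frame rates.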

🏷️ Labels

Labels are stored under:

labels.<head_name>.labels

where <head_name> is typically tennis_action.
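Reading the class list therefore means following the `labels.<head_name>.labels` path. A sketch with a hypothetical in-memory `labels` section (only `near_court_serve` appears elsewhere on this card; the second class is an invented placeholder):

```python
# Illustrative stand-in for the top-level "labels" key of an annotation file.
labels_section = {
    "tennis_action": {
        "labels": ["near_court_serve", "far_court_serve"]  # placeholder subset
    }
}

head_name = "tennis_action"  # typical head_name per this card
classes = labels_section[head_name]["labels"]
print(classes)
```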


🧰 Notes

  • Paths in inputs[].path are relative paths pointing to the split folder:

    • train/<clip>.mp4
    • valid/<clip>.mp4
    • test/<clip>.mp4
  • The repository includes .gitattributes for Git/LFS handling of large files.


βœ… Quick sanity check

Pick one entry in annotations-localization-test.json:

  1. Open the clip video located at test/<clip>.mp4
  2. Jump to position_ms / 1000 seconds
  3. You should observe the corresponding tennis event near that timestamp

πŸ“š Data Source & Attribution

The tennis clips and raw annotations in this dataset are derived from the tennis data released in the official repository of the paper:

Spotting Temporally Precise, Fine-Grained Events in Video (ECCV 2022)
James Hong, Haotian Zhang, MichaΓ«l Gharbi, Matthew Fisher, Kayvon Fatahalian

Source repository (tennis data):
https://github.com/jhong93/spot/tree/main/data/tennis

If you use this dataset, please cite the original paper:

@inproceedings{precisespotting_eccv22,
    author={Hong, James and Zhang, Haotian and Gharbi, Micha\"{e}l and Fisher, Matthew and Fatahalian, Kayvon},
    title={Spotting Temporally Precise, Fine-Grained Events in Video},
    booktitle={ECCV},
    year={2022}
}