Update README.md
tags:
- benchmark
language:
- en
pretty_name: TAG
arxiv: 2510.18822
configs:
- config_name: default
huggingface-cli download MCG-NJU/Tracking-Any-Granularity --repo-type dataset --
## Dataset Summary

**T**racking-**A**ny-**G**ranularity (TAG) is a comprehensive dataset for training our unified tracking model of the same name, with annotations across three granularities: segmentation masks, bounding boxes, and key points.

<table align="center">
<tbody>
<tr>
<td><img width="220" src="assets/data/00025.gif"/></td>
<td><img width="220" src="assets/data/00076.gif"/></td>
<td><img width="220" src="assets/data/00045.gif"/></td>
</tr>
</tbody>
</table>

<table align="center">
<tbody>
<tr>
<td><img width="220" src="assets/data/00102.gif"/></td>
<td><img width="220" src="assets/data/00103.gif"/></td>
<td><img width="220" src="assets/data/00152.gif"/></td>
</tr>
</tbody>
</table>

<table align="center">
<tbody>
<tr>
<td><img width="220" src="assets/data/00227.gif"/></td>
<td><img width="220" src="assets/data/00117.gif"/></td>
<td><img width="220" src="assets/data/00312.gif"/></td>
</tr>
</tbody>
</table>
## Dataset Description

Our dataset includes **a wide range of video sources**, demonstrating strong diversity and serving as a solid benchmark for evaluating tracking performance. Each video sequence is annotated with **18 attributes representing different tracking challenges**, which can appear simultaneously in the same video. Common challenges include motion blur, deformation, and partial occlusion, reflecting the dataset’s high difficulty. Most videos contain multiple attributes, indicating the dataset’s coverage of complex and diverse tracking scenarios.


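Since the 18 challenge attributes are per-sequence flags, a consumer of the annotations can filter sequences by the difficulties they contain. Below is a minimal sketch assuming a simple `{sequence_id: {attribute: bool}}` layout; the attribute names and this layout are illustrative only, not the dataset's actual schema.

```python
# Hypothetical attribute-flag layout; names and structure are assumptions,
# not the dataset's real annotation schema.
ATTRIBUTES = ["motion_blur", "deformation", "partial_occlusion"]  # subset of the 18

def sequences_with(annotations, *required):
    """Return IDs of sequences whose attribute flags include all `required`."""
    return [seq_id for seq_id, attrs in annotations.items()
            if all(attrs.get(a, False) for a in required)]

annotations = {
    "00025": {"motion_blur": True, "deformation": False, "partial_occlusion": True},
    "00076": {"motion_blur": True, "deformation": True, "partial_occlusion": True},
    "00045": {"motion_blur": False, "deformation": True, "partial_occlusion": False},
}

print(sequences_with(annotations, "motion_blur", "partial_occlusion"))
# ['00025', '00076']
```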
## Benchmark Results

We evaluated many representative trackers on the valid and test splits of our dataset:

*video object segmentation*

| Model | 𝒥&ℱ (valid) | 𝒥 (valid) | ℱ (valid) | 𝒥&ℱ (test) | 𝒥 (test) | ℱ (test) |
|-------|-------------|-----------|-----------|------------|----------|----------|
| STCN | 70.4 | 65.9 | 75.0 | 76.2 | 72.2 | 80.2 |
| … | … | … | … | … | … | … |
| JointFormer | 76.6 | 72.8 | 80.5 | 79.1 | 75.5 | 82.7 |
| SAM2++ | 87.4 | 84.2 | 90.7 | 87.9 | 84.9 | 90.9 |
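In the segmentation table, 𝒥 is region similarity (the mean intersection-over-union between predicted and ground-truth masks), ℱ is contour accuracy, and 𝒥&ℱ is their average, following the DAVIS convention. A dependency-free sketch of 𝒥, with ℱ omitted for brevity:

```python
# Minimal sketch of the region-similarity metric J (mean mask IoU).
# Binary masks are nested 0/1 lists to keep the example self-contained.

def iou(pred, gt):
    """Intersection-over-union of two binary masks of equal shape."""
    inter = sum(p & g for rp, rg in zip(pred, gt) for p, g in zip(rp, rg))
    union = sum(p | g for rp, rg in zip(pred, gt) for p, g in zip(rp, rg))
    return inter / union if union else 1.0  # two empty masks agree perfectly

def j_score(pred_masks, gt_masks):
    """J = IoU averaged over all annotated frames, in percent."""
    ious = [iou(p, g) for p, g in zip(pred_masks, gt_masks)]
    return 100.0 * sum(ious) / len(ious)

gt   = [[[0, 1, 1], [0, 1, 1]]]   # one 2x3 frame with 4 foreground pixels
pred = [[[0, 1, 1], [0, 0, 1]]]   # misses one of the 4 pixels
print(j_score(pred, gt))           # 75.0 (intersection 3 / union 4)
```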

*single object tracking*

| Model | AUC (valid) | P_Norm (valid) | P (valid) | AUC (test) | P_Norm (test) | P (test) |
|-------|-------------|----------------|-----------|------------|---------------|----------|
| OSTrack | 74.8 | 84.4 | 72.7 | 69.7 | 78.8 | 69.9 |
| … | … | … | … | … | … | … |
| LoRAT | 75.1 | 84.8 | 74.4 | 70.5 | 79.7 | 68.7 |
| SAM2++ | 80.7 | 89.7 | 77.8 | 78.0 | 85.7 | 81.5 |
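For the single object tracking table, AUC is the area under the success curve over overlap thresholds, P (precision) is the fraction of frames whose predicted box center falls within a pixel threshold (conventionally 20 px) of the ground-truth center, and P_Norm normalizes that distance by target size. A sketch of P under those conventional definitions (as used by benchmarks such as LaSOT; the paper may differ in details), assuming (x, y, w, h) boxes:

```python
# Sketch of the precision metric P: fraction of frames (in percent) whose
# predicted box center is within `threshold` pixels of the ground truth.
import math

def center(box):
    """Center of an (x, y, w, h) box."""
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def precision(pred_boxes, gt_boxes, threshold=20.0):
    hits = 0
    for p, g in zip(pred_boxes, gt_boxes):
        (px, py), (gx, gy) = center(p), center(g)
        if math.hypot(px - gx, py - gy) <= threshold:
            hits += 1
    return 100.0 * hits / len(pred_boxes)

gt   = [(10, 10, 40, 40), (50, 50, 40, 40)]
pred = [(12, 14, 40, 40), (90, 90, 40, 40)]   # first close, second far off
print(precision(pred, gt))                     # 50.0
```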

*point tracking*

| Model | Acc (valid) | Acc (test) |
|-------|-------------|------------|
| pips | 19.0 | 19.8 |
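This README does not spell out how Acc is computed for point tracking. A common convention (for example, the δ-average metric from TAP-Vid) is the fraction of predicted points within a pixel threshold of the ground truth, averaged over thresholds of 1, 2, 4, 8, and 16 px; the sketch below assumes that definition, which may not match the paper's exact protocol:

```python
# Assumed point tracking accuracy: mean fraction of points (in percent)
# within each pixel threshold of the ground truth.
import math

def point_accuracy(pred, gt, thresholds=(1, 2, 4, 8, 16)):
    fracs = []
    for t in thresholds:
        hits = sum(math.hypot(px - gx, py - gy) <= t
                   for (px, py), (gx, gy) in zip(pred, gt))
        fracs.append(hits / len(pred))
    return 100.0 * sum(fracs) / len(fracs)

gt   = [(10.0, 10.0), (20.0, 20.0)]
pred = [(10.5, 10.0), (29.0, 20.0)]        # 0.5 px off and 9 px off
print(round(point_accuracy(pred, gt), 1))   # 60.0
```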