Improve dataset card: Add paper/code links, update size category, add task/tags (#1)
Opened by nielsr (HF Staff)
README.md CHANGED

@@ -1,16 +1,27 @@
 ---
 license: cc-by-4.0
 size_categories:
--
+- 10K<n<100K
+task_categories:
+- other
+tags:
+- cross-modal
+- knowledge-distillation
+- audio-visual
+- multimodal
+- vggsound
+- feature-extraction
 ---

 # VGGSound-50k Preprocessed Dataset

+[Paper](https://huggingface.co/papers/2507.07015) | [Code](https://github.com/gray1y/MST-Distill)
+
 This dataset contains preprocessed data from the VGGSound dataset, specifically the VGGSound-AVEL50k subset, prepared for cross-modal knowledge distillation research. The preprocessing is tailored to the MST-Distill (Mixture of Specialized Teachers for Cross-Modal Knowledge Distillation) method.

 This preprocessing work is based on the VGGSound-AVEL50k subset from **jasongief/CPSP: [2023 TPAMI] Contrastive Positive Sample Propagation along the Audio-Visual Event Line**.

-And related preprocessing works are described in our paper: https://
+The related preprocessing steps are described in our paper: [MST-Distill: Mixture of Specialized Teachers for Cross-Modal Knowledge Distillation](https://huggingface.co/papers/2507.07015)

 ---
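For maintainers who prefer to script this kind of card update rather than edit the YAML front matter by hand, the same metadata change can be sketched with the `DatasetCard` API from `huggingface_hub`. This is a minimal sketch, not part of this PR: the repo id below is a hypothetical placeholder, and it assumes a recent `huggingface_hub` plus a token with write access.

```python
# Minimal sketch: apply this PR's metadata changes via huggingface_hub.
# The repo id below is a hypothetical placeholder, not the real repository.
from huggingface_hub import DatasetCard

REPO_ID = "your-username/vggsound-50k-preprocessed"  # placeholder

# Load the current dataset card (README.md with its YAML front matter).
card = DatasetCard.load(REPO_ID)

# Same fields this PR updates in the front matter.
card.data.size_categories = ["10K<n<100K"]
card.data.task_categories = ["other"]
card.data.tags = [
    "cross-modal",
    "knowledge-distillation",
    "audio-visual",
    "multimodal",
    "vggsound",
    "feature-extraction",
]

# create_pr=True opens a pull request (like this one) instead of
# committing straight to main.
card.push_to_hub(
    REPO_ID,
    commit_message="Improve dataset card: update size category, add task/tags",
    create_pr=True,
)
```

The paper/code links added to the card body would still be edited in the markdown itself; `card.text` holds that part of the README if it needs to change in the same commit.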