Gray1y committed 7c304c3 · verified · Parent(s): aa5c666

Update README.md

Files changed (1): README.md (+20, -0)

README.md CHANGED
 
@@ -8,12 +8,18 @@ paper_url: https://arxiv.org/abs/2507.07015
 
 This dataset contains preprocessed data from the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset, specifically processed for cross-modal knowledge distillation research.
 
+This preprocessing work is described in our paper: https://arxiv.org/abs/2507.07015
+
+---
+
 ## Original Dataset
 
 The original RAVDESS dataset is available at: https://zenodo.org/records/1188976
 
 The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) contains 7356 files (total size: 24.8 GB). The dataset features 24 professional actors (12 female, 12 male) vocalizing two lexically-matched statements in a neutral North American accent.
 
+---
+
 ## Dataset Information
 
 - **Actors**: 24 professional actors (12 female, 12 male)
 
@@ -22,6 +28,8 @@ The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) contain
 - **Statements**: Two lexically-matched statements in neutral North American accent
 - **Content**: Speech only (song content is not included in this preprocessed dataset)
 
+---
+
 ## Preprocessing Details
 
 We have performed normalization preprocessing on the speech portion of the RAVDESS dataset for use in our cross-modal knowledge distillation work. The preprocessing focuses exclusively on the speech content (vocal channel 01) and does not include song data (vocal channel 02). The preprocessing consists of three main steps:
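The speech-vs-song filtering described above can be sketched from the RAVDESS filename convention (documented with the original dataset on Zenodo): seven hyphen-separated numeric fields, where the second field is the vocal channel (01 = speech, 02 = song) and the third is the emotion code (01–08). The function name and the emotion-code-to-label mapping (code - 1) below are illustrative assumptions, not part of this repository:

```python
# Hedged sketch: selecting speech files and deriving a 0-7 emotion label
# from a RAVDESS filename. The mapping label = emotion_code - 1 is an
# assumption based on the filename convention, not code from this repo.

EMOTIONS = ["neutral", "calm", "happy", "sad",
            "angry", "fearful", "disgust", "surprised"]

def parse_ravdess_filename(name: str):
    """Return (is_speech, label) parsed from a RAVDESS file name."""
    stem = name.rsplit(".", 1)[0]
    fields = stem.split("-")          # seven numeric fields
    vocal_channel = int(fields[1])    # 01 = speech, 02 = song
    emotion_code = int(fields[2])     # 01-08
    return vocal_channel == 1, emotion_code - 1

is_speech, label = parse_ravdess_filename("03-01-06-01-02-01-12.wav")
print(is_speech, EMOTIONS[label])   # emotion code 06 -> "fearful" under this mapping
```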
 
@@ -51,6 +59,8 @@ We have performed normalization preprocessing on the speech portion of the RAVDE
 - `audio_data.npy`: MFCC features [N, 15, time_steps]
 - `label_data.npy`: Emotion labels [N]
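For arrays of this size it can help to memory-map the `.npy` files rather than read them fully into RAM; `numpy.load` supports this via `mmap_mode`. The file name and shape below are stand-ins generated only for this self-contained demonstration, not the actual dataset files:

```python
import numpy as np

# Stand-in array with the documented layout [N, 15, time_steps];
# written to disk here only so the example runs on its own.
demo = np.random.rand(8, 15, 100).astype(np.float32)
np.save("audio_data_demo.npy", demo)

# mmap_mode="r" returns a lazy, read-only view; slices are read on demand.
audio = np.load("audio_data_demo.npy", mmap_mode="r")
print(audio.shape)             # (8, 15, 100)
print(float(audio[0].mean()))  # reading one sample touches only its pages
```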
 
+---
+
 ## Data Structure
 
 The preprocessed dataset contains:
 
@@ -58,6 +68,8 @@ The preprocessed dataset contains:
 - **Audio data**: MFCC features extracted from normalized audio
 - **Labels**: Emotion categories (0: neutral, 1: calm, 2: happy, 3: sad, 4: angry, 5: fearful, 6: disgust, 7: surprised)
 
+---
+
 ## Usage
 
 ```python
 
@@ -73,12 +85,16 @@ print(f"Audio data shape: {audio_data.shape}")
 print(f"Label data shape: {label_data.shape}")
 ```
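Once `audio_data` and `label_data` are loaded as in the Usage snippet, two common first steps are checking the class balance and collapsing the time axis into a fixed-length feature vector. The arrays below are random stand-ins with the documented shapes so the sketch runs on its own:

```python
import numpy as np

# Random stand-ins with the documented shapes: [N, 15, time_steps] and [N].
rng = np.random.default_rng(0)
audio_data = rng.normal(size=(32, 15, 120))
label_data = rng.integers(0, 8, size=32)

# Class balance over the 8 emotion categories (labels 0-7).
counts = np.bincount(label_data, minlength=8)
print(counts.sum())            # one count per sample

# Fixed-length representation: mean MFCC vector over the time axis.
features = audio_data.mean(axis=2)
print(features.shape)          # (32, 15)
```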
 
+---
+
 ## License
 
 The RAVDESS dataset is released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).
 
 This preprocessed dataset maintains the same license as the original RAVDESS dataset.
 
+---
+
 ## Citation
 
 If you use this preprocessed dataset in your research, please cite both the original RAVDESS paper and acknowledge the preprocessing:
 
@@ -110,10 +126,14 @@ If you use this preprocessed dataset in your research, please cite both the orig
 }
 ```
 
+---
+
 ## Acknowledgments
 
 We thank the original authors of the RAVDESS dataset for making this valuable resource available to the research community. The original dataset was created by Steven R. Livingstone and Frank A. Russo at Ryerson University.
 
+---
+
 ## Contact
 
 For questions about the original RAVDESS dataset, please contact the original authors: ravdess@gmail.com