Supradeepdan committed 06d5b0a · verified · parent: f0f5f5e

Update README.md

metrics:
- accuracy
library_name: speechbrain
---
<iframe src="https://ghbtns.com/github-btn.html?user=SupradeepDanturti&repo=ConvAIProject&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>

# Speaker Counting with XVector and ECAPA-TDNN models

This repository provides tools for detecting and counting the number of speakers in an audio recording, using models such as XVector and ECAPA-TDNN trained on datasets such as LibriSpeech.

The pre-trained system processes an audio input, detects whether speakers are present, counts them, and reports the result in the following format:

```
0.00-2.50 has 1 speaker
2.50-4.20 has 2 speakers
4.20-5.30 has no speakers
```

The system expects input recordings sampled at 16 kHz. If your signal has a different sample rate, resample it with torchaudio before using the interface.


# Model Performance

Error rates for each number of speakers, per model, are listed below:

## XVector Model Error Rates

| Number of Speakers | Error Rate |
|--------------------|------------|
| No speakers        | 0.00e+00   |
| 1 speaker          | 4.06e-01   |
| 2 speakers         | 3.20e-02   |
| 3 speakers         | 2.15e-01   |
| 4 speakers         | 4.67e-01   |
| **Overall**        | **2.29e-01** |

## ECAPA-TDNN Model Error Rates

| Number of Speakers | Error Rate |
|--------------------|------------|
| No speakers        | 0.00e+00   |
| 1 speaker          | 4.12e-01   |
| 2 speakers         | 4.10e-01   |
| 3 speakers         | 3.26e-01   |
| 4 speakers         | 5.23e-01   |
| **Overall**        | **2.40e-01** |

## Self-supervised MLP Model Error Rates

| Number of Speakers | Error Rate |
|--------------------|------------|
| No speakers        | 0.00e+00   |
| 1 speaker          | 8.67e-03   |
| 2 speakers         | 1.14e-01   |
| 3 speakers         | 4.08e-01   |
| 4 speakers         | 4.49e-01   |
| **Overall**        | **2.00e-01** |

## Self-supervised XVector Model Error Rates

| Number of Speakers | Error Rate |
|--------------------|------------|
| No speakers        | 0.00e+00   |
| 1 speaker          | 1.64e-02   |
| 2 speakers         | 1.39e-01   |
| 3 speakers         | 3.34e-01   |
| 4 speakers         | 5.41e-01   |
| **Overall**        | **2.10e-01** |

These error rates show where each model excels and where it needs further improvement; in particular, they reveal that accuracy degrades in scenarios with higher numbers of simultaneous speakers.

### Limitations
These models were evaluated under controlled conditions; their performance may vary on other datasets and in real-world scenarios.


## System Overview
This project provides interfaces for processing audio files, detecting speech, and counting speakers. It combines pre-trained models with custom scripts that refine and aggregate the predictions.

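The aggregation step can be illustrated with a small sketch. The `aggregate` helper below is hypothetical (not part of the project code) and assumes fixed-length frame predictions; it merges consecutive frames with the same predicted speaker count into the contiguous segments shown in the report format above:

```python
def aggregate(frame_counts, frame_len=0.25):
    """Merge consecutive frames with equal predicted speaker counts
    into (start_seconds, end_seconds, count) segments."""
    segments = []
    start = 0
    for i in range(1, len(frame_counts) + 1):
        # Close the current segment at the end of the list or when the count changes.
        if i == len(frame_counts) or frame_counts[i] != frame_counts[start]:
            segments.append((start * frame_len, i * frame_len, frame_counts[start]))
            start = i
    return segments

print(aggregate([1, 1, 2, 2, 2, 0]))
# [(0.0, 0.5, 1), (0.5, 1.25, 2), (1.25, 1.5, 0)]
```

The frame length of 0.25 s is an arbitrary illustration value, not the project's actual hop size.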
### Installation
```
pip install speechbrain
```
See [SpeechBrain](https://speechbrain.github.io/) for tutorials and more information.

## Using the Speaker Counter Interface
### For XVector & ECAPA-TDNN

```python
from interface.SpeakerCounter import SpeakerCounter

wav_path = "path/to/your/audio.wav"
model_path = "path/to/your/model"
save_dir = "path/to/save/results"

# Initialize the SpeakerCounter from pretrained hyperparameters
speaker_counter = SpeakerCounter.from_hparams(source=model_path, savedir=save_dir)

# Run inference on the audio file
speaker_counter.classify_file(wav_path)
```
108
+
109
+ ### For a Self-supervised model:
110
+ '''
111
+ from interface.SpeakerCounterSelfsupervisedMLP import SpeakerCounter
112
+
113
+ wav_path = "path/to/your/audio.wav"
114
+ model_path = "path/to/your/selfsupervised_model"
115
+ save_dir = "path/to/save/results"
116
+
117
+ # Initialize the SpeakerCounter
118
+ audio_classifier = SpeakerCounter.from_hparams(source=model_path, savedir=save_dir)
119
+
120
+ # Run inference
121
+ audio_classifier.classify_file(wav_path)
122
+ '''
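The interface reports segments in the textual format shown at the top of this card. A minimal sketch for turning such a report into `(start, end, count)` tuples; the `parse_segments` helper is illustrative, not part of the interface, and assumes the exact line format shown above:

```python
import re

# Matches lines like "0.00-2.50 has 1 speaker" or "4.20-5.30 has no speakers".
SEGMENT_RE = re.compile(r"(\d+\.\d+)-(\d+\.\d+) has (no|\d+) speakers?")

def parse_segments(report: str):
    """Convert the textual report into a list of (start, end, count) tuples."""
    segments = []
    for match in SEGMENT_RE.finditer(report):
        start, end, count = match.groups()
        n = 0 if count == "no" else int(count)
        segments.append((float(start), float(end), n))
    return segments

report = """0.00-2.50 has 1 speaker
2.50-4.20 has 2 speakers
4.20-5.30 has no speakers"""
print(parse_segments(report))
# [(0.0, 2.5, 1), (2.5, 4.2, 2), (4.2, 5.3, 0)]
```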
123
+
124
+
125
+ ## **Setup and Training Instructions**
126
+
127
+ To reproduce the results of this project, follow these steps.
128
+
129
+ ```
130
+ !git clone https://github.com/SupradeepDanturti/ConvAIProject
131
+ %cd ConvAIProject
132
+ ```
133
+ Download the project code from the GitHub repository and navigate into the project directory.
134
+
135
+
136
+ ```
137
+ !python prepare_dataset/download_required_data.py --output_folder <destination_folder_path>
138
+ ```
139
+ Download the necessary datasets (LibriSpeech etc.), specifying the desired destination folder.
140
+
141
+ ```
142
+ !python prepare_dataset/create_custom_dataset.py prepare_dataset/dataset.yaml
143
+ ```
144
+ Create custom dataset based on set parameters as shown in the sample below
145
+

Sample of `dataset.yaml`:
```yaml
n_sessions:
  train: 1000 # Creates 1000 sessions per class
  dev: 200 # Creates 200 sessions per class
  eval: 200 # Creates 200 sessions per class
n_speakers: 4 # Max number of speakers; total classes will be 5 (0-4 speakers)
max_length: 120 # Max length in seconds of each session/utterance
```

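Since `n_sessions` is specified per class and `n_speakers: 4` yields classes for 0 through 4 speakers, the split sizes multiply out as in this small sketch (the config is inlined here rather than loaded from the YAML file):

```python
# Inlined copy of the dataset.yaml settings shown above.
config = {
    "n_sessions": {"train": 1000, "dev": 200, "eval": 200},
    "n_speakers": 4,
}

# Classes cover 0..n_speakers speakers, so there are n_speakers + 1 of them.
n_classes = config["n_speakers"] + 1

# Total sessions per split = sessions per class * number of classes.
totals = {split: n * n_classes for split, n in config["n_sessions"].items()}
print(totals)  # {'train': 5000, 'dev': 1000, 'eval': 1000}
```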
To train the XVector model, run:
```
%cd xvector
!python train_xvector_augmentation.py hparams_xvector_augmentation.yaml
```

To train the ECAPA-TDNN model, run:
```
%cd ecapa_tdnn
!python train_ecapa_tdnn.py hparams_ecapa_tdnn_augmentation.yaml
```

To train the Self-supervised MLP model, run:
```
%cd selfsupervised
!python selfsupervised_mlp.py hparams_selfsupervised_mlp.yaml
```

To train the Self-supervised XVector model, run:
```
%cd selfsupervised
!python selfsupervised_xvector.py hparams_selfsupervised_xvector.yaml
```

### This project was developed entirely using [SpeechBrain](https://speechbrain.github.io/)

## Reference
```bibtex
@misc{speechbrain,
  title={{SpeechBrain}: A General-Purpose Speech Toolkit},
  author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
  year={2021},
  eprint={2106.04624},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  note={arXiv:2106.04624}
}
```