ABX-accent
----------

The ABX-accent project is based on the preparation and evaluation of the Accented English Speech Recognition Challenge (AESRC) dataset [1], using fastABX [2] for evaluation. This repository provides all the item files you can use for evaluation.
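For context, ABX item files in the fastABX / ZeroSpeech tradition are whitespace-separated tables with one speech segment per row; the columns shown below are the conventional ones (file ID, time span, center phone, phonetic context, speaker) and are given as an illustration, so the exact layout in this repository's files may differ:

```
#file onset offset #phone prev-phone next-phone speaker
utt_0001 0.3125 0.5625 ae b p spk01
utt_0042 1.2500 1.5000 aa b p spk07
```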

What is ABX Evaluation?
-----------------------

The ABX metric evaluates whether a representation X of a speech unit (e.g., the word “bap”) is closer to a same-category example A (also “bap”) than to a different-category example B (e.g., “bop”). The ABX error rate is calculated by averaging the classification errors over all minimal phoneme trigrams in the corpus.

This benchmark focuses on the more challenging ABX across/within speaker task, where the X example is spoken by a different speaker than the ones in the pair (A, B), testing speaker-invariant phonetic discrimination.
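The decision rule described above can be sketched in a few lines. This is a toy illustration on pooled (single-vector) representations with cosine distance, not the repository's actual pipeline: fastABX operates on frame sequences, typically aligning them with dynamic time warping, and all names below are hypothetical.

```python
import numpy as np

def cosine_distance(u, v):
    """Cosine distance between two pooled representation vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def abx_error(a_set, b_set, x_set):
    """Fraction of (A, B, X) triples where X is NOT strictly closer to the
    same-category example A than to the different-category example B."""
    errors, total = 0, 0
    for a in a_set:
        for b in b_set:
            for x in x_set:
                total += 1
                if cosine_distance(x, a) >= cosine_distance(x, b):
                    errors += 1
    return errors / total

# Toy usage: X resembles the A category, so no classification errors occur.
a = [np.array([1.0, 0.0])]   # same category as X
b = [np.array([0.0, 1.0])]   # contrasting category
x = [np.array([0.9, 0.1])]
print(abx_error(a, b, x))    # 0.0
```

In the across/within speaker variant, the triples would additionally be constrained so that X comes from a different speaker than A and B, and the per-triple errors averaged over all minimal-pair contexts.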

About the Dataset
-----------------