DRAL (Dialogs Re-enacted Across Languages) is a bilingual speech corpus of parallel utterances: fragments of recorded conversations, re-enacted by the same speakers in a different language. It is intended as a resource for research, especially for training and evaluating speech-to-speech translation models and systems. We dedicate this corpus to the public domain; there is no copyright (CC0).
DRAL is described in a new technical report: [Dialogs Re-enacted Across Languages, Version 2](https://arxiv.org/abs/2211.11584), Nigel G. Ward, Jonathan E. Avila, Emilia Rivas, Divette Marco.
Some initial analyses of this data are described in our [Interspeech 2023 paper](https://arxiv.org/abs/2307.04123).
The releases include 2893 short matched Spanish-English pairs (> 2 hours) taken from 104 conversations with 70 unique participants. There are also some illustrative, lower-quality, pairs in Bengali-English, Japanese-English, and French-English. All are packaged together with the full original conversations and full re-enactment recording sessions.
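As a minimal sketch of working with matched pairs like these, suppose each fragment carries an ID of the form `<LANG>_<conversation>_<fragment>` (this naming scheme is an assumption for illustration only; consult the metadata files in the release for the actual layout). English and Spanish fragments sharing a conversation and fragment index could then be paired like so:

```python
# Sketch: grouping parallel utterance fragment IDs into (EN, ES) pairs.
# The "<LANG>_<conversation>_<fragment>" ID format is assumed for
# illustration; check the release metadata for the real scheme.

def pair_fragments(ids):
    """Return (EN, ES) ID pairs that share a conversation and fragment index."""
    by_key = {}
    for frag_id in ids:
        lang, conv, idx = frag_id.split("_")
        by_key.setdefault((conv, idx), {})[lang] = frag_id
    # Keep only keys for which both languages are present.
    return [(v["EN"], v["ES"]) for v in by_key.values()
            if {"EN", "ES"} <= v.keys()]

ids = ["EN_007_1", "ES_007_1", "EN_007_2", "ES_104_3", "EN_104_3"]
print(pair_fragments(ids))  # [('EN_007_1', 'ES_007_1'), ('EN_104_3', 'ES_104_3')]
```

Unmatched fragments (here the lone `EN_007_2`) are simply dropped, which mirrors the idea that only matched pairs are usable for parallel training data.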
## Links
- [DRAL home page](https://www.cs.utep.edu/nigel/dral/)
- [DRAL GitHub repo](https://github.com/joneavila/DRAL)
- [DRAL technical report](https://arxiv.org/abs/2211.11584)
- [Interspeech 2023 paper](https://arxiv.org/abs/2307.04123)