The Finnish Wav2Vec2 Base has the same architecture and uses the same training objective as the English and multilingual models described in [this paper](https://arxiv.org/abs/2006.11477). It is pre-trained on 158k hours of unlabeled Finnish speech, including [KAVI radio and television archive materials](https://kavi.fi/en/radio-ja-televisioarkistointia-vuodesta-2008/), Lahjoita puhetta (Donate Speech), the Finnish Parliament, and Finnish VoxPopuli.
You can read more about the pre-trained model in [this paper](TODO). The training scripts are available on [GitHub](https://github.com/aalto-speech/large-scale-monolingual-speech-foundation-models).
## Intended uses & limitations