Update README.md
README.md
@@ -44,7 +44,7 @@ We preprocess the public data and NLLB-Seed, except:
 7. `indic_nlp` dataset because the total size is about ~11M pairs, which translates to about ~84 GB
 8. `til` dataset because the total size is about ~80.6M pairs, which translates to about ~615 GB
 
-Note:
+Note: A new dataset, which includes the datasets mentioned above, can be downloaded [here](https://huggingface.co/datasets/jasonrichdarmawan/nllb-primary-datasets-public-data-embedding)
 
 What are the use cases?
 1. Training a model with embeddings as input, for example a Sparse Autoencoder. This saves computation because we do not need to load the encoder during training. Also, we do not need to compute and cache the encoder's output on the fly.
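As a rough illustration of use case 1, here is a minimal sketch that streams the linked embedding dataset and trains a toy sparse autoencoder on it. The `train` split name, the `embedding` column name, the 1024-dimensional embedding size, and all hyperparameters are illustrative assumptions, not confirmed by this repository; adjust them to the actual dataset schema.

```python
# Minimal sketch of use case 1: train a sparse autoencoder (SAE) directly on
# precomputed embeddings, so the encoder is never loaded during training.
import torch
import torch.nn as nn
from datasets import load_dataset

# Stream the dataset so the hundreds of GB are not downloaded up front.
# The split name "train" and column name "embedding" are assumptions.
rows = load_dataset(
    "jasonrichdarmawan/nllb-primary-datasets-public-data-embedding",
    split="train",
    streaming=True,
)

def batches(rows, batch_size=256):
    # Group streamed rows into dense float tensors of shape (batch, d_model).
    buf = []
    for row in rows:
        buf.append(row["embedding"])
        if len(buf) == batch_size:
            yield torch.tensor(buf, dtype=torch.float32)
            buf = []

class SparseAutoencoder(nn.Module):
    # A plain one-hidden-layer SAE: ReLU latent code plus linear decoder.
    def __init__(self, d_model=1024, d_hidden=8192):  # d_model is assumed
        super().__init__()
        self.enc = nn.Linear(d_model, d_hidden)
        self.dec = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        z = torch.relu(self.enc(x))
        return self.dec(z), z

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
l1_coeff = 1e-3  # illustrative sparsity penalty weight

for x in batches(rows):
    recon, z = sae(x)
    # Reconstruction loss plus an L1 penalty that encourages sparse codes.
    loss = nn.functional.mse_loss(recon, x) + l1_coeff * z.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the embeddings are read as-is, no encoder forward pass happens anywhere in this loop, which is exactly the computation the note says is saved.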