jasonrichdarmawan committed
Commit c39c811 · verified · 1 Parent(s): 4b07feb

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -44,7 +44,7 @@ We preprocess the public data and NLLB-Seed, except:
  7. `indic_nlp` dataset because the total size is about ~11M pairs, which translates to about ~84 GB
  8. `til` dataset because the total size is about ~80.6M pairs, which translates to about ~615 GB
 
- Note: We will update the dataset this week, which includes preprocessing the `aau`, `hornmt`, `mburisano`, `tico`, `umsuka`, `xhosa_navy`, and `indic_nlp` datasets. However, because I lack expertise in updating an HF dataset, and because the combined size would be about ~62 GB (old dataset) + ~85 GB (new dataset), I will create a new dataset instead.
+ Note: The new dataset, which includes the datasets mentioned above, can be downloaded [here](https://huggingface.co/datasets/jasonrichdarmawan/nllb-primary-datasets-public-data-embedding)
 
  What are the use cases?
  1. Training a model with embeddings as input. For example, training a Sparse Autoencoder. This saves computation because we do not need to load the encoder during training. Also, we do not need to cache the encoder's output on the fly
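
As a rough sketch of use case 1, the snippet below streams the linked embedding dataset and trains a small Sparse Autoencoder directly on the precomputed vectors, so the NLLB encoder never has to be loaded. The column name `embedding`, the `train` split, and the embedding width of 1024 are assumptions (they are not confirmed by the dataset card), and the SAE itself is a generic example rather than the repository's training code.

```python
# Sketch: train a Sparse Autoencoder on precomputed embeddings.
# Assumptions: the dataset exposes an "embedding" column of fixed-size float
# vectors in a "train" split, and the vectors are 1024-dimensional.
import torch
import torch.nn as nn
from datasets import load_dataset

DATASET_ID = "jasonrichdarmawan/nllb-primary-datasets-public-data-embedding"


class SparseAutoencoder(nn.Module):
    """Tiny SAE: overcomplete ReLU bottleneck trained with an L1 sparsity penalty."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        z = torch.relu(self.encoder(x))
        return self.decoder(z), z


def train(d_model: int = 1024, d_hidden: int = 8192,
          batch_size: int = 256, l1_coeff: float = 1e-3):
    # Streaming avoids downloading the full dataset before training starts.
    ds = load_dataset(DATASET_ID, split="train", streaming=True)  # assumed split name
    sae = SparseAutoencoder(d_model, d_hidden)
    opt = torch.optim.Adam(sae.parameters(), lr=1e-4)

    batch = []
    for row in ds:
        batch.append(row["embedding"])  # assumed column name
        if len(batch) < batch_size:
            continue
        x = torch.tensor(batch, dtype=torch.float32)
        batch.clear()

        recon, z = sae(x)
        # Reconstruction loss plus L1 penalty on the hidden activations.
        loss = nn.functional.mse_loss(recon, x) + l1_coeff * z.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()


if __name__ == "__main__":
    train()
```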