---
license: cc-by-nc-sa-4.0
task_categories:
  - audio-to-audio
language:
  - en
size_categories:
  - 100K<n<1M
---

A dataset of about 500 podcasts chopped into 60-second segments and tokenized into discrete tokens, intended for autoregressive training. A variant with 30-second segments is also available. In total the dataset contains 320 million (0.32B) tokens, hopefully enough to train a medium-sized model. A loading sketch follows below.
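
For reference, here is a minimal sketch of loading the dataset with the Hugging Face `datasets` library. The repo ID and column layout below are assumptions, not confirmed by this card; check the repository's actual file layout before relying on them.

```python
# Minimal loading sketch, assuming a standard Hugging Face dataset layout.
from datasets import load_dataset

# Hypothetical repo ID and split name -- replace with the actual values.
ds = load_dataset("Dampish/podcast-tokens", split="train")

# Each row is assumed to hold one 60-second segment as a sequence of
# discrete audio tokens, usable as targets for autoregressive training.
example = ds[0]
print(example.keys())
```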

By using this dataset, you agree that:

- You may not use it commercially
- You must credit me
- You must use this same license (CC BY-NC-SA 4.0) for derivative works

If you want to work out a different arrangement, please contact me.