---
license: mit
datasets:
- neuphonic/emilia-yodas-english-neucodec
language:
- en
pipeline_tag: text-to-speech
---

# Echolancer Stage 2 Base

This is a TTS model pretrained on the pre-tokenized Emilia dataset. Since there is no speaker conditioning, the speaker is random at inference. The model has 550M parameters and was trained from scratch on a single AMD Instinct MI300X for ~4 days using the ROCm PyTorch Training v25.7 container. The training objective was standard next-token prediction on concatenated text-audio tokens.

# Code

For more information, including a Colab notebook, see [the repository](https://github.com/ZDisket/Echolancer).
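As a rough illustration of the training objective, the sketch below shows how a text token sequence and an audio token sequence can be concatenated into one stream and turned into shifted input/target pairs for next-token prediction. The token IDs, the separator token, and the helper function are hypothetical, not the actual Echolancer vocabulary or code.

```python
def make_lm_example(text_tokens, audio_tokens, sep_token=0):
    """Concatenate text and audio tokens into one sequence and build
    (input, target) pairs for standard next-token prediction.

    sep_token is an illustrative separator between the two modalities.
    """
    seq = text_tokens + [sep_token] + audio_tokens
    inputs = seq[:-1]   # model sees tokens 0 .. n-2
    targets = seq[1:]   # and learns to predict tokens 1 .. n-1
    return inputs, targets

inputs, targets = make_lm_example([5, 6, 7], [100, 101])
print(inputs)   # [5, 6, 7, 0, 100]
print(targets)  # [6, 7, 0, 100, 101]
```

In practice the loss would be cross-entropy over the target positions; during inference the model continues the sequence past the text tokens, generating audio codec tokens autoregressively.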