# SynthVoice: This should be a paper Title

📑 [Paper](https://huggingface.co/papers/xxxx.xxxxx) | 🌐 [Project Page](https://synthvoice.github.io/) | 💾 [Released Resources](https://huggingface.co/collections/toolevalxm/synthvoice-67a978e28fd926b56a4f55a2) | 📦 [Repo](https://github.com/xmhtoolathlon/Annoy-DataSync)

This is the resource page for our SynthVoice collection on Hugging Face.

**Dataset**

| Dataset | Link |
|-|-|
| SynthVoice-Processed | [🤗](https://huggingface.co/datasets/toolevalxm/SynthVoice-Processed) |

Please also check the raw data: [toolevalxm/SynthVoice-Raw](https://huggingface.co/datasets/toolevalxm/SynthVoice-Raw).

**Models**

| Base Model / Training | SynthVoice | SynthVoice++ |
|-|-|-|
| Coqui TTS VITS | [🤗](https://huggingface.co/toolevalxm/synthvoice-vits) | [🤗](https://huggingface.co/toolevalxm/synthvoice-vits-pp) |

**Introduction**

We use the Coqui TTS framework to synthesize high-quality voice outputs from text transcripts. Synthesis is performed with the VITS model architecture, which has demonstrated strong quality in text-to-speech generation tasks. Our approach involves:

1. Processing raw LibriSpeech transcripts
2. Synthesizing voices with Coqui TTS (coqui-ai/TTS)
3. Post-processing and quality filtering

*Due to licensing requirements, we only release the processed subset containing synthesized outputs.*

**License**

This dataset is released under CC BY 4.0.
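The post-processing and quality-filtering step above could, for example, gate synthesized clips on duration and signal energy. The sketch below is a minimal stdlib-only illustration of such a filter; the function names, thresholds, and the tone generator standing in for a synthesized clip are all illustrative assumptions, not the released pipeline:

```python
import io
import math
import struct
import wave


def passes_quality_filter(wav_bytes, min_dur=1.0, max_dur=20.0, min_rms=100.0):
    """Hypothetical filter: reject clips that are too short, too long,
    or near-silent. Thresholds are illustrative, not from the paper."""
    with wave.open(io.BytesIO(wav_bytes)) as wf:
        n_frames = wf.getnframes()
        rate = wf.getframerate()
        frames = wf.readframes(n_frames)
    duration = n_frames / rate
    if not (min_dur <= duration <= max_dur):
        return False
    # 16-bit mono PCM assumed; compute RMS amplitude over all samples.
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    rms = math.sqrt(sum(s * s for s in samples) / max(len(samples), 1))
    return rms >= min_rms


def make_tone(seconds=2.0, rate=16000, amp=8000):
    """Generate a 440 Hz sine WAV as a stand-in for a synthesized clip."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)
        wf.setframerate(rate)
        wf.writeframes(b"".join(
            struct.pack("<h", int(amp * math.sin(2 * math.pi * 440 * i / rate)))
            for i in range(int(seconds * rate))))
    return buf.getvalue()


print(passes_quality_filter(make_tone()))              # 2 s tone passes
print(passes_quality_filter(make_tone(seconds=0.2)))   # too short, rejected
```

In a real pipeline the filter would run over the Coqui TTS outputs before they enter SynthVoice-Processed; clips failing either gate would simply be dropped.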