Update README.md

README.md (CHANGED)

<div align="center">

<h1>GPT-SoVITS-WebUI</h1>
A Powerful Few-shot Voice Conversion and Text-to-Speech WebUI.<br><br>

[](https://github.com/RVC-Boss/GPT-SoVITS)

<a href="https://trendshift.io/repositories/7033" target="_blank"><img src="https://trendshift.io/api/badge/repositories/7033" alt="RVC-Boss%2FGPT-SoVITS | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>

<!-- img src="https://counter.seku.su/cmoe?name=gptsovits&theme=r34" /><br> -->

[](https://www.python.org)
[](https://github.com/RVC-Boss/gpt-sovits/releases)

[](https://colab.research.google.com/github/RVC-Boss/GPT-SoVITS/blob/main/Colab-WebUI.ipynb)
[](https://lj1995-gpt-sovits-proplus.hf.space/)
[](https://hub.docker.com/r/xxxxrt666/gpt-sovits)

[](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e)
[](https://rentry.co/GPT-SoVITS-guide#/)
[](https://github.com/RVC-Boss/GPT-SoVITS/blob/main/docs/en/Changelog_EN.md)
[](https://github.com/RVC-Boss/GPT-SoVITS/blob/main/LICENSE)

**English** | [**中文简体**](./docs/cn/README.md) | [**日本語**](./docs/ja/README.md) | [**한국어**](./docs/ko/README.md) | [**Türkçe**](./docs/tr/README.md)

</div>

---

## Features:

1. **Zero-shot TTS:** Input a 5-second vocal sample and experience instant text-to-speech conversion.

2. **Few-shot TTS:** Fine-tune the model with just 1 minute of training data for improved voice similarity and realism.

3. **Cross-lingual Support:** Inference in languages different from the training dataset; currently supports English, Japanese, Korean, Cantonese, and Chinese.

4. **WebUI Tools:** Integrated tools include voice accompaniment separation, automatic training set segmentation, Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models.

**Check out our [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw) here!**

Unseen speakers few-shot fine-tuning demo:

https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb

**RTF (inference speed) of GPT-SoVITS v2 ProPlus:**

0.028 on an RTX 4060 Ti, 0.014 on an RTX 4090 (roughly 1400 words, about 4 minutes of audio, synthesized in 3.36 s), and 0.526 on an Apple M4 CPU. You can try our [Hugging Face demo](https://lj1995-gpt-sovits-proplus.hf.space/) (running on half an H200) to experience high-speed inference.

Please don't unfairly criticize GPT-SoVITS for slow inference speed, thanks!

**User guide: [简体中文](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e) | [English](https://rentry.co/GPT-SoVITS-guide#/)**

## Installation

For users in China, you can [click here](https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official) to use AutoDL Cloud Docker to experience the full functionality online.

### Tested Environments

| Python Version | PyTorch Version  | Device        |
| -------------- | ---------------- | ------------- |
| Python 3.10    | PyTorch 2.5.1    | CUDA 12.4     |
| Python 3.11    | PyTorch 2.5.1    | CUDA 12.4     |
| Python 3.11    | PyTorch 2.7.0    | CUDA 12.8     |
| Python 3.9     | PyTorch 2.8.0dev | CUDA 12.8     |
| Python 3.9     | PyTorch 2.5.1    | Apple silicon |
| Python 3.11    | PyTorch 2.7.0    | Apple silicon |
| Python 3.9     | PyTorch 2.2.2    | CPU           |

### Windows

If you are a Windows user (tested with Windows 10 and above), you can [download the integrated package](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-v3lora-20250228.7z?download=true) and double-click _go-webui.bat_ to start GPT-SoVITS-WebUI.

**Users in China can [download the package here](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e/dkxgpiy9zb96hob4#KTvnO).**

Alternatively, install the program by running the following commands:

```pwsh
conda create -n GPTSoVits python=3.10
conda activate GPTSoVits
pwsh -F install.ps1 --Device <CU126|CU128|CPU> --Source <HF|HF-Mirror|ModelScope> [--DownloadUVR5]
```

### Linux

```bash
conda create -n GPTSoVits python=3.10
conda activate GPTSoVits
bash install.sh --device <CU126|CU128|ROCM|CPU> --source <HF|HF-Mirror|ModelScope> [--download-uvr5]
```
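
For example, a CUDA 12.8 machine pulling models from Hugging Face, including the optional UVR5 weights, would run (flag values taken from the template above):

```bash
# Concrete invocation of the installer shown above
bash install.sh --device CU128 --source HF --download-uvr5
```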

### macOS

**Note: Models trained with GPUs on Macs are of significantly lower quality than those trained on other devices, so training currently uses the CPU instead.**

Install the program by running the following commands:

```bash
conda create -n GPTSoVits python=3.10
conda activate GPTSoVits
bash install.sh --device <MPS|CPU> --source <HF|HF-Mirror|ModelScope> [--download-uvr5]
```

### Install Manually

#### Install Dependencies

```bash
conda create -n GPTSoVits python=3.10
conda activate GPTSoVits

pip install -r extra-req.txt --no-deps
pip install -r requirements.txt
```

#### Install FFmpeg

##### Conda Users

```bash
conda activate GPTSoVits
conda install ffmpeg
```

##### Ubuntu/Debian Users

```bash
sudo apt install ffmpeg
sudo apt install libsox-dev
```

##### Windows Users

Download and place [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) and [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) in the GPT-SoVITS root directory.

Install [Visual Studio 2017](https://aka.ms/vs/17/release/vc_redist.x86.exe)

##### macOS Users

```bash
brew install ffmpeg
```
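
Whichever route you take, you can confirm that both binaries are discoverable (or sit in the project root on Windows) before moving on:

```bash
# Both tools must be available for audio preprocessing to work
ffmpeg -version
ffprobe -version
```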

### Running GPT-SoVITS with Docker

#### Docker Image Selection

Because the codebase develops rapidly while Docker images are released more slowly, please:

- Check [Docker Hub](https://hub.docker.com/r/xxxxrt666/gpt-sovits) for the latest available image tags
- Choose an appropriate image tag for your environment
- `Lite` means the Docker image **does not include** ASR models or UVR5 models. You can download the UVR5 models manually, while the program downloads the ASR models automatically as needed
- The appropriate architecture image (amd64/arm64) is pulled automatically by Docker Compose
- Docker Compose mounts **all files** in the current directory, so switch to the project root directory and **pull the latest code** before using the Docker image
- Optionally, build the image locally using the provided Dockerfile to get the most up-to-date changes

#### Environment Variables

- `is_half`: Controls whether half precision (fp16) is enabled. Set it to `true` if your GPU supports fp16 to reduce memory usage.

#### Shared Memory Configuration

On Windows (Docker Desktop), the default shared memory size is small and may cause unexpected behavior. Increase `shm_size` (e.g., to `16g`) in your Docker Compose file based on your available system memory.
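
As an illustration, if you launch a container directly with `docker run` instead of Compose, the same two knobs map onto standard Docker flags (the image tag is a placeholder; check Docker Hub for real tags):

```bash
# --shm-size raises shared memory; -e passes the half-precision switch
docker run --rm -it \
  --shm-size=16g \
  -e is_half=true \
  xxxxrt666/gpt-sovits:<tag>
```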

#### Choosing a Service

The `docker-compose.yaml` defines the following services:

- `GPT-SoVITS-CU126` & `GPT-SoVITS-CU128`: Full versions with all features.
- `GPT-SoVITS-CU126-Lite` & `GPT-SoVITS-CU128-Lite`: Lightweight versions with reduced dependencies and functionality.

To run a specific service with Docker Compose, use:

```bash
docker compose run --service-ports <GPT-SoVITS-CU126-Lite|GPT-SoVITS-CU128-Lite|GPT-SoVITS-CU126|GPT-SoVITS-CU128>
```
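
For instance, to start the CUDA 12.8 Lite variant (service name from the list above):

```bash
docker compose run --service-ports GPT-SoVITS-CU128-Lite
```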

#### Building the Docker Image Locally

If you want to build the image yourself, use:

```bash
bash docker_build.sh --cuda <12.6|12.8> [--lite]
```
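
For example, a local CUDA 12.8 Lite build (flag values from the template above):

```bash
bash docker_build.sh --cuda 12.8 --lite
```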

#### Accessing the Running Container (Bash Shell)

Once the container is running in the background, you can access it using:

```bash
docker exec -it <GPT-SoVITS-CU126-Lite|GPT-SoVITS-CU128-Lite|GPT-SoVITS-CU126|GPT-SoVITS-CU128> bash
```

## Pretrained Models

**If `install.sh` runs successfully, you may skip steps 1-3.**

**Users in China can [download all these models here](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e/dkxgpiy9zb96hob4#nVNhX).**

1. Download pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) and place them in `GPT_SoVITS/pretrained_models` (see the scripted download sketch after this list).

2. Download G2PW models from [G2PWModel.zip(HF)](https://huggingface.co/XXXXRT/GPT-SoVITS-Pretrained/resolve/main/G2PWModel.zip) | [G2PWModel.zip(ModelScope)](https://www.modelscope.cn/models/XXXXRT/GPT-SoVITS-Pretrained/resolve/master/G2PWModel.zip), unzip the archive, rename the folder to `G2PWModel`, and place it in `GPT_SoVITS/text`. (Chinese TTS only)

3. For UVR5 (vocals/accompaniment separation & reverberation removal, optional), download models from [UVR5 Weights](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/uvr5_weights) and place them in `tools/uvr5/uvr5_weights`.

   - If you want to use `bs_roformer` or `mel_band_roformer` models with UVR5, download the model and its corresponding configuration file manually and put them in `tools/uvr5/uvr5_weights`. **Make sure the model file and configuration file share the same base name, differing only in extension.** In addition, both file names **must include `roformer`** to be recognized as roformer-class models.

   - It is suggested to **state the model type directly** in the model and configuration file names, e.g. `mel_band_roformer`, `bs_roformer`. If the type is not specified, it is inferred by comparing features in the configuration file. For example, the model `bs_roformer_ep_368_sdr_12.9628.ckpt` pairs with the configuration file `bs_roformer_ep_368_sdr_12.9628.yaml`, and `kim_mel_band_roformer.ckpt` pairs with `kim_mel_band_roformer.yaml`.

4. For Chinese ASR (optional), download models from [Damo ASR Model](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/files), [Damo VAD Model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/files), and [Damo Punc Model](https://modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/files) and place them in `tools/asr/models`.

5. For English or Japanese ASR (optional), download models from [Faster Whisper Large V3](https://huggingface.co/Systran/faster-whisper-large-v3) and place them in `tools/asr/models`. [Other models](https://huggingface.co/Systran) may deliver a similar effect with a smaller disk footprint.
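
A minimal sketch of step 1 as a script, assuming the `huggingface_hub` package (which provides the `huggingface-cli` tool) is installed in the active environment:

```bash
# Fetch the pretrained models from the Hugging Face repo named in step 1
pip install -U huggingface_hub
huggingface-cli download lj1995/GPT-SoVITS --local-dir GPT_SoVITS/pretrained_models
```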

## Dataset Format

The TTS annotation `.list` file format:

```
vocal_path|speaker_name|language|text
```

Language dictionary:

- 'zh': Chinese
- 'ja': Japanese
- 'en': English
- 'ko': Korean
- 'yue': Cantonese

Example:

```
D:\GPT-SoVITS\xxx/xxx.wav|xxx|en|I like playing Genshin.
```
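
A quick, minimal sanity check for an annotation file, assuming the transcript text itself contains no `|` characters (`my_dataset.list` is a hypothetical filename):

```bash
# Flag lines that do not have exactly 4 pipe-separated fields
# or whose language tag is not in the dictionary above
awk -F'|' '
  NF != 4 { printf "line %d: expected 4 fields, got %d\n", NR, NF }
  NF == 4 && $3 !~ /^(zh|ja|en|ko|yue)$/ { printf "line %d: unknown language \"%s\"\n", NR, $3 }
' my_dataset.list
```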

## Finetune and Inference

### Open WebUI

#### Integrated Package Users

Double-click `go-webui.bat` or use `go-webui.ps1`.
If you want to switch to V1, double-click `go-webui-v1.bat` or use `go-webui-v1.ps1`.

#### Others

```bash
python webui.py <language(optional)>
```

If you want to switch to V1, then run:

```bash
python webui.py v1 <language(optional)>
```

Or manually switch the version in the WebUI.

### Finetune

#### Path Auto-filling is Now Supported

1. Fill in the audio path
2. Slice the audio into small chunks
3. Denoise (optional)
4. Run ASR
5. Proofread the ASR transcriptions
6. Go to the next tab to finetune the model

### Open Inference WebUI

#### Integrated Package Users

Double-click `go-webui-v2.bat` or use `go-webui-v2.ps1`, then open the inference WebUI at `1-GPT-SoVITS-TTS/1C-inference`.

#### Others

```bash
python GPT_SoVITS/inference_webui.py <language(optional)>
```

OR

```bash
python webui.py
```

then open the inference WebUI at `1-GPT-SoVITS-TTS/1C-inference`.

## V2 Release Notes

New Features:

1. Support for Korean and Cantonese

2. An optimized text frontend

3. Pretrained model extended from 2k hours to 5k hours of audio

4. Improved synthesis quality for low-quality reference audio

[more details](<https://github.com/RVC-Boss/GPT-SoVITS/wiki/GPT%E2%80%90SoVITS%E2%80%90v2%E2%80%90features-(%E6%96%B0%E7%89%B9%E6%80%A7)>)

Use v2 from the v1 environment:

1. Run `pip install -r requirements.txt` to update some packages

2. Clone the latest code from GitHub.

3. Download the v2 pretrained models from [huggingface](https://huggingface.co/lj1995/GPT-SoVITS/tree/main/gsv-v2final-pretrained) and put them into `GPT_SoVITS/pretrained_models/gsv-v2final-pretrained`.

Additionally for Chinese TTS: download the G2PW models from [G2PWModel.zip(HF)](https://huggingface.co/XXXXRT/GPT-SoVITS-Pretrained/resolve/main/G2PWModel.zip) | [G2PWModel.zip(ModelScope)](https://www.modelscope.cn/models/XXXXRT/GPT-SoVITS-Pretrained/resolve/master/G2PWModel.zip), unzip the archive, rename the folder to `G2PWModel`, and place it in `GPT_SoVITS/text`.

## V3 Release Notes

New Features:

1. Higher timbre similarity, requiring less training data to approximate the target speaker (timbre similarity is significantly improved when using the base model directly, without fine-tuning).

2. A more stable GPT model with fewer repetitions and omissions, making it easier to generate speech with richer emotional expression.

[more details](<https://github.com/RVC-Boss/GPT-SoVITS/wiki/GPT%E2%80%90SoVITS%E2%80%90v3v4%E2%80%90features-(%E6%96%B0%E7%89%B9%E6%80%A7)>)

Use v3 from the v2 environment:

1. Run `pip install -r requirements.txt` to update some packages

2. Clone the latest code from GitHub.

3. Download the v3 pretrained models (s1v3.ckpt, s2Gv3.pth, and the models--nvidia--bigvgan_v2_24khz_100band_256x folder) from [huggingface](https://huggingface.co/lj1995/GPT-SoVITS/tree/main) and put them into `GPT_SoVITS/pretrained_models`.

Additionally, for the Audio Super Resolution model, see [how to download](./tools/AP_BWE_main/24kto48k/readme.txt).

## V4 Release Notes

New Features:

1. Version 4 fixes the metallic artifacts of Version 3 caused by non-integer-multiple upsampling, and natively outputs 48 kHz audio to prevent muffled sound (Version 3 natively outputs only 24 kHz audio). The author considers Version 4 a direct replacement for Version 3, though further testing is still needed.

[more details](<https://github.com/RVC-Boss/GPT-SoVITS/wiki/GPT%E2%80%90SoVITS%E2%80%90v3v4%E2%80%90features-(%E6%96%B0%E7%89%B9%E6%80%A7)>)

Use v4 from the v1/v2/v3 environment:

1. Run `pip install -r requirements.txt` to update some packages

2. Clone the latest code from GitHub.

3. Download the v4 pretrained models (gsv-v4-pretrained/s2v4.pth and gsv-v4-pretrained/vocoder.pth) from [huggingface](https://huggingface.co/lj1995/GPT-SoVITS/tree/main) and put them into `GPT_SoVITS/pretrained_models`.

## V2Pro Release Notes

New Features:

1. Slightly higher VRAM usage than v2, but surpassing v4's performance at v2's hardware cost and speed.
   [more details](<https://github.com/RVC-Boss/GPT-SoVITS/wiki/GPT%E2%80%90SoVITS%E2%80%90features-(%E5%90%84%E7%89%88%E6%9C%AC%E7%89%B9%E6%80%A7)>)

2. v1/v2 and the v2Pro series share the same characteristics, while v3/v4 have similar features. For training sets of average audio quality, v1/v2/v2Pro can deliver decent results, but v3/v4 cannot. Additionally, the synthesized tone and timbre of v3/v4 lean more toward the reference audio than toward the overall training set.

Use v2Pro from the v1/v2/v3/v4 environment:

1. Run `pip install -r requirements.txt` to update some packages

2. Clone the latest code from GitHub.

3. Download the v2Pro pretrained models (v2Pro/s2Dv2Pro.pth, v2Pro/s2Gv2Pro.pth, v2Pro/s2Dv2ProPlus.pth, v2Pro/s2Gv2ProPlus.pth, and sv/pretrained_eres2netv2w24s4ep4.ckpt) from [huggingface](https://huggingface.co/lj1995/GPT-SoVITS/tree/main) and put them into `GPT_SoVITS/pretrained_models`.

## Todo List

- [x] **High Priority:**

  - [x] Localization in Japanese and English.
  - [x] User guide.
  - [x] Japanese and English dataset fine-tune training.

- [ ] **Features:**
  - [x] Zero-shot voice conversion (5s) / few-shot voice conversion (1min).
  - [x] TTS speaking speed control.
  - [ ] ~~Enhanced TTS emotion control.~~ Maybe use pretrained fine-tuned preset GPT models for better emotion.
  - [ ] Experiment with changing SoVITS token inputs to a probability distribution over the GPT vocabulary (transformer latent).
  - [x] Improve the English and Japanese text frontend.
  - [ ] Develop tiny and larger-sized TTS models.
  - [x] Colab scripts.
  - [x] Try to expand the training dataset (2k hours -> 10k hours).
  - [x] Better SoVITS base model (enhanced audio quality).
  - [ ] Model mixing.

## (Additional) Method for Running from the Command Line

Use the command line to open the WebUI for UVR5:

```bash
python tools/uvr5/webui.py "<infer_device>" <is_half> <webui_port_uvr5>
```
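
For instance, filling the placeholders with illustrative values (the device string, precision flag, and port here are assumptions, not project defaults):

```bash
# Run UVR5's WebUI on CUDA with half precision, serving on port 9873
python tools/uvr5/webui.py "cuda" True 9873
```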

<!-- If you can't open a browser, follow the format below for UVR processing; this uses mdxnet for audio processing
```
python mdxnet.py --model --input_root --output_vocal --output_ins --agg_level --format --device --is_half_precision
``` -->

This is how the audio segmentation of the dataset is done using the command line:

```bash
python audio_slicer.py \
  --input_path "<path_to_original_audio_file_or_directory>" \
  --output_root "<directory_where_subdivided_audio_clips_will_be_saved>" \
  --threshold <volume_threshold> \
  --min_length <minimum_duration_of_each_subclip> \
  --min_interval <shortest_time_gap_between_adjacent_subclips> \
  --hop_size <step_size_for_computing_volume_curve>
```
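
As an example with illustrative parameter values (assumptions, not tuned defaults; adjust per dataset):

```bash
# Slice everything under raw/ into clips no shorter than 4 s,
# treating anything below -40 dB as silence
python audio_slicer.py \
  --input_path "raw" \
  --output_root "output/slicer_opt" \
  --threshold -40 \
  --min_length 4000 \
  --min_interval 300 \
  --hop_size 10
```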

This is how dataset ASR processing is done using the command line (Chinese only):

```bash
python tools/asr/funasr_asr.py -i <input> -o <output>
```

ASR processing is performed through Faster Whisper (ASR marking for languages other than Chinese).

(No progress bars; GPU performance may cause time delays.)

```bash
python ./tools/asr/fasterwhisper_asr.py -i <input> -o <output> -l <language> -p <precision>
```
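
For example, transcribing English clips with an assumed `float16` precision value (the exact set of accepted precision strings lives in the script, so treat this as a sketch):

```bash
# Label sliced English audio from the slicer output folder
python ./tools/asr/fasterwhisper_asr.py -i output/slicer_opt -o output/asr_opt -l en -p float16
```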

A custom `.list` save path is supported.

## Credits

Special thanks to the following projects and contributors:

### Theoretical Research

- [ar-vits](https://github.com/innnky/ar-vits)
- [SoundStorm](https://github.com/yangdongchao/SoundStorm/tree/master/soundstorm/s1/AR)
- [vits](https://github.com/jaywalnut310/vits)
- [TransferTTS](https://github.com/hcy71o/TransferTTS/blob/master/models.py#L556)
- [contentvec](https://github.com/auspicious3000/contentvec/)
- [hifi-gan](https://github.com/jik876/hifi-gan)
- [fish-speech](https://github.com/fishaudio/fish-speech/blob/main/tools/llama/generate.py#L41)
- [f5-TTS](https://github.com/SWivid/F5-TTS/blob/main/src/f5_tts/model/backbones/dit.py)
- [shortcut flow matching](https://github.com/kvfrans/shortcut-models/blob/main/targets_shortcut.py)

### Pretrained Models

- [Chinese Speech Pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain)
- [Chinese-Roberta-WWM-Ext-Large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large)
- [BigVGAN](https://github.com/NVIDIA/BigVGAN)
- [eresnetv2](https://modelscope.cn/models/iic/speech_eres2netv2w24s4ep4_sv_zh-cn_16k-common)

### Text Frontend for Inference

- [paddlespeech zh_normalization](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/paddlespeech/t2s/frontend/zh_normalization)
- [split-lang](https://github.com/DoodleBears/split-lang)
- [g2pW](https://github.com/GitYCC/g2pW)
- [pypinyin-g2pW](https://github.com/mozillazg/pypinyin-g2pW)
- [paddlespeech g2pw](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/paddlespeech/t2s/frontend/g2pw)

### WebUI Tools

- [ultimatevocalremovergui](https://github.com/Anjok07/ultimatevocalremovergui)
- [audio-slicer](https://github.com/openvpi/audio-slicer)
- [SubFix](https://github.com/cronrpc/SubFix)
- [FFmpeg](https://github.com/FFmpeg/FFmpeg)
- [gradio](https://github.com/gradio-app/gradio)
- [faster-whisper](https://github.com/SYSTRAN/faster-whisper)
- [FunASR](https://github.com/alibaba-damo-academy/FunASR)
- [AP-BWE](https://github.com/yxlu-0102/AP-BWE)

Thanks to @Naozumi520 for providing the Cantonese training set and for guidance on Cantonese-related knowledge.

## Thanks to all contributors for their efforts

<a href="https://github.com/RVC-Boss/GPT-SoVITS/graphs/contributors" target="_blank">
  <img src="https://contrib.rocks/image?repo=RVC-Boss/GPT-SoVITS" />
</a>

---
title: Ttstrain
emoji: 🌍
colorFrom: blue
colorTo: indigo
sdk: docker
pinned: false
---