- README.md +0 -263
- assets/models/embedders/Crusty/config.json +71 -0
- assets/{ico.png → models/embedders/Crusty/model.safetensors} +2 -2
- assets/{logs/Gura/added_IVF4130_Flat_nprobe_12.index → models/embedders/Crusty/pytorch_model.bin} +2 -2
- assets/weights/Gura.pth +0 -3
- main/app/app.py +56 -63
- main/app/parser.py +326 -294
- main/app/variables.py +269 -35
- main/configs/theme.json +1 -0
README.md
CHANGED

@@ -11,266 +11,3 @@ short_description: RVC

app_file: main/app/app.py
startup_duration_timeout: 1h
---

(removed lines 14–263, translated from Vietnamese:)

<div align="center">
<img alt="LOGO" src="assets/ico.png" width="300" height="300" />

# Vietnamese RVC BY ANH

A simple, high-quality, high-performance voice conversion tool.

[](https://github.com/PhamHuynhAnh16/Vietnamese-RVC)
[](https://colab.research.google.com/github/PhamHuynhAnh16/Vietnamese-RVC-ipynb/blob/main/Vietnamese-RVC.ipynb)
[](https://github.com/PhamHuynhAnh16/Vietnamese-RVC/blob/main/LICENSE)

</div>

<div align="center">

[](https://huggingface.co/spaces/AnhP/RVC-GUI)
[](https://huggingface.co/AnhP/Vietnamese-RVC-Project)

</div>

## Description

This project is a simple, easy-to-use voice conversion tool. Aiming for high-quality output with optimal performance, it lets users change a voice smoothly and naturally.

## Features

- Music separation (MDX-Net / Demucs / VR)
- Voice conversion (file conversion / batch conversion / conversion with Whisper / text-to-speech conversion)
- Apply effects to audio
- Create training data (from link URLs)
- Model training (v1 / v2, high-quality vocoders, energy training)
- Model fusion
- Read model information
- Export models to ONNX
- Download from the available model repository
- Search for models on the web
- Pitch extraction
- Audio conversion inference with ONNX models
- ONNX RVC models also support index files for inference
- Real-time voice conversion
- Create training references

**Pitch extraction methods: `pm-ac, pm-cc, pm-shs, dio, mangio-crepe-tiny, mangio-crepe-small, mangio-crepe-medium, mangio-crepe-large, mangio-crepe-full, crepe-tiny, crepe-small, crepe-medium, crepe-large, crepe-full, fcpe, fcpe-legacy, fcpe-previous, rmvpe, rmvpe-clipping, rmvpe-medfilt, rmvpe-clipping-medfilt, harvest, yin, pyin, swipe, piptrack, penn, mangio-penn, djcm, djcm-clipping, djcm-medfilt, djcm-clipping-medfilt, swift, pesto`**

**Embedding extraction models: `contentvec_base, hubert_base, vietnamese_hubert_base, japanese_hubert_base, korean_hubert_base, chinese_hubert_base, portuguese_hubert_base, spin-v1, spin-v2, whisper-tiny, whisper-tiny.en, whisper-base, whisper-base.en, whisper-small, whisper-small.en, whisper-medium, whisper-medium.en, whisper-large-v1, whisper-large-v2, whisper-large-v3, whisper-large-v3-turbo`**

- **The embedding extraction models support embedding modes such as: fairseq, onnx, transformers, spin, whisper.**
- **All pitch extraction methods have ONNX-accelerated versions, except methods that work through wrappers.**
- **Pitch extraction methods can be blended with each other by ratio for a fresh result, e.g. `hybrid[rmvpe+harvest]`.**

## Usage guide

**Coming if I ever find the time...**

## Advanced installation

Step 1: Install the required prerequisites

- Install Python from the official site: **[PYTHON](https://www.python.org/ftp/python/3.11.8/python-3.11.8-amd64.exe)** (the project has been tested on Python 3.10.x and 3.11.x)
- Install FFmpeg from source and add it to the system PATH: **[FFMPEG](https://github.com/BtbN/FFmpeg-Builds/releases)**

Step 2: Install the project (use Git or simply download from GitHub)

With Git:
- git clone https://github.com/PhamHuynhAnh16/Vietnamese-RVC.git
- cd Vietnamese-RVC

From GitHub:
- Go to https://github.com/PhamHuynhAnh16/Vietnamese-RVC
- Click the green `<> Code` button and choose `Download ZIP`
- Extract `Vietnamese-RVC-main.zip`
- Open the Vietnamese-RVC-main folder, click the address bar, type `cmd`, and press Enter

Step 3: Install the required libraries:

Enter:
```
python -m venv env
env\Scripts\activate

python -m pip install uv
uv pip install six packaging python-dateutil platformdirs pywin32 onnxconverter_common wget
```

Installation for different devices

<details>
<summary>For CPU</summary>

```
uv pip install -r requirements.txt
```

</details>

<details>
<summary>For CUDA</summary>

You can replace cu118 with the newer cu128 build if your GPU supports it:
```
uv pip install numpy==1.26.4 numba==0.61.0
uv pip install torch torchaudio torchvision --index-url https://download.pytorch.org/whl/cu118
uv pip install -r requirements.txt
```

</details>

<details>
<summary>For OPENCL (AMD)</summary>

```
uv pip install numpy==1.26.4 numba==0.61.0
uv pip install torch==2.6.0 torchaudio==2.6.0 torchvision
uv pip install https://github.com/artyom-beilis/pytorch_dlprim/releases/download/0.2.0/pytorch_ocl-0.2.0+torch2.6-cp311-none-win_amd64.whl
uv pip install onnxruntime-directml
uv pip install -r requirements.txt
```

Notes:
- OPENCL no longer seems to be supported upstream.
- Install only on Python 3.11, since there is no build for Python 3.10 with torch 2.6.0.
- Demucs can overload and exhaust GPU memory (if you need Demucs, open config.json in main\configs and set the demucs_cpu_mode argument to true).
- DDP multi-GPU training is not supported on OPENCL.
- Some algorithms must run on the CPU, so the GPU may not be fully utilized.

</details>

<details>
<summary>For DIRECTML (AMD)</summary>

```
uv pip install numpy==1.26.4 numba==0.61.0
uv pip install torch==2.4.1 torchaudio==2.4.1 torchvision
uv pip install torch-directml==0.2.5.dev240914
uv pip install onnxruntime-directml
uv pip install -r requirements.txt
```

Notes:
- DirectML has been out of active development for a long time.
- DirectML does not handle multithreaded workloads well, so extraction usually runs locked to a single thread.
- DirectML has partial fp16 support, but it is not recommended; you may only get fp32-level performance.
- DirectML has no memory-cleanup function; I wrote a simple one, but it may not be very effective.
- DirectML is designed for inference rather than training; training does run, but it is not recommended.

</details>

## Usage

**With Google Colab**
- Open Google Colab: [Vietnamese-RVC](https://colab.research.google.com/github/PhamHuynhAnh16/Vietnamese-RVC-ipynb/blob/main/Vietnamese-RVC.ipynb)
- Step 1: Run the Install cell and wait for it to finish.
- Step 2: Run the Open UI cell (the interface prints two URLs, one local at 0.0.0.0:7860 and one clickable gradio link; click the gradio link to reach the interface).

**Run run_app to open the UI; run tensorboard to open the training-monitoring charts. (Note: do not close the Command Prompt or Terminal)**
```
run_app.bat / tensorboard.bat
```

**Launch the UI. (Add `--allow_all_disk` to the command to let gradio access files outside the project)**
```
env\Scripts\python.exe main\app\app.py --open
```

**If you use Tensorboard to monitor training**
```
env\Scripts\python.exe main/app/run_tensorboard.py
```

**Command-line usage**
```
python main\app\parser.py --help
```

## Simple installation and usage

**Install a release build from [Vietnamese_RVC](https://github.com/PhamHuynhAnh16/Vietnamese-RVC/releases)**
- Choose the build that fits your system and download it.
- Extract the project.
- Run run_app.bat to open the UI.

**Use run_install.bat**
- Download the source code.
- Extract the project.
- Run run_install.bat to start the installation.
- Run run_app.bat to open the UI.

## NOTES

- **Newer vocoders such as MRF HIFIGAN do not yet have a full set of pretrained models**
- **The MRF HIFIGAN and REFINEGAN vocoders do not support training without pitch training**
- **Energy training can improve model quality, but no pretrained model exists for this feature yet**
- **The models in the Vietnamese-RVC repository were collected from AI Hub, HuggingFace, and other repositories, and may carry different licenses**

## Disclaimer

- **The Vietnamese-RVC project is developed for research, learning, and personal entertainment. I do not encourage, nor take responsibility for, any misuse of voice conversion technology for fraud, identity impersonation, or violation of the privacy or copyright of any individual or organization.**

- **Users are responsible for their own use of this software and agree to comply with the laws in force in the country where they live or operate.**

- **Using the voice of celebrities, real people, or public figures requires permission, or assurance that it does not violate the law, ethics, or the rights of the parties involved.**

- **The author of this project bears no legal responsibility for any consequences arising from the use of this software.**

## Terms of use

- You must ensure that the audio content you upload and convert through this project does not violate the intellectual property rights of third parties.

- This project must not be used for any illegal activity, including but not limited to fraud, harassment, or harming others.

- You are fully responsible for any damage arising from improper use of the product.

- I am not responsible for any direct or indirect damage arising from the use of this project.

## This project is built on the following projects

| Project | Author | License |
|--------------------------------------------------------------------------------------------------------------------------------|-------------------------|-------------|
| **[Applio](https://github.com/IAHispano/Applio/tree/main)** | IAHispano | MIT License |
| **[Python-audio-separator](https://github.com/nomadkaraoke/python-audio-separator/tree/main)** | Nomad Karaoke | MIT License |
| **[Retrieval-based-Voice-Conversion-WebUI](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/tree/main)** | RVC Project | MIT License |
| **[RVC-ONNX-INFER-BY-Anh](https://github.com/PhamHuynhAnh16/RVC_Onnx_Infer)** | Phạm Huỳnh Anh | MIT License |
| **[Torch-Onnx-Crepe-By-Anh](https://github.com/PhamHuynhAnh16/TORCH-ONNX-CREPE)** | Phạm Huỳnh Anh | MIT License |
| **[Hubert-No-Fairseq](https://github.com/PhamHuynhAnh16/hubert-no-fairseq)** | Phạm Huỳnh Anh | MIT License |
| **[Local-attention](https://github.com/lucidrains/local-attention)** | Phil Wang | MIT License |
| **[TorchFcpe](https://github.com/CNChTu/FCPE/tree/main)** | CN_ChiTu | MIT License |
| **[FcpeONNX](https://github.com/deiteris/voice-changer/blob/master-custom/server/utils/fcpe_onnx.py)** | Yury deiteris | MIT License |
| **[ContentVec](https://github.com/auspicious3000/contentvec)** | Kaizhi Qian | MIT License |
| **[Mediafiredl](https://github.com/Gann4Life/mediafiredl)** | Santiago Ariel Mansilla | MIT License |
| **[Noisereduce](https://github.com/timsainb/noisereduce)** | Tim Sainburg | MIT License |
| **[World.py-By-Anh](https://github.com/PhamHuynhAnh16/world.py)** | Phạm Huỳnh Anh | MIT License |
| **[Mega.py](https://github.com/3v1n0/mega.py)** | Marco Trevisan | No License |
| **[Gdown](https://github.com/wkentaro/gdown)** | Kentaro Wada | MIT License |
| **[Whisper](https://github.com/openai/whisper)** | OpenAI | MIT License |
| **[PyannoteAudio](https://github.com/pyannote/pyannote-audio)** | pyannote | MIT License |
| **[AudioEditingCode](https://github.com/HilaManor/AudioEditingCode)** | Hila Manor | MIT License |
| **[StftPitchShift](https://github.com/jurihock/stftPitchShift)** | Jürgen Hock | MIT License |
| **[Penn](https://github.com/interactiveaudiolab/penn)** | Interactive Audio Lab | MIT License |
| **[Voice Changer](https://github.com/deiteris/voice-changer)** | Yury deiteris | MIT License |
| **[Pesto](https://github.com/SonyCSLParis/pesto)** | Sony CSL Paris | LGPL 3.0 |

## Model repository for the model search tool

- **[VOICE-MODELS.COM](https://voice-models.com/)**

## Bug reports

- **If the bug reporting system does not work, you can report bugs to me via Discord `pham_huynh_anh` or open an [ISSUE](https://github.com/PhamHuynhAnh16/Vietnamese-RVC/issues)**

## ☎️ Contact me

- Discord: **pham_huynh_anh**
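The `hybrid[rmvpe+harvest]` syntax mentioned in the removed README blends two pitch extractors by ratio. A minimal sketch of that idea (the parsing and the equal-weight averaging here are illustrative assumptions, not the project's actual implementation):

```python
import re

def parse_hybrid(spec: str) -> list[str]:
    # "hybrid[rmvpe+harvest]" -> ["rmvpe", "harvest"]; plain names pass through.
    m = re.fullmatch(r"hybrid\[(.+)\]", spec)
    if not m:
        return [spec]
    return m.group(1).split("+")

def blend_f0(curves: list[list[float]]) -> list[float]:
    # Equal-weight average of per-frame f0 values from each method.
    return [sum(frame) / len(frame) for frame in zip(*curves)]

methods = parse_hybrid("hybrid[rmvpe+harvest]")
print(methods)                                      # ['rmvpe', 'harvest']
print(blend_f0([[100.0, 200.0], [110.0, 190.0]]))   # [105.0, 195.0]
```

In practice each name would select a real extractor and the blend could be weighted, but the string format itself is this simple.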
assets/models/embedders/Crusty/config.json
ADDED

@@ -0,0 +1,71 @@

```json
{
  "activation_dropout": 0.1,
  "apply_spec_augment": true,
  "architectures": ["HubertModel"],
  "attention_dropout": 0.1,
  "bos_token_id": 1,
  "classifier_proj_size": 256,
  "conv_bias": false,
  "conv_dim": [512, 512, 512, 512, 512, 512, 512],
  "conv_kernel": [10, 3, 3, 3, 3, 2, 2],
  "conv_stride": [5, 2, 2, 2, 2, 2, 2],
  "ctc_loss_reduction": "sum",
  "ctc_zero_infinity": false,
  "do_stable_layer_norm": false,
  "eos_token_id": 2,
  "feat_extract_activation": "gelu",
  "feat_extract_norm": "group",
  "feat_proj_dropout": 0.0,
  "feat_proj_layer_norm": true,
  "final_dropout": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "layerdrop": 0.1,
  "mask_feature_length": 10,
  "mask_feature_min_masks": 0,
  "mask_feature_prob": 0.0,
  "mask_time_length": 10,
  "mask_time_min_masks": 2,
  "mask_time_prob": 0.05,
  "model_type": "hubert",
  "num_attention_heads": 12,
  "num_conv_pos_embedding_groups": 16,
  "num_conv_pos_embeddings": 128,
  "num_feat_extract_layers": 7,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "torch_dtype": "float16",
  "transformers_version": "4.34.1",
  "use_weighted_layer_sum": false,
  "vocab_size": 32
}
```
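The `conv_stride` values in this HuBERT config determine how much the convolutional feature extractor downsamples the input waveform. A quick sanity check (a sketch, not project code; the 16 kHz input rate is the usual HuBERT assumption):

```python
# Compute the downsampling factor of the Crusty embedder's feature extractor
# from the conv_stride list in its config.json.
import json
from functools import reduce

config = json.loads('{"conv_stride": [5, 2, 2, 2, 2, 2, 2], "hidden_size": 768}')

# Total stride is the product of the per-layer strides: samples per output frame.
hop = reduce(lambda a, b: a * b, config["conv_stride"])
print(hop)           # 320
print(16000 // hop)  # 50 frames of 768-dim features per second at 16 kHz
```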
assets/{ico.png → models/embedders/Crusty/model.safetensors}
RENAMED

@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:cd3ea2c9290b9e4c5b1fe4eb18628437ab78c05365ca2f2458781e5d853d3530
+ size 188767088
assets/{logs/Gura/added_IVF4130_Flat_nprobe_12.index → models/embedders/Crusty/pytorch_model.bin}
RENAMED

@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:cd3ea2c9290b9e4c5b1fe4eb18628437ab78c05365ca2f2458781e5d853d3530
+ size 188767088
assets/weights/Gura.pth
DELETED

@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:5a706ba94b279763d254e661c8fd0a19775afc5768b71ed8b792e7eec1b770d2
- size 55026095
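The asset files above are Git LFS pointer files rather than the weights themselves: three `key value` lines (version, oid, size). A small sketch of parsing one, using the pointer content shown above:

```python
# Parse a Git LFS pointer file into its version/oid/size fields.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # split on the first space only
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:cd3ea2c9290b9e4c5b1fe4eb18628437ab78c05365ca2f2458781e5d853d3530
size 188767088"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 188767088
```

Checking the `sha256:` digest against the downloaded blob is how LFS verifies the real file matches the pointer.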
main/app/app.py
CHANGED

@@ -18,7 +18,16 @@ from main.app.tabs.training.training import training_tab

```diff
  from main.app.tabs.downloads.downloads import download_tab
  from main.app.tabs.inference.inference import inference_tab
  from main.configs.rpc import connect_discord_ipc, send_discord_rpc
- from main.app.variables import logger, config, translations, theme, font, configs, language, allow_disk
+ from main.app.variables import (
+     logger,
+     config,
+     translations,
+     theme,
+     font,
+     configs,
+     language,
+     allow_disk,
+ )

  ssl._create_default_https_context = ssl._create_unverified_context
```

@@ -26,7 +35,8 @@ warnings.filterwarnings("ignore")

```diff
  for l in ["httpx", "gradio", "uvicorn", "httpcore", "urllib3"]:
      logging.getLogger(l).setLevel(logging.ERROR)

- js_code = """
+ js_code = (
+     """
  () => {
      window._activeStream = null;
      window._audioCtx = null;
```

@@ -406,75 +416,64 @@ js_code = """

```diff
      }
  };
  }
+ """.replace("__MEDIA_DEVICES__", translations["media_devices"])
+     .replace("__MIC_INACCESSIBLE__", translations["mic_inaccessible"])
+     .replace("__PROVIDE_AUDIO_DEVICE__", translations["provide_audio_device"])
+     .replace("__PROVIDE_MONITOR_DEVICE__", translations["provide_monitor_device"])
+     .replace("__START_REALTIME__", translations["start_realtime"])
+     .replace("__LATENCY__", translations["latency"])
+     .replace("__WS_CONNECTED__", translations["ws_connected"])
+     .replace("__WS_CLOSED__", translations["ws_closed"])
+     .replace("__REALTIME_STARTED__", translations["realtime_is_ready"])
+     .replace("__ERROR__", translations["error_occurred"].format(e=""))
+     .replace("__REALTIME_HAS_STOP__", translations["realtime_has_stop"])
+     .replace(
+         "__PROVIDE_MODEL__",
+         translations["provide_file"].format(filename=translations["model"]),
+     )
  )

+ client_mode = True  # "--client" in sys.argv

  with gr.Blocks(
      title="📱 Vietnamese-RVC GUI BY ANH",
      js=js_code if client_mode else None,
      theme=theme,
      css="<style> @import url('{fonts}'); * {{font-family: 'Courgette', cursive !important;}} body, html {{font-family: 'Courgette', cursive !important;}} h1, h2, h3, h4, h5, h6, p, button, input, textarea, label, span, div, select {{font-family: 'Courgette', cursive !important;}} </style>".format(
+         fonts=font or "https://fonts.googleapis.com/css2?family=Courgette&display=swap"
+     ),
  ) as app:
-     gr.HTML(f"<h3 style='text-align: center;'>{translations['title']}</h3>")
      with gr.Tabs():
          inference_tab()
          editing_tab()

          if client_mode:
              from main.app.tabs.realtime.realtime_client import realtime_client_tab
+
              realtime_client_tab()
          else:
              from main.app.tabs.realtime.realtime import realtime_tab
+
              realtime_tab()

          training_tab()
          download_tab()
          extra_tab(app)

-     with gr.Row():
-         gr.Markdown(translations["rick_roll"].format(rickroll=codecs.decode('uggcf://jjj.lbhghor.pbz/jngpu?i=qDj4j9JtKpD', 'rot13')))
-
      with gr.Row():
          gr.Markdown(translations["terms_of_use"])

      with gr.Row():
          gr.Markdown(translations["exemption"])

  if __name__ == "__main__":
      logger.info(config.device.replace("privateuseone", "dml"))
      logger.info(translations["start_app"])
+     logger.info(translations["set_lang"].format(lang=language))

      port = configs.get("app_port", 7860)
      server_name = configs.get("server_name", "0.0.0.0")
+     share = False

      original_stdout = sys.stdout
      sys.stdout = io.StringIO()
```

@@ -482,15 +481,15 @@ with gr.Blocks(

```diff
      for i in range(configs.get("num_of_restart", 5)):
          try:
              gradio_app, _, share_url = app.queue().launch(
                  favicon_path=configs["ico_path"],
                  server_name=server_name,
                  server_port=port,
                  show_error=configs.get("app_show_error", False),
                  inbrowser="--open" in sys.argv,
                  share=share,
                  allowed_paths=allow_disk,
                  prevent_thread_lock=True,
-                 quiet=True
+                 quiet=True,
              )
              break
          except OSError:
```

@@ -502,23 +501,17 @@

```diff
      if client_mode:
          from main.app.core.realtime_client import app as fastapi_app
+
          gradio_app.mount("/api", fastapi_app)

      sys.stdout = original_stdout

-     pipe = connect_discord_ipc()
-     if pipe:
-         try:
-             logger.info(translations["start_rpc"])
-             send_discord_rpc(pipe)
-         except KeyboardInterrupt:
-             logger.info(translations["stop_rpc"])
-             pipe.close()

      logger.info(f"{translations['running_local_url']}: {server_name}:{port}")
      if share:
+         logger.info(f"{translations['running_share_url']}: {share_url}")
+     logger.info(
+         f"{translations['gradio_start']}: {(time.time() - start_time):.2f}s"
+     )

      while 1:
          time.sleep(5)
```
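The rewritten `js_code` block localizes its UI strings by chaining `str.replace` over `__PLACEHOLDER__` tokens before the JavaScript is handed to Gradio. The same pattern in miniature (the template and translation values here are made up for illustration):

```python
# Placeholder-substitution pattern used for js_code: each __TOKEN__ in the
# template is swapped for a localized string before the script is used.
template = "alert('__WS_CONNECTED__'); console.log('__LATENCY__');"

translations = {
    "ws_connected": "WebSocket connected",
    "latency": "Latency",
}

js = (
    template.replace("__WS_CONNECTED__", translations["ws_connected"])
    .replace("__LATENCY__", translations["latency"])
)
print(js)  # alert('WebSocket connected'); console.log('Latency');
```

The double-underscore tokens are chosen so they cannot collide with real JavaScript identifiers, which is why a plain string replace is safe here.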
main/app/parser.py
CHANGED

@@ -8,362 +8,394 @@ try:

```diff
  try:
      ...
  except IndexError:
      argv = None

- argv_is_allows = [
-     ...
- ]

  if argv not in argv_is_allows:
-     print(...)
      quit()

- if argv_is_allows[0] in argv:
-     ...
- elif argv_is_allows[...] in argv:
-     ...
  elif argv_is_allows[8] in argv:
-     print("""
-     1. Input/output:
-         - `--input_path` (...)
-         - `--output_path` (...)
-         - `--export_format` (...)
-
-     2. Resample:
-         - `--resample` (...)
-         - `--resample_sr` (...)
-
-     3. Chorus:
-         - `--chorus`: ...
-         - `--chorus_depth`, `--chorus_rate`, `--chorus_mix`, `--chorus_delay`, `--chorus_feedback`: ...
-
-     4. Distortion:
-         - `--distortion`: ...
-         - `--drive_db`: ...
-
-     5. Reverb:
-         - `--reverb`: ...
-         - `--reverb_room_size`, `--reverb_damping`, `--reverb_wet_level`, `--reverb_dry_level`, `--reverb_width`, `--reverb_freeze_mode`: ...
-
-     6. Pitch shift:
-         - `--pitchshift`: ...
-         - `--pitch_shift`: ...
-
-     7. Delay:
-         - `--delay`: ...
-         - `--delay_seconds`, `--delay_feedback`, `--delay_mix`: ...
-
-     8. Compressor:
-         - `--compressor`: ...
-         - `--compressor_threshold`, `--compressor_ratio`, `--compressor_attack_ms`, `--compressor_release_ms`: ...
-
-     9. Limiter:
-         - `--limiter`: ...
-         - `--limiter_threshold`, `--limiter_release`: ...
-
-     10. Gain (...):
-         - `--gain`: ...
-         - `--gain_db`: ...
-
-     11. Bitcrush:
-         - `--bitcrush`: ...
-         - `--bitcrush_bit_depth`: ...
-
-     12. Clipping:
-         - `--clipping`: ...
-         - `--clipping_threshold`: ...
-
-     13. Phaser:
-         - `--phaser`: ...
-         - `--phaser_rate_hz`, `--phaser_depth`, `--phaser_centre_frequency_hz`, `--phaser_feedback`, `--phaser_mix`: ...
-
-     14. Boost bass & treble:
-         - `--treble_bass_boost`: ...
-         - `--bass_boost_db`, `--bass_boost_frequency`, `--treble_boost_db`, `--treble_boost_frequency`: ...
-
-     15. Fade in & fade out:
-         - `--fade_in_out`: ...
-         - `--fade_in_duration`, `--fade_out_duration`: ...
-
-     16. Audio combination:
-         - `--audio_combination`: ...
-         - `--audio_combination_input`: ...
-         - `--main_volume`: ...
-         - `--combination_volume`: ...
      """)
      quit()
  elif argv_is_allows[9] in argv:
-     print("""
-     1. Conversion parameters:
-         - `--pitch` (...)
-         - `--filter_radius` (...)
-         - `--index_rate` (...)
-         - `--rms_mix_rate` (...)
-         - `--protect` (...)
-         - `--hop_length` (...)
-
-     2. F0 (pitch extraction):
-         - `--f0_method` (...)
-         - `--f0_autotune` (...)
-         - `--f0_autotune_strength` (...)
-         - `--f0_file` (...)
-         - `--f0_onnx` (...)
-         - `--proposal_pitch` (...)
-         - `--proposal_pitch_threshold` (...)
-         - `--alpha` (...)
-
-     3. Embedders:
-         - `--embedder_model` (...)
-         - `--embedders_mode` (...)
-
-     4. Input/output and model paths:
-         - `--input_path` (...)
-         - `--output_path` (...)
-         - `--export_format` (...)
-         - `--pth_path` (...)
-         - `--index_path` (...)
-
-     5. Audio cleaning:
-         - `--clean_audio` (...)
-         - `--clean_strength` (...)
```
|
| 130 |
-
|
| 131 |
-
6. Resampling &
|
| 132 |
-
- `--resample_sr` (
|
| 133 |
-
- `--split_audio` (
|
| 134 |
-
|
| 135 |
-
7.
|
| 136 |
-
- `--checkpointing` (
|
| 137 |
-
|
| 138 |
-
8.
|
| 139 |
-
- `--formant_shifting` (
|
| 140 |
-
- `--formant_qfrency` (
|
| 141 |
-
- `--formant_timbre` (
|
| 142 |
""")
|
| 143 |
quit()
|
| 144 |
elif argv_is_allows[10] in argv:
|
| 145 |
-
print("""
|
| 146 |
-
1.
|
| 147 |
-
- `--input_data` (
|
| 148 |
-
- `--output_dirs` (
|
| 149 |
-
- `--sample_rate` (
|
| 150 |
-
|
| 151 |
-
2.
|
| 152 |
-
- `--clean_dataset` (
|
| 153 |
-
- `--clean_strength` (
|
| 154 |
-
|
| 155 |
-
3.
|
| 156 |
-
- `--separate` (
|
| 157 |
-
- `--separator_reverb` (
|
| 158 |
-
- `--model_name` (
|
| 159 |
-
- `--reverb_model` (
|
| 160 |
-
- `--denoise_model` (
|
| 161 |
|
| 162 |
-
4.
|
| 163 |
-
- `--shifts` (
|
| 164 |
-
- `--batch_size` (
|
| 165 |
-
- `--overlap` (
|
| 166 |
-
- `--aggression` (
|
| 167 |
-
- `--hop_length` (
|
| 168 |
-
- `--window_size` (
|
| 169 |
-
- `--segments_size` (
|
| 170 |
-
- `--post_process_threshold` (
|
| 171 |
-
|
| 172 |
-
5.
|
| 173 |
-
- `--enable_tta` (
|
| 174 |
-
- `--enable_denoise` (
|
| 175 |
-
- `--high_end_process` (
|
| 176 |
-
- `--enable_post_process` (
|
| 177 |
-
|
| 178 |
-
6.
|
| 179 |
-
- `--skip_seconds` (
|
| 180 |
-
- `--skip_start_audios` (
|
| 181 |
-
- `--skip_end_audios` (
|
| 182 |
""")
|
| 183 |
quit()
|
| 184 |
elif argv_is_allows[11] in argv:
|
| 185 |
-
|
| 186 |
-
1.
|
| 187 |
-
- `--model_name` (
|
| 188 |
-
- `--rvc_version` (
|
| 189 |
-
- `--index_algorithm` (
|
| 190 |
""")
|
| 191 |
quit()
|
| 192 |
elif argv_is_allows[12] in argv:
|
| 193 |
-
print("""
|
| 194 |
-
1.
|
| 195 |
-
- `--model_name` (
|
| 196 |
-
- `--rvc_version` (
|
| 197 |
-
|
| 198 |
-
2.
|
| 199 |
-
- `--f0_method` (
|
| 200 |
-
- `--f0_onnx` (
|
| 201 |
-
- `--pitch_guidance` (
|
| 202 |
-
- `--f0_autotune` (
|
| 203 |
-
- `--f0_autotune_strength` (
|
| 204 |
-
- `--alpha` (
|
| 205 |
-
|
| 206 |
-
3.
|
| 207 |
-
- `--hop_length` (
|
| 208 |
-
- `--cpu_cores` (
|
| 209 |
-
- `--gpu` (
|
| 210 |
-
- `--sample_rate` (
|
| 211 |
-
|
| 212 |
-
4.
|
| 213 |
-
- `--embedder_model` (
|
| 214 |
-
- `--embedders_mode` (
|
| 215 |
|
| 216 |
4. RMS:
|
| 217 |
-
- `--rms_extract` (
|
| 218 |
""")
|
| 219 |
quit()
|
| 220 |
elif argv_is_allows[13] in argv:
|
| 221 |
-
print("""
|
| 222 |
-
1.
|
| 223 |
-
- `--model_name` (
|
| 224 |
-
|
| 225 |
-
2.
|
| 226 |
-
- `--dataset_path` (
|
| 227 |
-
- `--sample_rate` (
|
| 228 |
-
|
| 229 |
-
3.
|
| 230 |
-
- `--cpu_cores` (
|
| 231 |
-
- `--cut_preprocess` (
|
| 232 |
-
- `--process_effects` (
|
| 233 |
-
- `--clean_dataset` (
|
| 234 |
-
- `--clean_strength` (
|
| 235 |
|
| 236 |
-
4.
|
| 237 |
-
- `--chunk_len` (
|
| 238 |
-
- `--overlap_len` (
|
| 239 |
-
- `--normalization_mode` (
|
| 240 |
""")
|
| 241 |
quit()
|
| 242 |
elif argv_is_allows[14] in argv:
|
| 243 |
-
print("""
|
| 244 |
-
1.
|
| 245 |
-
- `--input_path` (
|
| 246 |
-
- `--output_dirs` (
|
| 247 |
-
- `--export_format` (
|
| 248 |
-
- `--sample_rate` (
|
| 249 |
-
|
| 250 |
-
2.
|
| 251 |
-
- `--model_name` (
|
| 252 |
-
- `--karaoke_model` (
|
| 253 |
-
- `--reverb_model` (
|
| 254 |
-
- `--denoise_model` (
|
| 255 |
-
|
| 256 |
-
3.
|
| 257 |
-
- `--shifts` (
|
| 258 |
-
- `--batch_size` (
|
| 259 |
-
- `--overlap` (
|
| 260 |
-
- `--aggression` (
|
| 261 |
-
- `--hop_length` (
|
| 262 |
-
- `--window_size` (
|
| 263 |
-
- `--segments_size` (
|
| 264 |
-
- `--post_process_threshold` (
|
| 265 |
-
|
| 266 |
-
4.
|
| 267 |
-
- `--enable_tta` (
|
| 268 |
-
- `--enable_denoise` (
|
| 269 |
-
- `--high_end_process` (
|
| 270 |
-
- `--enable_post_process` (
|
| 271 |
-
- `--separate_backing` (
|
| 272 |
-
- `--separate_reverb` (
|
| 273 |
""")
|
| 274 |
quit()
|
| 275 |
elif argv_is_allows[15] in argv:
|
| 276 |
-
|
| 277 |
-
1.
|
| 278 |
-
- `--model_name` (
|
| 279 |
-
- `--rvc_version` (
|
| 280 |
-
- `--model_author` (
|
| 281 |
-
|
| 282 |
-
2.
|
| 283 |
-
- `--save_every_epoch` (
|
| 284 |
-
- `--save_only_latest` (
|
| 285 |
-
- `--save_every_weights` (
|
| 286 |
-
|
| 287 |
-
3.
|
| 288 |
-
- `--total_epoch` (
|
| 289 |
-
- `--batch_size` (
|
| 290 |
-
|
| 291 |
-
4.
|
| 292 |
-
- `--gpu` (
|
| 293 |
-
- `--cache_data_in_gpu` (
|
| 294 |
-
|
| 295 |
-
5.
|
| 296 |
-
- `--pitch_guidance` (
|
| 297 |
-
- `--g_pretrained_path` (
|
| 298 |
-
- `--d_pretrained_path` (
|
| 299 |
-
- `--vocoder` (
|
| 300 |
-
- `--energy_use` (
|
| 301 |
-
|
| 302 |
-
6.
|
| 303 |
-
- `--overtraining_detector` (
|
| 304 |
-
- `--overtraining_threshold` (
|
| 305 |
-
|
| 306 |
-
7.
|
| 307 |
-
- `--cleanup` (
|
| 308 |
-
|
| 309 |
-
8.
|
| 310 |
-
- `--checkpointing` (
|
| 311 |
-
- `--deterministic` (
|
| 312 |
-
- `--benchmark` (
|
| 313 |
-
- `--optimizer` (
|
| 314 |
-
- `--multiscale_mel_loss` (
|
| 315 |
|
| 316 |
-
9.
|
| 317 |
-
- `--use_custom_reference` (
|
| 318 |
-
- `--reference_path` (
|
| 319 |
""")
|
| 320 |
quit()
|
| 321 |
-
elif argv_is_allows
|
| 322 |
-
print("""
|
| 323 |
-
1.
|
| 324 |
-
- `--audio_path` (
|
| 325 |
-
- `--reference_name` (
|
| 326 |
|
| 327 |
-
2.
|
| 328 |
-
- `--pitch_guidance` (
|
| 329 |
-
- `--energy_use` (
|
| 330 |
-
- `--version` (
|
| 331 |
-
|
| 332 |
-
3.
|
| 333 |
-
- `--embedder_model` (
|
| 334 |
-
- `--embedders_mode` (
|
| 335 |
|
| 336 |
-
4.
|
| 337 |
-
- `--f0_method` (
|
| 338 |
-
- `--f0_onnx` (
|
| 339 |
-
- `--f0_up_key` (
|
| 340 |
-
- `--filter_radius` (
|
| 341 |
-
- `--f0_autotune` (
|
| 342 |
-
- `--f0_autotune_strength` (
|
| 343 |
-
- `--f0_file` (
|
| 344 |
-
- `--proposal_pitch` (
|
| 345 |
-
- `--proposal_pitch_threshold` (
|
| 346 |
-
- `--alpha` (
|
| 347 |
""")
|
| 348 |
quit()
|
| 349 |
-
elif argv_is_allows
|
| 350 |
-
print("""
|
| 351 |
-
1. `--help_audio_effects`:
|
| 352 |
-
2. `--help_convert`:
|
| 353 |
-
3. `--help_create_dataset`:
|
| 354 |
-
4. `--help_create_index`:
|
| 355 |
-
5. `--help_extract`:
|
| 356 |
-
6. `--help_preprocess`:
|
| 357 |
-
7. `--help_separate_music`:
|
| 358 |
-
8. `--help_train`:
|
| 359 |
-
9. `--help_create_reference`:
|
| 360 |
""")
|
| 361 |
quit()
|
| 362 |
|
|
|
|
| 363 |
if __name__ == "__main__":
|
| 364 |
import torch.multiprocessing as mp
|
| 365 |
|
| 366 |
-
if "--train" in argv:
|
| 367 |
-
|
|
|
|
|
|
|
| 368 |
|
| 369 |
-
main()
|
|
|
|
 except IndexError:
     argv = None

+argv_is_allows = [
+    "--audio_effects",
+    "--convert",
+    "--create_dataset",
+    "--create_index",
+    "--extract",
+    "--preprocess",
+    "--separator_music",
+    "--train",
+    "--help_audio_effects",
+    "--help_convert",
+    "--help_create_dataset",
+    "--help_create_index",
+    "--help_extract",
+    "--help_preprocess",
+    "--help_separate_music",
+    "--help_train",
+    "--help",
+    "--create_reference",
+    "--help_create_reference",
+]

 if argv not in argv_is_allows:
+    print("Invalid syntax! Use --help for more.")
     quit()

+if argv_is_allows[0] in argv:
+    from main.inference.audio_effects import main
+elif argv_is_allows[1] in argv:
+    from main.inference.conversion.convert import main
+elif argv_is_allows[2] in argv:
+    from main.inference.create_dataset import main
+elif argv_is_allows[3] in argv:
+    from main.inference.create_index import main
+elif argv_is_allows[4] in argv:
+    from main.inference.extracting.extract import main
+elif argv_is_allows[5] in argv:
+    from main.inference.preprocess.preprocess import main
+elif argv_is_allows[6] in argv:
+    from main.inference.separate_music import main
+elif argv_is_allows[7] in argv:
+    from main.inference.training.train import main
+elif argv_is_allows[17] in argv:
+    from main.inference.create_reference import main
 elif argv_is_allows[8] in argv:
+    print("""The parameters for `--audio_effects`:
+1. File path:
+    - `--input_path` (required): Path to the input audio file.
+    - `--output_path` (default: `./audios/apply_effects.wav`): Path to save the output file.
+    - `--export_format` (default: `wav`): File export format (`wav`, `mp3`, ...).

+2. Resampling:
+    - `--resample` (default: `False`): Whether to resample or not.
+    - `--resample_sr` (default: `0`): New sample rate (Hz).

+3. Chorus effect:
+    - `--chorus`: Enable/disable chorus.
+    - `--chorus_depth`, `--chorus_rate`, `--chorus_mix`, `--chorus_delay`, `--chorus_feedback`: Parameters to adjust chorus.

+4. Distortion effect:
+    - `--distortion`: Enable/disable distortion.
+    - `--drive_db`: Audio distortion level.

+5. Reverb effect:
+    - `--reverb`: Enable/disable reverb.
+    - `--reverb_room_size`, `--reverb_damping`, `--reverb_wet_level`, `--reverb_dry_level`, `--reverb_width`, `--reverb_freeze_mode`: Adjust reverb.

+6. Pitch shift effect:
+    - `--pitchshift`: Enable/disable pitch shift.
+    - `--pitch_shift`: Pitch shift value.

+7. Delay effect:
+    - `--delay`: Enable/disable delay.
+    - `--delay_seconds`, `--delay_feedback`, `--delay_mix`: Adjust delay time, feedback, and mix.

 8. Compressor:
+    - `--compressor`: Enable/disable compressor.
+    - `--compressor_threshold`, `--compressor_ratio`, `--compressor_attack_ms`, `--compressor_release_ms`: Compression parameters.

 9. Limiter:
+    - `--limiter`: Enable/disable audio level limiting.
+    - `--limiter_threshold`, `--limiter_release`: Limiting threshold and release time.

+10. Gain (Amplification):
+    - `--gain`: Enable/disable gain.
+    - `--gain_db`: Gain level (dB).

 11. Bitcrush:
+    - `--bitcrush`: Enable/disable bit reduction effect.
+    - `--bitcrush_bit_depth`: Bitcrush bit depth.

 12. Clipping:
+    - `--clipping`: Enable/disable audio clipping.
+    - `--clipping_threshold`: Clipping threshold.

 13. Phaser:
+    - `--phaser`: Enable/disable phaser effect.
+    - `--phaser_rate_hz`, `--phaser_depth`, `--phaser_centre_frequency_hz`, `--phaser_feedback`, `--phaser_mix`: Adjust phaser effect.

 14. Boost bass & treble:
+    - `--treble_bass_boost`: Enable/disable bass and treble enhancement.
+    - `--bass_boost_db`, `--bass_boost_frequency`, `--treble_boost_db`, `--treble_boost_frequency`: Bass and treble boost parameters.

 15. Fade in & fade out:
+    - `--fade_in_out`: Enable/disable fade effect.
+    - `--fade_in_duration`, `--fade_out_duration`: Fade in/out duration.
+
+16. Audio combination:
+    - `--audio_combination`: Enable/disable combining multiple audio files.
+    - `--audio_combination_input`: Path to additional audio file.
+    - `--main_volume`: Volume of the main audio.
+    - `--combination_volume`: Volume of the audio to be combined.
 """)
     quit()
 elif argv_is_allows[9] in argv:
+    print("""The parameters for --convert:
+1. Voice processing configuration:
+    - `--pitch` (default: `0`): Adjust pitch.
+    - `--filter_radius` (default: `3`): Smoothness of the F0 curve.
+    - `--index_rate` (default: `0.5`): Rate of using the voice index.
+    - `--rms_mix_rate` (default: `1`): Coefficient for adjusting volume amplitude.
+    - `--protect` (default: `0.33`): Protect consonants.
+    - `--hop_length` (default: `64`): Hop length during audio processing.
+
+2. F0 configuration:
+    - `--f0_method` (default: `rmvpe`): F0 prediction method (`pm`, `dio`, `mangio-crepe-tiny`, `mangio-crepe-small`, `mangio-crepe-medium`, `mangio-crepe-large`, `mangio-crepe-full`, `crepe-tiny`, `crepe-small`, `crepe-medium`, `crepe-large`, `crepe-full`, `fcpe`, `fcpe-legacy`, `rmvpe`, `rmvpe-legacy`, `harvest`, `yin`, `pyin`, `swipe`).
+    - `--f0_autotune` (default: `False`): Whether to automatically adjust F0 or not.
+    - `--f0_autotune_strength` (default: `1`): Intensity of automatic F0 correction.
+    - `--f0_file` (default: ``): Path to an existing F0 file.
+    - `--f0_onnx` (default: `False`): Whether to use the ONNX version of F0 or not.
+    - `--proposal_pitch` (default: `False`): Propose pitch instead of manual adjustment.
+    - `--proposal_pitch_threshold` (default: `0.0`): Frequency threshold for pitch estimation.
+    - `--alpha` (default: `0.5`): Pitch mixing threshold for hybrid pitch estimation.
+
+3. Embedding model:
+    - `--embedder_model` (default: `hubert_base`): Embedding model to use.
+    - `--embedders_mode` (default: `fairseq`): Embedding mode (`fairseq`, `transformers`, `onnx`, `whisper`).
+
+4. File path:
+    - `--input_path` (required): Input audio file path.
+    - `--output_path` (default: `./audios/output.wav`): Path to save the output file.
+    - `--export_format` (default: `wav`): File export format.
+    - `--pth_path` (required): Path to the `.pth` model file.
+    - `--index_path` (default: `None`): Index file path (if any).
+
+5. Audio cleaning:
+    - `--clean_audio` (default: `False`): Whether to apply audio cleaning or not.
+    - `--clean_strength` (default: `0.7`): Cleaning strength level.
+
+6. Resampling & audio splitting:
+    - `--resample_sr` (default: `0`): New sample rate (0 means keep original).
+    - `--split_audio` (default: `False`): Whether to split the audio before processing or not.
+
+7. Checking & optimization:
+    - `--checkpointing` (default: `False`): Enable/disable checkpointing to save RAM.
+
+8. Formant shifting:
+    - `--formant_shifting` (default: `False`): Whether to enable the formant shifting effect or not.
+    - `--formant_qfrency` (default: `0.8`): Frequency formant shift coefficient.
+    - `--formant_timbre` (default: `0.8`): Timbre change coefficient.
 """)
     quit()
 elif argv_is_allows[10] in argv:
+    print("""The parameters for --create_dataset:
+1. Path & dataset configuration:
+    - `--input_data` (required): Link to audio (YouTube links; can use commas `,` for multiple links).
+    - `--output_dirs` (default: `./dataset`): Output data directory.
+    - `--sample_rate` (default: `48000`): Sample rate for audio.
+
+2. Data cleaning:
+    - `--clean_dataset` (default: `False`): Whether to apply data cleaning or not.
+    - `--clean_strength` (default: `0.7`): Data cleaning strength level.
+
+3. Voice separation & effects:
+    - `--separate` (default: `True`): Whether to separate music or not.
+    - `--separator_reverb` (default: `False`): Whether to separate vocal reverb or not.
+    - `--model_name` (default: `MDXNET_Main`): Music separation model ('Main_340', 'Main_390', 'Main_406', 'Main_427', 'Main_438', 'Inst_full_292', 'Inst_HQ_1', 'Inst_HQ_2', 'Inst_HQ_3', 'Inst_HQ_4', 'Inst_HQ_5', 'Kim_Vocal_1', 'Kim_Vocal_2', 'Kim_Inst', 'Inst_187_beta', 'Inst_82_beta', 'Inst_90_beta', 'Voc_FT', 'Crowd_HQ', 'MDXNET_9482', 'Inst_1', 'Inst_2', 'Inst_3', 'MDXNET_1_9703', 'MDXNET_2_9682', 'MDXNET_3_9662', 'Inst_Main', 'MDXNET_Main', 'HT-Tuned', 'HT-Normal', 'HD_MMI', 'HT_6S', 'HP-1', 'HP-2', 'HP-Vocal-1', 'HP-Vocal-2', 'HP2-1', 'HP2-2', 'HP2-3', 'SP-2B-1', 'SP-2B-2', 'SP-3B-1', 'SP-4B-1', 'SP-4B-2', 'SP-MID-1', 'SP-MID-2').
+    - `--reverb_model` (default: `MDX-Reverb`): Reverb separation model ('MDX-Reverb', 'VR-Reverb', 'Echo-Aggressive', 'Echo-Normal').
+    - `--denoise_model` (default: `Normal`): Denoise model ('Lite', 'Normal').

+4. Audio processing configuration:
+    - `--shifts` (default: `2`): Number of predictions.
+    - `--batch_size` (default: `1`): Batch size.
+    - `--overlap` (default: `0.25`): Overlap level between segments.
+    - `--aggression` (default: `5`): Main stem extraction intensity.
+    - `--hop_length` (default: `1024`): MDX hop length during processing.
+    - `--window_size` (default: `512`): Window size.
+    - `--segments_size` (default: `256`): Audio segment size.
+    - `--post_process_threshold` (default: `0.2`): Post-processing level after music separation.
+
+5. Other audio processing configuration:
+    - `--enable_tta` (default: `False`): Inference enhancement.
+    - `--enable_denoise` (default: `False`): Denoise music separation.
+    - `--high_end_process` (default: `False`): High-end processing.
+    - `--enable_post_process` (default: `False`): Post-processing.
+
+6. Skip audio section:
+    - `--skip_seconds` (default: `False`): Whether to skip any audio seconds or not.
+    - `--skip_start_audios` (default: `0`): Time (seconds) to skip at the start of the audio.
+    - `--skip_end_audios` (default: `0`): Time (seconds) to skip at the end of the audio.
 """)
     quit()
 elif argv_is_allows[11] in argv:
+    print("""The parameters for --create_index:
+1. Model information:
+    - `--model_name` (required): Model name.
+    - `--rvc_version` (default: `v2`): Version (`v1`, `v2`).
+    - `--index_algorithm` (default: `Auto`): Index algorithm used (`Auto`, `Faiss`, `KMeans`).
 """)
     quit()
 elif argv_is_allows[12] in argv:
+    print("""The parameters for --extract:
+1. Model information:
+    - `--model_name` (required): Model name.
+    - `--rvc_version` (default: `v2`): RVC version (`v1`, `v2`).
+
+2. F0 configuration:
+    - `--f0_method` (default: `rmvpe`): F0 prediction method (`pm`, `dio`, `mangio-crepe-tiny`, `mangio-crepe-small`, `mangio-crepe-medium`, `mangio-crepe-large`, `mangio-crepe-full`, `crepe-tiny`, `crepe-small`, `crepe-medium`, `crepe-large`, `crepe-full`, `fcpe`, `fcpe-legacy`, `rmvpe`, `rmvpe-legacy`, `harvest`, `yin`, `pyin`, `swipe`).
+    - `--f0_onnx` (default: `False`): Whether to use the ONNX version of F0 or not.
+    - `--pitch_guidance` (default: `True`): Whether to use pitch guidance or not.
+    - `--f0_autotune` (default: `False`): Whether to automatically adjust F0 or not.
+    - `--f0_autotune_strength` (default: `1`): Intensity of automatic F0 correction.
+    - `--alpha` (default: `0.5`): Pitch mixing threshold for hybrid pitch estimation.
+
+3. Processing configuration:
+    - `--hop_length` (default: `128`): Hop length during processing.
+    - `--cpu_cores` (default: `2`): Number of CPU threads used.
+    - `--gpu` (default: `-`): Specify GPU to use (e.g. `0` for the first GPU, `-` to disable GPU).
+    - `--sample_rate` (required): Sample rate of the input audio.
+
+4. Embedding configuration:
+    - `--embedder_model` (default: `hubert_base`): Name of the embedding model.
+    - `--embedders_mode` (default: `fairseq`): Embedding mode (`fairseq`, `transformers`, `onnx`, `whisper`).

 5. RMS:
+    - `--rms_extract` (default: `False`): Extract additional RMS energy.
 """)
     quit()
 elif argv_is_allows[13] in argv:
+    print("""The parameters for --preprocess:
+1. Model information:
+    - `--model_name` (required): Model name.
+
+2. Data configuration:
+    - `--dataset_path` (default: `./dataset`): Path to the directory containing the data file.
+    - `--sample_rate` (required): Sample rate of the audio data.
+
+3. Processing configuration:
+    - `--cpu_cores` (default: `2`): Number of CPU threads used.
+    - `--cut_preprocess` (default: `Automatic`): Preprocess data cutting method (`Automatic`, `Simple`, `Skip`).
+    - `--process_effects` (default: `False`): Whether to apply preprocessing or not.
+    - `--clean_dataset` (default: `False`): Whether to clean the data file or not.
+    - `--clean_strength` (default: `0.7`): Strength of the data cleaning process.

+4. Other configuration:
+    - `--chunk_len` (default: `3.0`): Audio chunk length for the `Simple` method.
+    - `--overlap_len` (default: `0.3`): Overlap length between slices for the `Simple` method.
+    - `--normalization_mode` (default: `none`): Whether to apply audio normalization (`none`, `pre`, `post`).
 """)
     quit()
 elif argv_is_allows[14] in argv:
+    print("""The parameters for --separate_music:
+1. Input/output configuration:
+    - `--input_path` (required): Path to the input audio file.
+    - `--output_dirs` (default: `./audios`): Directory to save output files.
+    - `--export_format` (default: `wav`): File export format (`wav`, `mp3`, ...).
+    - `--sample_rate` (default: `44100`): Sample rate of the output audio.
+
+2. Model configuration:
+    - `--model_name` (default: `MDXNET_Main`): Music separation model ('Main_340', 'Main_390', 'Main_406', 'Main_427', 'Main_438', 'Inst_full_292', 'Inst_HQ_1', 'Inst_HQ_2', 'Inst_HQ_3', 'Inst_HQ_4', 'Inst_HQ_5', 'Kim_Vocal_1', 'Kim_Vocal_2', 'Kim_Inst', 'Inst_187_beta', 'Inst_82_beta', 'Inst_90_beta', 'Voc_FT', 'Crowd_HQ', 'MDXNET_9482', 'Inst_1', 'Inst_2', 'Inst_3', 'MDXNET_1_9703', 'MDXNET_2_9682', 'MDXNET_3_9662', 'Inst_Main', 'MDXNET_Main', 'HT-Tuned', 'HT-Normal', 'HD_MMI', 'HT_6S', 'HP-1', 'HP-2', 'HP-Vocal-1', 'HP-Vocal-2', 'HP2-1', 'HP2-2', 'HP2-3', 'SP-2B-1', 'SP-2B-2', 'SP-3B-1', 'SP-4B-1', 'SP-4B-2', 'SP-MID-1', 'SP-MID-2').
+    - `--karaoke_model` (default: `MDX-Version-1`): Karaoke separation model ('MDX-Version-1', 'MDX-Version-2', 'VR-Version-1', 'VR-Version-2').
+    - `--reverb_model` (default: `MDX-Reverb`): Reverb separation model ('MDX-Reverb', 'VR-Reverb', 'Echo-Aggressive', 'Echo-Normal').
+    - `--denoise_model` (default: `Normal`): Denoise model ('Lite', 'Normal').
+
+3. Audio processing configuration:
+    - `--shifts` (default: `2`): Number of predictions.
+    - `--batch_size` (default: `1`): Batch size.
+    - `--overlap` (default: `0.25`): Overlap level between segments.
+    - `--aggression` (default: `5`): Main stem extraction intensity.
+    - `--hop_length` (default: `1024`): MDX hop length during processing.
+    - `--window_size` (default: `512`): Window size.
+    - `--segments_size` (default: `256`): Audio segment size.
+    - `--post_process_threshold` (default: `0.2`): Post-processing level after music separation.
+
+4. Other audio processing configuration:
+    - `--enable_tta` (default: `False`): Inference enhancement.
+    - `--enable_denoise` (default: `False`): Denoise music separation.
+    - `--high_end_process` (default: `False`): High-end processing.
+    - `--enable_post_process` (default: `False`): Post-processing.
+    - `--separate_backing` (default: `False`): Separate backing vocals.
+    - `--separate_reverb` (default: `False`): Separate vocal reverb.
 """)
     quit()
 elif argv_is_allows[15] in argv:
+    print("""The parameters for --train:
+1. Model configuration:
+    - `--model_name` (required): Model name.
+    - `--rvc_version` (default: `v2`): RVC version (`v1`, `v2`).
+    - `--model_author` (optional): Author of the model.
+
+2. Save configuration:
+    - `--save_every_epoch` (required): Number of epochs between each save.
+    - `--save_only_latest` (default: `True`): Only save the latest checkpoint.
+    - `--save_every_weights` (default: `True`): Save all model weights.
+
+3. Training configuration:
+    - `--total_epoch` (default: `300`): Total number of training epochs.
+    - `--batch_size` (default: `8`): Batch size during training.
+
+4. Device configuration:
+    - `--gpu` (default: `0`): Specify GPU to use (number or `-` if not using GPU).
+    - `--cache_data_in_gpu` (default: `False`): Cache data on the GPU for acceleration.
+
+5. Advanced training configuration:
+    - `--pitch_guidance` (default: `True`): Use pitch guidance.
+    - `--g_pretrained_path` (default: ``): Path to the pre-trained G weights.
+    - `--d_pretrained_path` (default: ``): Path to the pre-trained D weights.
+    - `--vocoder` (default: `Default`): Vocoder used (`Default`, `MRF-HiFi-GAN`, `RefineGAN`).
+    - `--energy_use` (default: `False`): Use RMS energy.
+
+6. Overtraining detection:
+    - `--overtraining_detector` (default: `False`): Enable/disable overtraining detection mode.
+    - `--overtraining_threshold` (default: `50`): Threshold to determine overtraining.
+
+7. Data processing:
+    - `--cleanup` (default: `False`): Clean up old training files to restart training from scratch.
+
+8. Optimization:
+    - `--checkpointing` (default: `False`): Enable/disable checkpointing to save RAM.
+    - `--deterministic` (default: `False`): When enabled, uses deterministic algorithms so that runs with the same input data produce the same results.
+    - `--benchmark` (default: `False`): When enabled, benchmarks several algorithms and selects the optimal one for the specific hardware and input size.
+    - `--optimizer` (default: `AdamW`): Optimizer used (`AdamW`, `RAdam`, `AnyPrecisionAdamW`).
+    - `--multiscale_mel_loss` (default: `False`): Compares the Mel spectra of real and generated audio at multiple scales; helps the model learn timbre detail, brightness, and frequency structure, improving the quality and naturalness of the output voice.

+9. Reference set:
+    - `--use_custom_reference` (default: `False`): Whether to customize the reference set or not.
+    - `--reference_path` (default: `False`): Path to the reference set.
 """)
     quit()
| 350 |
+
elif argv_is_allows in argv:
|
| 351 |
+
print("""The parameters for --create_reference:
|
| 352 |
+
1. File path:
|
| 353 |
+
- `--audio_path` (required): Path to the input audio file.
|
| 354 |
+
- `--reference_name` (default: `reference`): Path to save the output reference set.
|
| 355 |
|
| 356 |
+
2. Reference set configuration:
|
| 357 |
+
- `--pitch_guidance` (default: `True`): Use pitch guidance.
|
| 358 |
+
- `--energy_use` (default: `False`): Use rms energy.
|
| 359 |
+
- `--version` (default: `v2`): RVC Version (`v1`, `v2`).
|
| 360 |
+
|
| 361 |
+
3. Embedding configuration:
|
| 362 |
+
- `--embedder_model` (default: `hubert_base`): Name of the embedding model.
|
| 363 |
+
- `--embedders_mode` (default: `fairseq`): Embedding mode (`fairseq`, `transformers`, `onnx`, `whisper`).
|
| 364 |
|
| 365 |
+
4. F0 configuration:
|
| 366 |
+
- `--f0_method` (default: `rmvpe`): F0 prediction method (`pm`, `dio`, `mangio-crepe-tiny`, `mangio-crepe-small`, `mangio-crepe-medium`, `mangio-crepe-large`, `mangio-crepe-full`, `crepe-tiny`, `crepe-small`, `crepe-medium`, `crepe-large`, `crepe-full`, `fcpe`, `fcpe-legacy`, `rmvpe`, `rmvpe-legacy`, `harvest`, `yin`, `pyin`, `swipe`).
|
| 367 |
+
- `--f0_onnx` (default: `False`): Whether to use the ONNX version of F0 or not.
|
| 368 |
+
- `--f0_up_key` (default: `0`): Adjust pitch.
|
| 369 |
+
- `--filter_radius` (default: `3`): Smoothness of the F0 curve.
|
| 370 |
+
- `--f0_autotune` (default: `False`): Whether to automatically adjust F0 or not.
|
| 371 |
+
- `--f0_autotune_strength` (default: `1`): Intensity of automatic F0 correction.
|
| 372 |
+
- `--f0_file` (default: ``): Path to an existing F0 file.
|
| 373 |
+
- `--proposal_pitch` (default: `False`): Propose pitch instead of manual adjustment.
|
| 374 |
+
- `--proposal_pitch_threshold` (default: `0.0`): Frequency threshold for pitch estimation.
|
| 375 |
+
- `--alpha` (default: `0.5`): Pitch mixing threshold for hybrid pitch estimation.
|
| 376 |
""")
|
| 377 |
quit()
|
| 378 |
+elif argv_is_allows in argv:
+    print("""Usage:
+1. `--help_audio_effects`: Help with adding audio effects.
+2. `--help_convert`: Help with audio conversion.
+3. `--help_create_dataset`: Help with creating training data.
+4. `--help_create_index`: Help with index creation.
+5. `--help_extract`: Help with extracting training data.
+6. `--help_preprocess`: Help with data preprocessing.
+7. `--help_separate_music`: Help with music separation.
+8. `--help_train`: Help with model training.
+9. `--help_create_reference`: Help with creating a reference set.
 """)
 quit()

+
 if __name__ == "__main__":
     import torch.multiprocessing as mp

+    if "--train" in argv:
+        mp.set_start_method("spawn")
+    if "--preprocess" in argv or "--extract" in argv:
+        mp.set_start_method("spawn", force=True)

+    main()
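The `__main__` guard above switches multiprocessing to the `spawn` start method, passing `force=True` on the preprocess/extract paths. The difference matters because `multiprocessing.set_start_method` may only be called once per process unless forced; a minimal sketch of that stdlib behavior:

```python
import multiprocessing as mp

mp.set_start_method("spawn")              # first call succeeds
already_set = False
try:
    mp.set_start_method("spawn")          # a second call raises RuntimeError...
except RuntimeError:
    already_set = True
mp.set_start_method("spawn", force=True)  # ...unless force=True resets the context
```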
main/app/variables.py
CHANGED
@@ -16,17 +16,28 @@ logger.propagate = False
 config = Config()
 python = sys.executable
-translations = config.translations
 configs_json = os.path.join("main", "configs", "config.json")
 configs = json.load(open(configs_json, "r"))

 if not logger.hasHandlers():
     console_handler = logging.StreamHandler()
-    console_formatter = logging.Formatter(
     console_handler.setFormatter(console_formatter)
     console_handler.setLevel(logging.DEBUG if config.debug_mode else logging.INFO)
-    file_handler = logging.handlers.RotatingFileHandler(
     file_handler.setFormatter(file_formatter)
     file_handler.setLevel(logging.DEBUG)
     logger.addHandler(console_handler)
@@ -43,35 +54,224 @@ if config.device in ["cpu", "mps", "ocl:0"] and configs.get("fp16", False):
 models = {}
 model_options = {}

-method_f0 = [
 embedders_mode = ["fairseq", "onnx", "transformers", "spin", "whisper"]
-embedders_model = [
 spin_model = ["spin-v1", "spin-v2"]
-whisper_model = [
-paths_for_files = sorted(
-pretrainedD = [
-presets_file = sorted(
-file_types = [
-language = configs.get("language", "
-theme =
-edgetts = configs.get("edge_tts"
-google_tts_voice = configs.get("google_tts_voice", ["
 vr_models = configs.get("vr_models", "")
 demucs_models = configs.get("demucs_models", "")
@@ -79,10 +279,26 @@ mdx_models = configs.get("mdx_models", "")
 karaoke_models = configs.get("karaoke_models", "")
 reverb_models = configs.get("reverb_models", "")
 denoise_models = configs.get("denoise_models", "")
-uvr_model =
-font = configs.get(
 csv_path = configs["csv_path"]

 if "--allow_all_disk" in sys.argv and sys.platform == "win32":
@@ -92,19 +308,36 @@ if "--allow_all_disk" in sys.argv and sys.platform == "win32":
     os.system(f"{python} -m pip install pywin32")
     import win32api

-    allow_disk = win32api.GetLogicalDriveStrings().split(
-else:

 try:
-    if os.path.exists(csv_path):
     else:
-        reader = list(
         writer.writeheader()
         writer.writerows(reader)

     for row in reader:
-        filename = row[
         url = None

         for value in row.values():
@@ -112,6 +345,7 @@ try:
             url = value
             break

-        if url:
 except:
-    pass
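The first hunk above replaces the one-line logging setup with a `RotatingFileHandler` capped at 5 MB per file and three backups. Rotation can be checked in isolation; the 1 KB cap below is only to force rollover quickly in a sketch, not the project's setting:

```python
import logging
import logging.handlers
import os
import tempfile

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "app.log")

handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=1024, backupCount=3, encoding="utf-8"
)
logger = logging.getLogger("rotation-demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

for i in range(200):  # ~9 KB of output forces several rollovers
    logger.debug("message %03d %s", i, "x" * 32)
handler.close()

backups = sorted(f for f in os.listdir(log_dir) if f.startswith("app.log"))
print(backups)  # app.log plus at most app.log.1..3; older records are discarded
```

With `backupCount=3` the oldest file is deleted on each rollover, so disk use stays bounded at roughly `maxBytes * (backupCount + 1)`.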
config = Config()
python = sys.executable
translations = config.translations
configs_json = os.path.join("main", "configs", "config.json")
configs = json.load(open(configs_json, "r"))

if not logger.hasHandlers():
    console_handler = logging.StreamHandler()
    console_formatter = logging.Formatter(
        fmt="\n%(asctime)s.%(msecs)03d | %(levelname)s | %(module)s | %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
    )
    console_handler.setFormatter(console_formatter)
    console_handler.setLevel(logging.DEBUG if config.debug_mode else logging.INFO)
    file_handler = logging.handlers.RotatingFileHandler(
        os.path.join(configs["logs_path"], "app.log"),
        maxBytes=5 * 1024 * 1024,
        backupCount=3,
        encoding="utf-8",
    )
    file_formatter = logging.Formatter(
        fmt="\n%(asctime)s.%(msecs)03d | %(levelname)s | %(module)s | %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
    )
    file_handler.setFormatter(file_formatter)
    file_handler.setLevel(logging.DEBUG)
    logger.addHandler(console_handler)

models = {}
model_options = {}

method_f0 = ["mangio-crepe-full", "crepe-full", "fcpe", "rmvpe", "harvest", "pyin", "hybrid"]
method_f0_full = [
    "pm-ac", "pm-cc", "pm-shs", "dio",
    "mangio-crepe-tiny", "mangio-crepe-small", "mangio-crepe-medium", "mangio-crepe-large", "mangio-crepe-full",
    "crepe-tiny", "crepe-small", "crepe-medium", "crepe-large", "crepe-full",
    "fcpe", "fcpe-legacy", "fcpe-previous",
    "rmvpe", "rmvpe-clipping", "rmvpe-medfilt", "rmvpe-clipping-medfilt",
    "harvest", "yin", "pyin", "swipe", "piptrack", "penn", "mangio-penn",
    "djcm", "djcm-clipping", "djcm-medfilt", "djcm-clipping-medfilt",
    "swift", "pesto", "hybrid",
]
hybrid_f0_method = [
    "hybrid[pm+dio]", "hybrid[pm+crepe-tiny]", "hybrid[pm+crepe]", "hybrid[pm+fcpe]",
    "hybrid[pm+rmvpe]", "hybrid[pm+harvest]", "hybrid[pm+yin]",
    "hybrid[dio+crepe-tiny]", "hybrid[dio+crepe]", "hybrid[dio+fcpe]",
    "hybrid[dio+rmvpe]", "hybrid[dio+harvest]", "hybrid[dio+yin]",
    "hybrid[crepe-tiny+crepe]", "hybrid[crepe-tiny+fcpe]", "hybrid[crepe-tiny+rmvpe]", "hybrid[crepe-tiny+harvest]",
    "hybrid[crepe+fcpe]", "hybrid[crepe+rmvpe]", "hybrid[crepe+harvest]", "hybrid[crepe+yin]",
    "hybrid[fcpe+rmvpe]", "hybrid[fcpe+harvest]", "hybrid[fcpe+yin]",
    "hybrid[rmvpe+harvest]", "hybrid[rmvpe+yin]", "hybrid[harvest+yin]",
]

embedders_mode = ["fairseq", "onnx", "transformers", "spin", "whisper"]
embedders_model = [
    "contentvec_base", "Crusty", "hubert_base", "vietnamese_hubert_base",
    "japanese_hubert_base", "korean_hubert_base", "chinese_hubert_base",
    "portuguese_hubert_base", "custom",
]
spin_model = ["spin-v1", "spin-v2"]
whisper_model = [
    "tiny", "tiny.en", "base", "base.en", "small", "small.en",
    "medium", "medium.en", "large-v1", "large-v2", "large-v3", "large-v3-turbo",
]

paths_for_files = sorted(
    [
        os.path.abspath(os.path.join(root, f))
        for root, _, files in os.walk(configs["audios_path"])
        for f in files
        if os.path.splitext(f)[1].lower()
        in (".wav", ".mp3", ".flac", ".ogg", ".opus", ".m4a", ".mp4", ".aac", ".alac", ".wma", ".aiff", ".webm", ".ac3")
    ]
)
reference_list = sorted(
    [
        name
        for name in os.listdir(configs["reference_path"])
        if os.path.exists(os.path.join(configs["reference_path"], name))
        and os.path.isdir(os.path.join(configs["reference_path"], name))
    ]
)
model_name = sorted(
    model
    for model in os.listdir(configs["weights_path"])
    if model.endswith((".pth", ".onnx")) and not model.startswith("G_") and not model.startswith("D_")
)
index_path = sorted(
    [
        os.path.join(root, name)
        for root, _, files in os.walk(configs["logs_path"], topdown=False)
        for name in files
        if name.endswith(".index") and "trained" not in name
    ]
)

pretrainedD = [model for model in os.listdir(configs["pretrained_custom_path"]) if model.endswith(".pth") and "D" in model]
pretrainedG = [model for model in os.listdir(configs["pretrained_custom_path"]) if model.endswith(".pth") and "G" in model]

presets_file = sorted(f for f in os.listdir(configs["presets_path"]) if f.endswith(".conversion.json"))
audio_effect_presets_file = sorted(f for f in os.listdir(configs["presets_path"]) if f.endswith(".effect.json"))
f0_file = sorted(
    [
        os.path.abspath(os.path.join(root, f))
        for root, _, files in os.walk(configs["f0_path"])
        for f in files
        if f.endswith(".txt")
    ]
)

file_types = [".wav", ".mp3", ".flac", ".ogg", ".opus", ".m4a", ".mp4", ".aac", ".alac", ".wma", ".aiff", ".webm", ".ac3"]
export_format_choices = ["wav", "mp3", "flac", "ogg", "opus", "m4a", "mp4", "aac", "alac", "wma", "aiff", "webm", "ac3"]

language = configs.get("language", "en-US")
theme = os.path.join("main", "configs", "theme.json")
# configs.get("theme", "lainlives/dark")

edgetts = configs.get("edge_tts")
google_tts_voice = configs.get("google_tts_voice", ["en"])

vr_models = configs.get("vr_models", "")
demucs_models = configs.get("demucs_models", "")
mdx_models = configs.get("mdx_models", "")
karaoke_models = configs.get("karaoke_models", "")
reverb_models = configs.get("reverb_models", "")
denoise_models = configs.get("denoise_models", "")
uvr_model = list(demucs_models.keys()) + list(vr_models.keys()) + list(mdx_models.keys())

font = configs.get("font", "https://fonts.googleapis.com/css2?family=Courgette&display=swap")
sample_rate_choice = [8000, 11025, 12000, 16000, 22050, 24000, 32000, 44100, 48000, 96000, 192000]
csv_path = configs["csv_path"]

if "--allow_all_disk" in sys.argv and sys.platform == "win32":

    os.system(f"{python} -m pip install pywin32")
    import win32api

    allow_disk = win32api.GetLogicalDriveStrings().split("\x00")[:-1]
else:
    allow_disk = []

try:
    if os.path.exists(csv_path):
        reader = list(csv.DictReader(open(csv_path, newline="", encoding="utf-8")))
    else:
        reader = list(
            csv.DictReader(
                [
                    line.decode("utf-8")
                    for line in urllib.request.urlopen(
                        codecs.decode(
                            "uggcf://qbpf.tbbtyr.pbz/fcernqfurrgf/q/1gNHnDeRULtEfz1Yieaw14USUQjWJy0Oq9k0DrCrjApb/rkcbeg?sbezng=pfi&tvq=1977693859",
                            "rot13",
                        )
                    ).readlines()
                ]
            )
        )
        writer = csv.DictWriter(
            open(csv_path, mode="w", newline="", encoding="utf-8"),
            fieldnames=reader[0].keys(),
        )
        writer.writeheader()
        writer.writerows(reader)

    for row in reader:
        filename = row["Filename"]
        url = None

        for value in row.values():

            url = value
            break

        if url:
            models[filename] = url
except:
    pass
main/configs/theme.json
ADDED
@@ -0,0 +1 @@
{"theme": {"_font": ["Optima", "Candara", "Noto Sans", "source-sans-pro", "sans-serif"], "_font_css": ["\n@font-face {\n font-family: 'IBM Plex Mono';\n src: url('static/fonts/IBMPlexMono/IBMPlexMono-Regular.woff2') format('woff2');\n font-weight: Regular;\n font-style: normal;\n}\n\n\n@font-face {\n font-family: 'IBM Plex Mono';\n src: url('static/fonts/IBMPlexMono/IBMPlexMono-Bold.woff2') format('woff2');\n font-weight: Bold;\n font-style: normal;\n}\n"], "_font_mono": [{"__gradio_font__": true, "name": "IBM Plex Mono", "class": "local", "weights": [400, 700]}, "ui-monospace", "Consolas", "monospace"], "_stylesheets": [], "accordion_text_color": "*body_text_color", "accordion_text_color_dark": "*body_text_color", "background_fill_primary": "*neutral_700", "background_fill_primary_dark": "*neutral_950", "background_fill_secondary": "*secondary_800", "background_fill_secondary_dark": "*neutral_900", "block_background_fill": "*secondary_800", "block_background_fill_dark": "*secondary_800", "block_border_color": "*secondary_600", "block_border_color_dark": "*secondary_600", "block_border_width": "1px", "block_border_width_dark": "1px", "block_info_text_color": "*body_text_color_subdued", "block_info_text_color_dark": "*body_text_color_subdued", "block_info_text_size": "*text_sm", "block_info_text_weight": "400", "block_label_background_fill": "*secondary_700", "block_label_background_fill_dark": "*secondary_700", "block_label_border_color": "*secondary_600", "block_label_border_color_dark": "*secondary_600", "block_label_border_width": "1px", "block_label_margin": "0", "block_label_padding": "*spacing_sm *spacing_lg", "block_label_radius": "calc(*radius_sm - 1px) 0 calc(*radius_sm - 1px) 0", "block_label_right_radius": "0 calc(*radius_sm - 1px) 0 calc(*radius_sm - 1px)", "block_label_shadow": "*block_shadow", "block_label_text_color": "*neutral_200", "block_label_text_color_dark": "*neutral_200", "block_label_text_size": "*text_sm", "block_label_text_weight": "600", 
"block_padding": "*spacing_xl calc(*spacing_xl + 2px)", "block_radius": "*radius_sm", "block_shadow": "none", "block_title_background_fill": "none", "block_title_border_color": "none", "block_title_border_width": "0px", "block_title_padding": "0", "block_title_radius": "none", "block_title_text_color": "*neutral_200", "block_title_text_color_dark": "*neutral_200", "block_title_text_size": "*text_md", "block_title_text_weight": "600", "body_background_fill": "*background_fill_primary", "body_background_fill_dark": "*secondary_800", "body_text_color": "*neutral_800", "body_text_color_dark": "*neutral_100", "body_text_color_subdued": "*neutral_400", "body_text_color_subdued_dark": "*neutral_400", "body_text_size": "*text_md", "body_text_weight": "400", "border_color_accent": "*neutral_600", "border_color_accent_dark": "*neutral_600", "border_color_accent_subdued": "*border_color_accent", "border_color_accent_subdued_dark": "*border_color_accent", "border_color_primary": "*secondary_600", "border_color_primary_dark": "*secondary_600", "button_border_width": "0px", "button_border_width_dark": "0px", "button_cancel_background_fill": "*button_secondary_background_fill", "button_cancel_background_fill_dark": "*button_secondary_background_fill", "button_cancel_background_fill_hover": "*button_secondary_background_fill_hover", "button_cancel_background_fill_hover_dark": "*button_secondary_background_fill_hover", "button_cancel_border_color": "*button_secondary_border_color", "button_cancel_border_color_dark": "*button_secondary_border_color", "button_cancel_border_color_hover": "*button_secondary_border_color_hover", "button_cancel_border_color_hover_dark": "*button_secondary_border_color_hover", "button_cancel_shadow": "*button_secondary_shadow", "button_cancel_shadow_active": "*button_secondary_shadow_active", "button_cancel_shadow_active_dark": "*button_secondary_shadow_active", "button_cancel_shadow_dark": "*button_secondary_shadow", "button_cancel_shadow_hover": 
"*button_secondary_shadow_hover", "button_cancel_shadow_hover_dark": "*button_secondary_shadow_hover", "button_cancel_text_color": "*button_secondary_text_color", "button_cancel_text_color_dark": "*button_secondary_text_color", "button_cancel_text_color_hover": "white", "button_cancel_text_color_hover_dark": "white", "button_large_padding": "*spacing_lg calc(2 * *spacing_lg)", "button_large_radius": "*radius_md", "button_large_text_size": "*text_lg", "button_large_text_weight": "500", "button_medium_padding": "*spacing_md calc(2 * *spacing_md)", "button_medium_radius": "*radius_md", "button_medium_text_size": "*text_md", "button_medium_text_weight": "500", "button_primary_background_fill": "linear-gradient(30deg, *primary_800 0%, *primary_950 50%)", "button_primary_background_fill_dark": "linear-gradient(30deg, *primary_800 0%, *primary_950 50%)", "button_primary_background_fill_hover": "linear-gradient(90deg, *primary_950 0%, *primary_700 60%)", "button_primary_background_fill_hover_dark": "linear-gradient(90deg, *primary_950 0%, *primary_700 60%)", "button_primary_border_color": "*primary_600", "button_primary_border_color_dark": "*primary_600", "button_primary_border_color_hover": "*primary_500", "button_primary_border_color_hover_dark": "*primary_500", "button_primary_shadow": "*button_primary_shadow", "button_primary_shadow_active": "*button_primary_shadow", "button_primary_shadow_active_dark": "*button_primary_shadow", "button_primary_shadow_hover": "*button_primary_shadow", "button_primary_shadow_hover_dark": "*button_primary_shadow", "button_primary_text_color": "white", "button_primary_text_color_dark": "white", "button_primary_text_color_hover": "*code_background_fill", "button_primary_text_color_hover_dark": "*code_background_fill", "button_secondary_background_fill": "linear-gradient(100deg, *primary_950 0%, *primary_600 70%)", "button_secondary_background_fill_dark": "linear-gradient(100deg, *primary_950 0%, *primary_600 70%)", 
"button_secondary_background_fill_hover": "linear-gradient(90deg, *primary_700 0%, *primary_950 60%)", "button_secondary_background_fill_hover_dark": "linear-gradient(90deg, *primary_700 0%, *primary_950 60%)", "button_secondary_border_color": "*neutral_600", "button_secondary_border_color_dark": "*neutral_600", "button_secondary_border_color_hover": "*neutral_500", "button_secondary_border_color_hover_dark": "*neutral_500", "button_secondary_shadow": "*button_primary_shadow", "button_secondary_shadow_active": "*shadow_inset", "button_secondary_shadow_active_dark": "*button_secondary_shadow", "button_secondary_shadow_hover": "*button_secondary_shadow", "button_secondary_shadow_hover_dark": "*button_secondary_shadow", "button_secondary_text_color": "white", "button_secondary_text_color_dark": "white", "button_secondary_text_color_hover": "*table_even_background_fill", "button_secondary_text_color_hover_dark": "*table_even_background_fill", "button_small_padding": "*spacing_sm calc(1.5 * *spacing_sm)", "button_small_radius": "*radius_md", "button_small_text_size": "*text_sm", "button_small_text_weight": "400", "button_transform_active": "none", "button_transform_hover": "none", "button_transition": "all 0.5s ease", "chatbot_text_size": "*text_lg", "checkbox_background_color": "*secondary_400", "checkbox_background_color_dark": "*secondary_400", "checkbox_background_color_focus": "*checkbox_background_color", "checkbox_background_color_focus_dark": "*checkbox_background_color", "checkbox_background_color_hover": "*checkbox_background_color", "checkbox_background_color_hover_dark": "*checkbox_background_color", "checkbox_background_color_selected": "*color_accent", "checkbox_background_color_selected_dark": "*color_accent", "checkbox_border_color": "*neutral_700", "checkbox_border_color_dark": "*neutral_700", "checkbox_border_color_focus": "*color_accent", "checkbox_border_color_focus_dark": "*color_accent", "checkbox_border_color_hover": "*neutral_600", 
"checkbox_border_color_hover_dark": "*neutral_600", "checkbox_border_color_selected": "*color_accent", "checkbox_border_color_selected_dark": "*color_accent", "checkbox_border_radius": "*radius_sm", "checkbox_border_width": "0px", "checkbox_border_width_dark": "0px", "checkbox_check": "url(\"data:image/svg+xml,%3csvg viewBox='0 0 16 16' fill='white' xmlns='http://www.w3.org/2000/svg'%3e%3cpath d='M12.207 4.793a1 1 0 010 1.414l-5 5a1 1 0 01-1.414 0l-2-2a1 1 0 011.414-1.414L6.5 9.086l4.293-4.293a1 1 0 011.414 0z'/%3e%3c/svg%3e\")", "checkbox_label_background_fill": "*button_secondary_background_fill", "checkbox_label_background_fill_dark": "*button_secondary_background_fill", "checkbox_label_background_fill_hover": "*button_secondary_background_fill_hover", "checkbox_label_background_fill_hover_dark": "*button_secondary_background_fill_hover", "checkbox_label_background_fill_selected": "*checkbox_label_background_fill", "checkbox_label_background_fill_selected_dark": "*checkbox_label_background_fill", "checkbox_label_border_color": "*secondary_700", "checkbox_label_border_color_dark": "*secondary_700", "checkbox_label_border_color_hover": "*checkbox_label_border_color", "checkbox_label_border_color_hover_dark": "*checkbox_label_border_color", "checkbox_label_border_color_selected": "*checkbox_label_border_color", "checkbox_label_border_color_selected_dark": "*checkbox_label_border_color", "checkbox_label_border_width": "1px", "checkbox_label_border_width_dark": "1px", "checkbox_label_gap": "*form_gap_width", "checkbox_label_padding": "*spacing_md calc(2 * *spacing_md)", "checkbox_label_shadow": "none", "checkbox_label_text_color": "*body_text_color", "checkbox_label_text_color_dark": "*body_text_color", "checkbox_label_text_color_selected": "*checkbox_label_text_color", "checkbox_label_text_color_selected_dark": "*checkbox_label_text_color", "checkbox_label_text_size": "*text_md", "checkbox_label_text_weight": "400", "checkbox_shadow": "*input_shadow", 
"code_background_fill": "*neutral_800", "code_background_fill_dark": "*neutral_800", "color_accent": "*primary_500", "color_accent_soft": "*neutral_700", "color_accent_soft_dark": "*neutral_700", "container_radius": "*radius_sm", "embed_radius": "*radius_sm", "error_background_fill": "*background_fill_primary", "error_background_fill_dark": "*background_fill_primary", "error_border_color": "#ef4444", "error_border_color_dark": "#ef4444", "error_border_width": "1px", "error_icon_color": "#ef4444", "error_icon_color_dark": "#ef4444", "error_text_color": "#fef2f2", "error_text_color_dark": "#fef2f2", "font": "Optima, Candara, Noto Sans, source-sans-pro, sans-serif", "font_mono": "'IBM Plex Mono', ui-monospace, Consolas, monospace", "form_gap_width": "0px", "input_background_fill": "*secondary_600", "input_background_fill_dark": "*secondary_600", "input_background_fill_focus": "*input_background_fill", "input_background_fill_hover": "*input_background_fill", "input_background_fill_hover_dark": "*input_background_fill", "input_border_color": "*secondary_600", "input_border_color_dark": "*secondary_600", "input_border_color_focus": "*secondary_500", "input_border_color_focus_dark": "*secondary_500", "input_border_color_hover": "*input_border_color", "input_border_color_hover_dark": "*input_border_color", "input_border_width": "1px", "input_padding": "*spacing_xl", "input_placeholder_color": "*neutral_500", "input_placeholder_color_dark": "*neutral_500", "input_radius": "*radius_xxs", "input_shadow": "none", "input_shadow_focus": "*input_shadow", "input_text_size": "*text_md", "input_text_weight": "400", "layout_gap": "*spacing_xxl", "link_text_color": "*secondary_500", "link_text_color_active": "*secondary_500", "link_text_color_active_dark": "*secondary_500", "link_text_color_dark": "*secondary_500", "link_text_color_hover": "*secondary_400", "link_text_color_hover_dark": "*secondary_400", "link_text_color_visited": "*secondary_600", "link_text_color_visited_dark": 
"*secondary_600", "loader_color": "*color_accent", "name": "glass", "neutral_100": "#f3e8ff", "neutral_200": "#e9d5ff", "neutral_300": "#d8b4fe", "neutral_400": "#c084fc", "neutral_50": "#faf5ff", "neutral_500": "#a855f7", "neutral_600": "rgba(83.78266724809674, 29.540070278324272, 132.9400207519531, 1)", "neutral_700": "rgba(48.28126126334004, 17.30792685680411, 76.3866943359375, 1)", "neutral_800": "rgba(46.03751121625044, 13.894996526550633, 72.53336791992187, 1)", "neutral_900": "#2e0e49", "neutral_950": "#2e0e49", "panel_background_fill": "*background_fill_secondary", "panel_background_fill_dark": "*background_fill_secondary", "panel_border_color": "*border_color_primary", "panel_border_color_dark": "*border_color_primary", "panel_border_width": "1px", "primary_100": "#f3e8ff", "primary_200": "#e9d5ff", "primary_300": "#d8b4fe", "primary_400": "#c084fc", "primary_50": "#faf5ff", "primary_500": "#a855f7", "primary_600": "#9333ea", "primary_700": "#7e22ce", "primary_800": "#6b21a8", "primary_900": "#581c87", "primary_950": "#4c1a73", "prose_header_text_weight": "600", "prose_text_size": "*text_md", "prose_text_weight": "400", "radio_circle": "url(\"data:image/svg+xml,%3csvg viewBox='0 0 16 16' fill='white' xmlns='http://www.w3.org/2000/svg'%3e%3ccircle cx='8' cy='8' r='3'/%3e%3c/svg%3e\")", "radius_lg": "0px", "radius_md": "0px", "radius_sm": "0px", "radius_xl": "0px", "radius_xs": "0px", "radius_xxl": "0px", "radius_xxs": "0px", "secondary_100": "#f3f4f6", "secondary_200": "#e5e7eb", "secondary_300": "#d1d5db", "secondary_400": "#9ca3af", "secondary_50": "#f9fafb", "secondary_500": "#6b7280", "secondary_600": "#4b5563", "secondary_700": "#374151", "secondary_800": "rgba(28.422987236857917, 2.1975645867375784, 39.326663208007815, 1)", "secondary_900": "#1c0227", "secondary_950": "#1c0227", "section_header_text_size": "*text_md", "section_header_text_weight": "400", "shadow_drop": "rgba(0,0,0,0.05) 0px 1px 2px 0px", "shadow_drop_lg": "0 1px 3px 0 rgb(0 0 0 / 
0.1), 0 1px 2px -1px rgb(0 0 0 / 0.1)", "shadow_inset": "rgba(0,0,0,0.05) 0px 1px 2px 0px inset", "shadow_spread": "3px", "shadow_spread_dark": "0px", "slider_color": "*color_accent", "spacing_lg": "6px", "spacing_md": "4px", "spacing_sm": "2px", "spacing_xl": "9px", "spacing_xs": "1px", "spacing_xxl": "12px", "spacing_xxs": "1px", "stat_background_fill": "*primary_500", "stat_background_fill_dark": "*primary_500", "table_border_color": "*neutral_700", "table_border_color_dark": "*neutral_700", "table_even_background_fill": "*neutral_700", "table_even_background_fill_dark": "*neutral_700", "table_odd_background_fill": "*neutral_700", "table_odd_background_fill_dark": "*neutral_700", "table_radius": "*radius_sm", "table_row_focus": "*color_accent_soft", "table_row_focus_dark": "*color_accent_soft", "table_text_color": "*body_text_color", "table_text_color_dark": "*body_text_color", "text_lg": "16px", "text_md": "13px", "text_sm": "11px", "text_xl": "20px", "text_xs": "9px", "text_xxl": "24px", "text_xxs": "8px"}}
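Most values in the theme above reference other theme variables with a `*` prefix (for example `"color_accent": "*primary_500"`), and references may chain. A minimal resolver sketch for that convention; `resolve` is illustrative, not part of the project, and it only handles values that are a single `*name` reference (not composites like `"*spacing_sm *spacing_lg"`):

```python
def resolve(theme: dict, key: str) -> str:
    """Follow '*name' references until a literal value is reached."""
    value = theme[key]
    while isinstance(value, str) and value.startswith("*"):
        value = theme[value[1:]]
    return value

theme = {
    "primary_500": "#a855f7",
    "color_accent": "*primary_500",
    "slider_color": "*color_accent",
}
print(resolve(theme, "slider_color"))  # #a855f7
```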