---
language:
- en
- multilingual
tags:
- text-to-speech
- speech-synthesis
- pytorch
- styletts2
- neural-tts
- voice-cloning
pipeline_tag: text-to-speech
library_name: pytorch
license: mit
datasets:
- LibriTTS
metrics:
- naturalness
- similarity
widget:
- text: "Hello, this is a sample of StyleTTS2 speech synthesis."
  example_title: "English Sample"
- text: "StyleTTS2 can synthesize high-quality speech with style control."
  example_title: "Style Control Sample"
---

# StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training

StyleTTS 2 is a text-to-speech model that leverages style diffusion and adversarial training with large speech language models (SLMs) to achieve human-level text-to-speech synthesis. It builds upon the original StyleTTS with significant improvements in naturalness and speaker similarity.

## Model Description

- **Model Type**: Neural Text-to-Speech (TTS)
- **Language(s)**: English (primary), with support for 18+ languages
- **License**: MIT
- **Paper**: [StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training](https://arxiv.org/abs/2306.07691)
- **Sample Rate**: 24,000 Hz
- **Architecture**: Style diffusion with adversarial training

## Features

- **High-Quality Synthesis**: Achieves human-level naturalness in speech synthesis
- **Style Control**: Advanced style transfer and voice cloning capabilities
- **Multi-Language Support**: Primary English model with support for 18+ additional languages
- **Voice Cloning**: Can clone voices from reference audio samples
- **Diffusion-Based**: Uses diffusion models for high-quality audio generation

## Usage

This model is designed for text-to-speech synthesis with the following capabilities:

1. **Multi-Voice Synthesis**: Generate speech using preset voice styles
2. **Voice Cloning**: Clone voices from reference audio samples
3. **Style Control**: Fine-tune synthesis parameters for different styles
4. **Multi-Language**: Support for various languages with English-accented pronunciation
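
At inference time, style control amounts to linearly interpolating between a style vector extracted from reference audio and one sampled by the diffusion model (roughly, `alpha` weights the sampled timbre component and `beta` the sampled prosody component in the reference implementation). A minimal sketch of that blending idea, with toy vectors and a hypothetical helper name standing in for the model's real embeddings and API:

```python
# Illustrative sketch of alpha/beta style blending: the final style is a
# linear interpolation between a diffusion-sampled style vector and one
# extracted from reference audio. The helper and toy vectors below are
# hypothetical, not part of the StyleTTS2 API.

def blend_style(sampled, reference, weight):
    """Interpolate element-wise: weight=1.0 keeps the sampled style,
    weight=0.0 keeps the reference style."""
    return [weight * s + (1.0 - weight) * r for s, r in zip(sampled, reference)]

# Toy 4-dimensional vectors standing in for the real style embeddings.
sampled_style = [1.0, 1.0, 1.0, 1.0]   # from the style diffusion sampler
reference_style = [0.2, 0.4, 0.6, 0.8]  # from a reference audio clip

# A small alpha keeps the result close to the reference speaker's timbre;
# beta would play the same role for the prosodic style component.
alpha = 0.3
blended = blend_style(sampled_style, reference_style, alpha)
print(blended)
```

Voice cloning then corresponds to pushing `alpha` and `beta` toward the reference endpoint, while larger values give the diffusion sampler more freedom.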

### Parameters

- `alpha` (0.0-1.0): Style blending factor (default: 0.3)
- `beta` (0.0-1.0): Style mixing factor (default: 0.7)
- `diffusion_steps` (3-20): Number of diffusion steps; higher values trade speed for quality (default: 5)
- `embedding_scale` (1.0-10.0): Embedding scale factor (default: 1.0)
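
The defaults and ranges above can be collected into a small validation helper before they are passed to an inference call. This is a hypothetical convenience sketch, not part of the StyleTTS2 codebase:

```python
# Hypothetical helper that applies the documented defaults and enforces the
# documented ranges for the four inference parameters; a convenience sketch,
# not part of the StyleTTS2 codebase.

DEFAULTS = {
    "alpha": 0.3,            # style blending factor, 0.0-1.0
    "beta": 0.7,             # style mixing factor, 0.0-1.0
    "diffusion_steps": 5,    # diffusion steps, 3-20
    "embedding_scale": 1.0,  # embedding scale factor, 1.0-10.0
}

RANGES = {
    "alpha": (0.0, 1.0),
    "beta": (0.0, 1.0),
    "diffusion_steps": (3, 20),
    "embedding_scale": (1.0, 10.0),
}

def make_config(**overrides):
    """Merge overrides into the defaults, raising on out-of-range values."""
    config = dict(DEFAULTS)
    for name, value in overrides.items():
        if name not in RANGES:
            raise KeyError(f"unknown parameter: {name}")
        lo, hi = RANGES[name]
        if not lo <= value <= hi:
            raise ValueError(f"{name}={value} is outside [{lo}, {hi}]")
        config[name] = value
    return config

# Example: more diffusion steps for higher quality, stronger text adherence.
cfg = make_config(diffusion_steps=10, embedding_scale=2.0)
print(cfg)
```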

## Training Data

- **Primary Dataset**: LibriTTS
- **Languages**: English (primary) plus 18 additional languages
- **Training Approach**: Style diffusion with adversarial training using large speech language models

## Performance

StyleTTS 2 achieves human-level performance in:

- **Naturalness**: Comparable to human speech in listening tests
- **Similarity**: High-fidelity voice cloning and style transfer
- **Quality**: Superior audio quality compared to previous TTS models

## Limitations

- **Compute Requirements**: Requires significant computational resources for inference
- **English-First**: Optimized for English; other languages may have accented pronunciation
- **Context Dependency**: Performance varies with input text length and complexity

## Citation

```bibtex
@article{li2023styletts2,
  title={StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models},
  author={Li, Yinghao Aaron and Han, Cong and Raghavan, Vinay S and Mischler, Gavin and Mesgarani, Nima},
  journal={arXiv preprint arXiv:2306.07691},
  year={2023}
}
```

## Links

- Paper: [https://arxiv.org/abs/2306.07691](https://arxiv.org/abs/2306.07691)
- Samples: [https://styletts2.github.io/](https://styletts2.github.io/)
- Code: [https://github.com/yl4579/StyleTTS2](https://github.com/yl4579/StyleTTS2)
- License: MIT License