prince-canuma committed
Commit b7da048 · verified · 1 parent: a9e63fe

Update README.md

Files changed (1): README.md (+21 −3)
README.md CHANGED
@@ -4,17 +4,35 @@ license: apache-2.0
 pipeline_tag: text-to-speech
 tags:
 - mlx
+- text-to-speech
+- speech
+- speech generation
+- voice cloning
+- tts
 ---
 
 # mlx-community/Soprano-80M-bf16
-This model was converted to MLX format from [`ekwek/Soprano-80M`](https://huggingface.co/ekwek/Soprano-80M) using mlx-audio version **0.2.8**.
+This model was converted to MLX format from [`ekwek/Soprano-80M`](https://huggingface.co/ekwek/Soprano-80M) using mlx-audio version **0.2.10**.
 Refer to the [original model card](https://huggingface.co/ekwek/Soprano-80M) for more details on the model.
-## Use with mlx
+
+## Use with mlx-audio
 
 ```bash
 pip install -U mlx-audio
 ```
 
+### CLI Example:
 ```bash
-python -m mlx_audio.tts.generate --model mlx-community/Soprano-80M-bf16 --text "Hello world!"
+python -m mlx_audio.tts.generate --model mlx-community/Soprano-80M-bf16 --text "Hello, this is a test."
+```
+### Python Example:
+```python
+from mlx_audio.tts.utils import load_model
+from mlx_audio.tts.generate import generate_audio
+model = load_model("mlx-community/Soprano-80M-bf16")
+generate_audio(
+    model=model, text="Hello, this is a test.",
+    file_prefix="test_audio",
+)
+
 ```