Rcarvalo committed on
Commit e89588f · verified · 1 Parent(s): 820457c

Upload folder using huggingface_hub

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. comparisons/cesame.log +66 -0
  2. comparisons/cesame/baseline/neut_book_s06_0336.wav +3 -0
  3. comparisons/cesame/baseline/neut_book_s08_0163.wav +3 -0
  4. comparisons/cesame/baseline/neut_book_s09_0091.wav +3 -0
  5. comparisons/cesame/baseline/neut_book_s09_0307.wav +3 -0
  6. comparisons/cesame/baseline/neut_parl_s01_0429.wav +3 -0
  7. comparisons/cesame/baseline/neut_parl_s01_0715.wav +3 -0
  8. comparisons/cesame/baseline/neut_parl_s02_0152.wav +3 -0
  9. comparisons/cesame/baseline/neut_parl_s03_0695.wav +3 -0
  10. comparisons/cesame/baseline/neut_parl_s03_0704.wav +3 -0
  11. comparisons/cesame/baseline/neut_parl_s05_0355.wav +3 -0
  12. comparisons/cesame/finetuned/neut_book_s06_0336.wav +3 -0
  13. comparisons/cesame/finetuned/neut_book_s08_0163.wav +3 -0
  14. comparisons/cesame/finetuned/neut_book_s09_0091.wav +3 -0
  15. comparisons/cesame/finetuned/neut_book_s09_0307.wav +3 -0
  16. comparisons/cesame/finetuned/neut_parl_s01_0429.wav +3 -0
  17. comparisons/cesame/finetuned/neut_parl_s01_0715.wav +3 -0
  18. comparisons/cesame/finetuned/neut_parl_s02_0152.wav +3 -0
  19. comparisons/cesame/finetuned/neut_parl_s03_0695.wav +3 -0
  20. comparisons/cesame/finetuned/neut_parl_s03_0704.wav +3 -0
  21. comparisons/cesame/finetuned/neut_parl_s05_0355.wav +3 -0
  22. comparisons/cesame/original/neut_book_s06_0336.wav +3 -0
  23. comparisons/cesame/original/neut_book_s08_0163.wav +3 -0
  24. comparisons/cesame/original/neut_book_s09_0091.wav +3 -0
  25. comparisons/cesame/original/neut_book_s09_0307.wav +3 -0
  26. comparisons/cesame/original/neut_parl_s01_0429.wav +3 -0
  27. comparisons/cesame/original/neut_parl_s01_0715.wav +3 -0
  28. comparisons/cesame/original/neut_parl_s02_0152.wav +3 -0
  29. comparisons/cesame/original/neut_parl_s03_0695.wav +3 -0
  30. comparisons/cesame/original/neut_parl_s03_0704.wav +3 -0
  31. comparisons/cesame/original/neut_parl_s05_0355.wav +3 -0
  32. comparisons/gen_cesame.py +114 -0
  33. comparisons/gen_qwen3tts.py +103 -0
  34. comparisons/gen_vibevoice.py +143 -0
  35. comparisons/qwen3tts.log +87 -0
  36. comparisons/qwen3tts/baseline/neut_book_s06_0336.wav +3 -0
  37. comparisons/qwen3tts/baseline/neut_book_s08_0163.wav +3 -0
  38. comparisons/qwen3tts/baseline/neut_book_s09_0091.wav +3 -0
  39. comparisons/qwen3tts/baseline/neut_book_s09_0307.wav +3 -0
  40. comparisons/qwen3tts/baseline/neut_parl_s01_0429.wav +3 -0
  41. comparisons/qwen3tts/baseline/neut_parl_s01_0715.wav +3 -0
  42. comparisons/qwen3tts/baseline/neut_parl_s02_0152.wav +3 -0
  43. comparisons/qwen3tts/baseline/neut_parl_s03_0695.wav +3 -0
  44. comparisons/qwen3tts/baseline/neut_parl_s03_0704.wav +3 -0
  45. comparisons/qwen3tts/baseline/neut_parl_s05_0355.wav +3 -0
  46. comparisons/qwen3tts/finetuned/neut_book_s06_0336.wav +3 -0
  47. comparisons/qwen3tts/finetuned/neut_book_s08_0163.wav +3 -0
  48. comparisons/qwen3tts/finetuned/neut_book_s09_0091.wav +3 -0
  49. comparisons/qwen3tts/finetuned/neut_book_s09_0307.wav +3 -0
  50. comparisons/qwen3tts/finetuned/neut_parl_s01_0429.wav +3 -0
comparisons/cesame.log ADDED
@@ -0,0 +1,66 @@
+ /home/spice/projects/tts-model-exploration/speech-dev-remi/comparisons/gen_cesame.py:75: UserWarning: WARNING: Unsloth should be imported before [transformers, peft] to ensure all optimizations are applied. Your code may run slower or encounter memory issues without these optimizations.
+
+ Please restructure your imports with 'import unsloth' at the top of your file.
+ from unsloth import FastModel
+ ============================================================
+ CESAME CSM BASELINE (4-bit)
+ ============================================================
+ 🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
+ 🦥 Unsloth Zoo will now patch everything to make training faster!
+ ==((====))== Unsloth 2026.2.1: Fast Csm patching. Transformers: 4.52.3.
+ \\ /| NVIDIA RTX 6000 Ada Generation. Num GPUs = 1. Max memory: 47.382 GB. Platform: Linux.
+ O^O/ \_/ \ Torch: 2.10.0+cu128. CUDA: 8.9. CUDA Toolkit: 12.8. Triton: 3.6.0
+ \ / Bfloat16 = TRUE. FA [Xformers = 0.0.35. FA2 = False]
+ "-____-" Free license: http://github.com/unslothai/unsloth
+ Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
+ unsloth/csm-1b does not have a padding token! Will use pad_token = <|PAD_TOKEN|>.
+ [neut_book_s06_0336] Azora fit l'éloge du défunt ; mais elle avoua qu'i...
+ OK (10.0s)
+ [neut_book_s08_0163] Le lendemain, dès le lever du jour, Cyrus Smith et...
+ OK (10.0s)
+ [neut_book_s09_0091] À une certaine époque, la terre n'était formée que...
+ OK (10.0s)
+ [neut_book_s09_0307] L'île Tabor, sorte de côte basse, à peine émergée ...
+ OK (7.5s)
+ [neut_parl_s01_0429] A défaut, je suggérerai à l'Assemblée de le rejete...
+ OK (4.9s)
+ [neut_parl_s01_0715] Si peu de choses furent au rendez-vous !...
+ OK (4.4s)
+ [neut_parl_s02_0152] Là, vous évoquez les difficultés de quelques branc...
+ OK (9.3s)
+ [neut_parl_s03_0695] Nous refusons ce deux poids, deux mesures....
+ OK (5.7s)
+ [neut_parl_s03_0704] En première lecture, onze jours et onze nuits ont ...
+ OK (6.5s)
+ [neut_parl_s05_0355] Cela ne changera rien à la compétitivité du pays o...
+ OK (3.7s)
+
+ ============================================================
+ CESAME CSM FINETUNED (4-bit + LoRA)
+ ============================================================
+ Loading LoRA from /home/spice/projects/tts-model-exploration/finetuning/cesame/lora_cesame_v2...
+ LoRA loaded
+ [neut_book_s06_0336] Azora fit l'éloge du défunt ; mais elle avoua qu'i...
+ OK (5.3s)
+ [neut_book_s08_0163] Le lendemain, dès le lever du jour, Cyrus Smith et...
+ OK (9.5s)
+ [neut_book_s09_0091] À une certaine époque, la terre n'était formée que...
+ OK (9.4s)
+ [neut_book_s09_0307] L'île Tabor, sorte de côte basse, à peine émergée ...
+ OK (6.0s)
+ [neut_parl_s01_0429] A défaut, je suggérerai à l'Assemblée de le rejete...
+ OK (3.3s)
+ [neut_parl_s01_0715] Si peu de choses furent au rendez-vous !...
+ OK (2.2s)
+ [neut_parl_s02_0152] Là, vous évoquez les difficultés de quelques branc...
+ OK (3.6s)
+ [neut_parl_s03_0695] Nous refusons ce deux poids, deux mesures....
+ OK (2.8s)
+ [neut_parl_s03_0704] En première lecture, onze jours et onze nuits ont ...
+ OK (5.6s)
+ [neut_parl_s05_0355] Cela ne changera rien à la compétitivité du pays o...
+ OK (4.2s)
+
+ Copying original SIWIS audio...
+
+ Done!
comparisons/cesame/baseline/neut_book_s06_0336.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e700fb7e5be88278fd2eac6521ebe0ec9d5b445953cd88d42fc709f0edfd1d84
+ size 480044
comparisons/cesame/baseline/neut_book_s08_0163.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f676096ab85d0a0f0f0b3c16f8c1b05e959b0a40bee88e79b3353ee381f1e701
+ size 480044
comparisons/cesame/baseline/neut_book_s09_0091.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:96ab925ce73d4a718b528faeff0f6e99a0557cd63bdee05e00be2c230a6a075f
+ size 480044
comparisons/cesame/baseline/neut_book_s09_0307.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d72056adcf17ac19ed741ef8872b91083ef98ff04c0cfb526463537e7206009
+ size 361004
comparisons/cesame/baseline/neut_parl_s01_0429.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:baa12f61f1dcc084fed8d928d57e00f4daafecd64ca03beb4d5f3515d81a721d
+ size 234284
comparisons/cesame/baseline/neut_parl_s01_0715.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7ef9938d029f3dc1902d099baa5ff24c413264992259c9f4ce63b0d1cc945e4
+ size 211244
comparisons/cesame/baseline/neut_parl_s02_0152.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd89b227d915905daf43246e9ae8d25bf44a965e3c60bc1f0c9c839ec97031d6
+ size 445484
comparisons/cesame/baseline/neut_parl_s03_0695.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6601923e40267cc29588f4b7f89cd139778707edaf9a3331e58c126034372c90
+ size 272684
comparisons/cesame/baseline/neut_parl_s03_0704.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1e0eb2e951514b8959052e91a8efa9aae74adfcc97debc83a0b0854632f35017
+ size 311084
comparisons/cesame/baseline/neut_parl_s05_0355.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:47e034781989f722fe677091074edea7a3d2cc05ca5ce4d92279d7147d873ba9
+ size 176684
comparisons/cesame/finetuned/neut_book_s06_0336.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:98925e885bde31c04f5bb968d2b486c5ec8fe0c7c4e979205903b832927aab08
+ size 253484
comparisons/cesame/finetuned/neut_book_s08_0163.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:739f45c6f739c13ecad8183e25ab3ab77f2af3a6cc0ab3ad31513cb182802f03
+ size 457004
comparisons/cesame/finetuned/neut_book_s09_0091.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:26f0c85a95104fe83d52d6ca47bf4b14f5b408cf6679774f0280de78e3ed1cad
+ size 453164
comparisons/cesame/finetuned/neut_book_s09_0307.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1d8d23cac2dd6fe91ff1b29d444687be8f514c7d2189f2928b96d4c443482742
+ size 288044
comparisons/cesame/finetuned/neut_parl_s01_0429.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:967c14acd86c5d04a491175861279624e7dc798b3c41e5b244c40e87e6112895
+ size 157484
comparisons/cesame/finetuned/neut_parl_s01_0715.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f323b127a8e697fe0e968458dff1d36c6fdad9e86d8b190f5f2d9f27f962a3ee
+ size 107564
comparisons/cesame/finetuned/neut_parl_s02_0152.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:42b6a8bfa6e59ae2bfcdc002d9461ffc80979c3426bf4636df7593a0195d85da
+ size 172844
comparisons/cesame/finetuned/neut_parl_s03_0695.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1a9ba9ef7ffda50a73c11a68a250134af8de43283c82b11644435a2a449b504b
+ size 134444
comparisons/cesame/finetuned/neut_parl_s03_0704.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2eb312fc111c85560518c21b8dd1127c3a72564e43646accad3d7ae1dc60340c
+ size 268844
comparisons/cesame/finetuned/neut_parl_s05_0355.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ad45f92cad09ce312f38ae762df6b8eefa2affc9da6bc2b8bebb33828a9db7ca
+ size 199724
comparisons/cesame/original/neut_book_s06_0336.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:04b4740d8104398d4df427ec1ccbbbdadad600ad4c579c39dd747583749d0710
+ size 442008
comparisons/cesame/original/neut_book_s08_0163.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:19383c41bd680420559c45d7d9cdf88c59af56600d40a68e6b5e61db145dc8d4
+ size 796572
comparisons/cesame/original/neut_book_s09_0091.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ad2a9d2e5f20d7545a3b8211ca25d4c8205236d90c7209a4a9ab4b13698fd128
+ size 763054
comparisons/cesame/original/neut_book_s09_0307.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb49a6f1a20eb7b614e5ac78450e56ee0885d4342fa98b086c035950f2f145ce
+ size 533736
comparisons/cesame/original/neut_parl_s01_0429.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6e46bafdf9f4ab5f0b56b3c7cb6d6036d29e3199436f0c33a54562a8b1ff8fec
+ size 304416
comparisons/cesame/original/neut_parl_s01_0715.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c95152b1d2f044685d3ff6eb9a3a0db5184ec2afb9ab8e969d9319da2bf0bf48
+ size 173880
comparisons/cesame/original/neut_parl_s02_0152.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5c066507c1529fa4b2195ef796fd54ac6e04847494c3fa2b63588a12ddb17dec
+ size 294714
comparisons/cesame/original/neut_parl_s03_0695.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4024e0aabad4985578ec5da5b2d090ee3f145c3d30b30227b03b72710ef19976
+ size 197694
comparisons/cesame/original/neut_parl_s03_0704.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6c81d7ef9913b0ff0d2de57936901a082e4d52ac736198f6daed38be46c4aea
+ size 444654
comparisons/cesame/original/neut_parl_s05_0355.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e350d8307aae8ecc6f3fc4e99b8bccca995c77cc073230e85782e3406750dff5
+ size 329112
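Each of the `.wav` entries above is a Git LFS pointer file rather than raw audio: three lines giving the spec version, a `sha256` object id, and the byte size. A minimal sketch of parsing that three-line layout (the sample pointer below is copied from the first baseline entry):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>"; split on the first space.
        key, _, value = line.partition(" ")
        fields[key] = value
    # "size" is a byte count; expose it as an int for convenience.
    fields["size"] = int(fields["size"])
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:e700fb7e5be88278fd2eac6521ebe0ec9d5b445953cd88d42fc709f0edfd1d84
size 480044"""

parsed = parse_lfs_pointer(pointer)
```

The actual audio bytes live in LFS storage; only these small pointers are versioned in the repository.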
comparisons/gen_cesame.py ADDED
@@ -0,0 +1,114 @@
+ #!/usr/bin/env python3
+ """Generate 10 proof_unseen texts with CeSAMe baseline + finetuned (LoRA)."""
+ import torch
+ import soundfile as sf
+ import numpy as np
+ import shutil
+ from pathlib import Path
+ from transformers import CsmForConditionalGeneration, AutoProcessor
+ from peft import PeftModel
+
+ TEXTS = {
+     "neut_book_s06_0336": "Azora fit l'éloge du défunt ; mais elle avoua qu'il avait des défauts dont Cador était exempt.",
+     "neut_book_s08_0163": "Le lendemain, dès le lever du jour, Cyrus Smith et Ayrton, montant le chariot attelé des deux onaggas, prenaient la route du corral et y couraient au grand trot.",
+     "neut_book_s09_0091": "À une certaine époque, la terre n'était formée que d'une écorce élastique, soumise à des mouvements alternatifs de haut et de bas, en vertu des lois de l'attraction.",
+     "neut_book_s09_0307": "L'île Tabor, sorte de côte basse, à peine émergée des flots, n'était pas éloignée de plus de quinze milles.",
+     "neut_parl_s01_0429": "A défaut, je suggérerai à l'Assemblée de le rejeter.",
+     "neut_parl_s01_0715": "Si peu de choses furent au rendez-vous !",
+     "neut_parl_s02_0152": "Là, vous évoquez les difficultés de quelques branches spécifiques.",
+     "neut_parl_s03_0695": "Nous refusons ce deux poids, deux mesures.",
+     "neut_parl_s03_0704": "En première lecture, onze jours et onze nuits ont été consacrés à ce texte.",
+     "neut_parl_s05_0355": "Cela ne changera rien à la compétitivité du pays ou au chômage.",
+ }
+
+ OUTPUT_DIR = Path("/home/spice/projects/tts-model-exploration/speech-dev-remi/comparisons/cesame")
+ LORA_PATH = Path("/home/spice/projects/tts-model-exploration/finetuning/cesame/lora_cesame_v2")
+ SAMPLE_RATE = 24000
+
+ device = torch.device("cuda:0")
+
+
+ def generate_all(model, processor, output_subdir):
+     output_subdir.mkdir(parents=True, exist_ok=True)
+     for stem, text in TEXTS.items():
+         print(f" [{stem}] {text[:50]}...")
+         try:
+             text_with_speaker = f"[0]{text}"
+             inputs = processor(text_with_speaker, add_special_tokens=True, return_tensors="pt").to(device)
+
+             with torch.no_grad():
+                 audio_output = model.generate(**inputs, output_audio=True)
+
+             if isinstance(audio_output, (list, tuple)) and len(audio_output) > 0:
+                 if isinstance(audio_output[0], torch.Tensor):
+                     audio = audio_output[0].to(torch.float32).cpu().numpy()
+                 elif hasattr(audio_output[0], 'audio_values'):
+                     audio = audio_output[0].audio_values.squeeze().to(torch.float32).cpu().numpy()
+                 else:
+                     raise ValueError(f"Unexpected type: {type(audio_output[0])}")
+             elif hasattr(audio_output, 'audio_values'):
+                 audio = audio_output.audio_values.squeeze().to(torch.float32).cpu().numpy()
+             elif isinstance(audio_output, torch.Tensor):
+                 audio = audio_output.squeeze().to(torch.float32).cpu().numpy()
+             else:
+                 raise ValueError(f"Unexpected output: {type(audio_output)}")
+
+             audio = audio.astype("float32").flatten()
+             peak = np.max(np.abs(audio))
+             if peak > 1.0:
+                 audio = audio / peak * 0.99
+
+             sf.write(str(output_subdir / f"{stem}.wav"), audio, SAMPLE_RATE)
+             print(f" OK ({len(audio)/SAMPLE_RATE:.1f}s)")
+         except Exception as e:
+             print(f" ERROR: {e}")
+             import traceback; traceback.print_exc()
+
+
+ def main():
+     OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
+
+     # --- Baseline (4-bit) ---
+     print("=" * 60)
+     print("CESAME CSM BASELINE (4-bit)")
+     print("=" * 60)
+     from unsloth import FastModel
+     model, _tokenizer = FastModel.from_pretrained(
+         model_name="unsloth/csm-1b",
+         max_seq_length=2048,
+         dtype=None,
+         auto_model=CsmForConditionalGeneration,
+         load_in_4bit=True,
+     )
+     processor = AutoProcessor.from_pretrained("unsloth/csm-1b")
+     model.eval()
+
+     generate_all(model, processor, OUTPUT_DIR / "baseline")
+
+     # --- Finetuned (4-bit + LoRA) ---
+     print("\n" + "=" * 60)
+     print("CESAME CSM FINETUNED (4-bit + LoRA)")
+     print("=" * 60)
+     print(f"Loading LoRA from {LORA_PATH}...")
+
+     # Apply LoRA adapter
+     model = PeftModel.from_pretrained(model, str(LORA_PATH))
+     model.eval()
+     print("LoRA loaded")
+
+     generate_all(model, processor, OUTPUT_DIR / "finetuned")
+
+     # Copy originals
+     print("\nCopying original SIWIS audio...")
+     (OUTPUT_DIR / "original").mkdir(exist_ok=True)
+     siwis_wavs = Path("/home/spice/speech/app/liquid-audio/french_finetuning/data/raw/siwis/SiwisFrenchSpeechSynthesisDatabase/wavs")
+     for stem in TEXTS:
+         src = list(siwis_wavs.rglob(f"{stem}.wav"))
+         if src:
+             shutil.copy2(src[0], OUTPUT_DIR / "original" / f"{stem}.wav")
+
+     print("\nDone!")
+
+
+ if __name__ == "__main__":
+     main()
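`generate_all` in gen_cesame.py rescales any clip whose peak exceeds 1.0 down to a 0.99 ceiling before writing the wav, so clipping never reaches `sf.write`. That normalization step can be sketched in isolation (NumPy only; the 0.99 headroom factor is the one used in the script):

```python
import numpy as np

def normalize_peak(audio: np.ndarray, headroom: float = 0.99) -> np.ndarray:
    """Rescale audio to at most `headroom` when its absolute peak exceeds 1.0."""
    audio = audio.astype("float32").flatten()
    peak = np.max(np.abs(audio))
    if peak > 1.0:
        # Divide by the peak (unit scale), then leave a little headroom.
        audio = audio / peak * headroom
    return audio

# A clip with samples outside [-1, 1] gets pulled back under the ceiling.
loud = np.array([0.5, -2.0, 1.5], dtype="float32")
out = normalize_peak(loud)
```

Clips already within [-1, 1] pass through unchanged, which keeps quiet utterances at their original level.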
comparisons/gen_qwen3tts.py ADDED
@@ -0,0 +1,103 @@
+ #!/usr/bin/env python3
+ """Generate 10 proof_unseen texts with Qwen3-TTS baseline + finetuned."""
+ import sys
+ import torch
+ import soundfile as sf
+ import numpy as np
+ import shutil
+ from pathlib import Path
+
+ sys.path.insert(0, str(Path("/home/spice/projects/tts-model-exploration/Qwen3-TTS")))
+
+ from qwen_tts import Qwen3TTSModel
+
+ TEXTS = {
+     "neut_book_s06_0336": "Azora fit l'éloge du défunt ; mais elle avoua qu'il avait des défauts dont Cador était exempt.",
+     "neut_book_s08_0163": "Le lendemain, dès le lever du jour, Cyrus Smith et Ayrton, montant le chariot attelé des deux onaggas, prenaient la route du corral et y couraient au grand trot.",
+     "neut_book_s09_0091": "À une certaine époque, la terre n'était formée que d'une écorce élastique, soumise à des mouvements alternatifs de haut et de bas, en vertu des lois de l'attraction.",
+     "neut_book_s09_0307": "L'île Tabor, sorte de côte basse, à peine émergée des flots, n'était pas éloignée de plus de quinze milles.",
+     "neut_parl_s01_0429": "A défaut, je suggérerai à l'Assemblée de le rejeter.",
+     "neut_parl_s01_0715": "Si peu de choses furent au rendez-vous !",
+     "neut_parl_s02_0152": "Là, vous évoquez les difficultés de quelques branches spécifiques.",
+     "neut_parl_s03_0695": "Nous refusons ce deux poids, deux mesures.",
+     "neut_parl_s03_0704": "En première lecture, onze jours et onze nuits ont été consacrés à ce texte.",
+     "neut_parl_s05_0355": "Cela ne changera rien à la compétitivité du pays ou au chômage.",
+ }
+
+ OUTPUT_DIR = Path("/home/spice/projects/tts-model-exploration/speech-dev-remi/comparisons/qwen3tts")
+ FINETUNED_PATH = Path("/home/spice/projects/tts-model-exploration/Qwen3-TTS/finetuning/output_v2/checkpoint-best")
+ BASELINE_MODEL_ID = "Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice"
+
+ device = torch.device("cuda:0")
+
+
+ def generate_all(model, output_subdir, speaker="Ryan", language="French"):
+     output_subdir.mkdir(parents=True, exist_ok=True)
+     for stem, text in TEXTS.items():
+         print(f" [{stem}] {text[:50]}...")
+         try:
+             with torch.no_grad():
+                 wavs, sr = model.generate_custom_voice(
+                     text=text, language=language, speaker=speaker,
+                 )
+             audio = wavs[0]
+             if isinstance(audio, torch.Tensor):
+                 audio = audio.cpu().numpy()
+             audio = audio.astype("float32").flatten()
+             peak = np.max(np.abs(audio))
+             if peak > 1.0:
+                 audio = audio / peak * 0.99
+             sf.write(str(output_subdir / f"{stem}.wav"), audio, sr)
+             print(f" OK ({len(audio)/sr:.1f}s)")
+         except Exception as e:
+             print(f" ERROR: {e}")
+
+
+ def main():
+     OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
+
+     try:
+         import flash_attn
+         attn_impl = "flash_attention_2"
+     except ImportError:
+         attn_impl = "sdpa"
+
+     # --- Baseline ---
+     print("=" * 60)
+     print("QWEN3-TTS BASELINE")
+     print("=" * 60)
+     model = Qwen3TTSModel.from_pretrained(
+         BASELINE_MODEL_ID, device_map=str(device),
+         dtype=torch.bfloat16, attn_implementation=attn_impl,
+     )
+     generate_all(model, OUTPUT_DIR / "baseline")
+     del model
+     torch.cuda.empty_cache()
+
+     # --- Finetuned ---
+     print("\n" + "=" * 60)
+     print("QWEN3-TTS FINETUNED")
+     print("=" * 60)
+     print(f"Loading from {FINETUNED_PATH}...")
+     model = Qwen3TTSModel.from_pretrained(
+         str(FINETUNED_PATH), device_map=str(device),
+         dtype=torch.bfloat16, attn_implementation=attn_impl,
+     )
+     generate_all(model, OUTPUT_DIR / "finetuned", speaker="SIWIS_French")
+     del model
+     torch.cuda.empty_cache()
+
+     # Copy originals
+     print("\nCopying original SIWIS audio...")
+     (OUTPUT_DIR / "original").mkdir(exist_ok=True)
+     siwis_wavs = Path("/home/spice/speech/app/liquid-audio/french_finetuning/data/raw/siwis/SiwisFrenchSpeechSynthesisDatabase/wavs")
+     for stem in TEXTS:
+         src = list(siwis_wavs.rglob(f"{stem}.wav"))
+         if src:
+             shutil.copy2(src[0], OUTPUT_DIR / "original" / f"{stem}.wav")
+
+     print("\nDone!")
+
+
+ if __name__ == "__main__":
+     main()
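gen_qwen3tts.py chooses its attention implementation by probing for the optional flash-attn package at startup and falling back to SDPA when it is absent (the qwen3tts.log run below shows the fallback warning). The try/import/except pattern, factored into a helper, looks like this:

```python
def pick_attn_implementation() -> str:
    """Prefer flash-attention when the optional flash_attn package is
    importable; otherwise fall back to PyTorch's built-in SDPA kernels."""
    try:
        import flash_attn  # noqa: F401 -- optional accelerator, only probed
        return "flash_attention_2"
    except ImportError:
        return "sdpa"

impl = pick_attn_implementation()
```

The result depends on the environment, so callers should treat either value as valid; the same string is then passed straight to `attn_implementation=` when loading the model.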
comparisons/gen_vibevoice.py ADDED
@@ -0,0 +1,143 @@
+ #!/usr/bin/env python3
+ """Generate 10 proof_unseen texts with VibeVoice baseline + finetuned."""
+ import torch
+ import copy
+ import soundfile as sf
+ import numpy as np
+ from pathlib import Path
+ from safetensors.torch import load_file
+
+ from vibevoice.modular.modeling_vibevoice_streaming_inference import (
+     VibeVoiceStreamingForConditionalGenerationInference,
+ )
+ from vibevoice.processor.vibevoice_streaming_processor import (
+     VibeVoiceStreamingProcessor,
+ )
+
+ TEXTS = {
+     "neut_book_s06_0336": "Azora fit l'éloge du défunt ; mais elle avoua qu'il avait des défauts dont Cador était exempt.",
+     "neut_book_s08_0163": "Le lendemain, dès le lever du jour, Cyrus Smith et Ayrton, montant le chariot attelé des deux onaggas, prenaient la route du corral et y couraient au grand trot.",
+     "neut_book_s09_0091": "À une certaine époque, la terre n'était formée que d'une écorce élastique, soumise à des mouvements alternatifs de haut et de bas, en vertu des lois de l'attraction.",
+     "neut_book_s09_0307": "L'île Tabor, sorte de côte basse, à peine émergée des flots, n'était pas éloignée de plus de quinze milles.",
+     "neut_parl_s01_0429": "A défaut, je suggérerai à l'Assemblée de le rejeter.",
+     "neut_parl_s01_0715": "Si peu de choses furent au rendez-vous !",
+     "neut_parl_s02_0152": "Là, vous évoquez les difficultés de quelques branches spécifiques.",
+     "neut_parl_s03_0695": "Nous refusons ce deux poids, deux mesures.",
+     "neut_parl_s03_0704": "En première lecture, onze jours et onze nuits ont été consacrés à ce texte.",
+     "neut_parl_s05_0355": "Cela ne changera rien à la compétitivité du pays ou au chômage.",
+ }
+
+ OUTPUT_DIR = Path("/home/spice/projects/tts-model-exploration/speech-dev-remi/comparisons/vibevoice")
+ FINETUNED_WEIGHTS = Path("/home/spice/projects/tts-model-exploration/finetuning_vibevoice/outputs/full_ft_vibevoice/tts_lm_best.safetensors")
+ VOICES_DIR = Path("/home/spice/speech/VibeVoice/demo/voices/streaming_model")
+ SAMPLE_RATE = 24000
+
+ device = torch.device("cuda:0")
+
+
+ def load_model():
+     model_path = "microsoft/VibeVoice-Realtime-0.5B"
+     processor = VibeVoiceStreamingProcessor.from_pretrained(model_path)
+     try:
+         model = VibeVoiceStreamingForConditionalGenerationInference.from_pretrained(
+             model_path, torch_dtype=torch.bfloat16, device_map=str(device),
+             attn_implementation="flash_attention_2",
+         )
+     except Exception:
+         model = VibeVoiceStreamingForConditionalGenerationInference.from_pretrained(
+             model_path, torch_dtype=torch.bfloat16, device_map=str(device),
+             attn_implementation="sdpa",
+         )
+     model.eval()
+     model.set_ddpm_inference_steps(num_steps=5)
+     return model, processor
+
+
+ def generate_sample(model, processor, text, voice_prompt):
+     inputs = processor.process_input_with_cached_prompt(
+         text=text, cached_prompt=voice_prompt,
+         padding=True, return_tensors="pt", return_attention_mask=True,
+     )
+     for k, v in inputs.items():
+         if torch.is_tensor(v):
+             inputs[k] = v.to(device)
+
+     max_tokens = min(max(int(len(text) * 3.0) + 100, 200), 800)
+
+     with torch.no_grad():
+         outputs = model.generate(
+             **inputs, max_new_tokens=max_tokens, cfg_scale=1.5,
+             tokenizer=processor.tokenizer,
+             generation_config={'do_sample': False}, verbose=False,
+             all_prefilled_outputs=copy.deepcopy(voice_prompt),
+         )
+
+     if not outputs.speech_outputs or outputs.speech_outputs[0] is None:
+         return None
+     audio = outputs.speech_outputs[0].cpu().float().numpy().flatten()
+     peak = np.max(np.abs(audio))
+     if peak > 1.0:
+         audio = audio / peak * 0.99
+     return audio
+
+
+ def main():
+     OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
+     (OUTPUT_DIR / "baseline").mkdir(exist_ok=True)
+     (OUTPUT_DIR / "finetuned").mkdir(exist_ok=True)
+
+     voice_prompt = torch.load(
+         VOICES_DIR / "fr-Spk0_man.pt", map_location=device, weights_only=False
+     )
+
+     # --- Baseline ---
+     print("=" * 60)
+     print("VIBEVOICE BASELINE")
+     print("=" * 60)
+     model, processor = load_model()
+
+     for stem, text in TEXTS.items():
+         print(f" [{stem}] {text[:50]}...")
+         audio = generate_sample(model, processor, text, voice_prompt)
+         if audio is not None:
+             sf.write(str(OUTPUT_DIR / "baseline" / f"{stem}.wav"), audio, SAMPLE_RATE)
+             print(f" OK ({len(audio)/SAMPLE_RATE:.1f}s)")
+         else:
+             print(" FAILED")
+
+     # --- Finetuned ---
+     print("\n" + "=" * 60)
+     print("VIBEVOICE FINETUNED")
+     print("=" * 60)
+
+     # Load finetuned tts_language_model weights
+     print(f"Loading finetuned weights from {FINETUNED_WEIGHTS}...")
+     ft_weights = load_file(str(FINETUNED_WEIGHTS))
+     model.model.tts_language_model.load_state_dict(ft_weights)
+     print("Finetuned weights loaded")
+
+     for stem, text in TEXTS.items():
+         print(f" [{stem}] {text[:50]}...")
+         audio = generate_sample(model, processor, text, voice_prompt)
+         if audio is not None:
+             sf.write(str(OUTPUT_DIR / "finetuned" / f"{stem}.wav"), audio, SAMPLE_RATE)
+             print(f" OK ({len(audio)/SAMPLE_RATE:.1f}s)")
+         else:
+             print(" FAILED")
+
+     # Copy originals
+     print("\nCopying original SIWIS audio...")
+     import shutil
+     (OUTPUT_DIR / "original").mkdir(exist_ok=True)
+     siwis_wavs = Path("/home/spice/speech/app/liquid-audio/french_finetuning/data/raw/siwis/SiwisFrenchSpeechSynthesisDatabase/wavs")
+     for stem in TEXTS:
+         src = list(siwis_wavs.rglob(f"{stem}.wav"))
+         if src:
+             shutil.copy2(src[0], OUTPUT_DIR / "original" / f"{stem}.wav")
+             print(f" {stem} copied")
+
+     print("\nDone!")
+
+
+ if __name__ == "__main__":
+     main()
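`generate_sample` in gen_vibevoice.py budgets `max_new_tokens` from the input length: roughly three tokens per character plus 100 of slack, clamped to the range [200, 800]. The heuristic, extracted verbatim into a helper:

```python
def token_budget(text: str, per_char: float = 3.0, slack: int = 100,
                 floor: int = 200, ceiling: int = 800) -> int:
    """Length-based max_new_tokens heuristic from gen_vibevoice.py:
    ~3 tokens per character plus fixed slack, clamped to [floor, ceiling]."""
    return min(max(int(len(text) * per_char) + slack, floor), ceiling)
```

The floor keeps very short utterances from being cut off mid-word, while the ceiling bounds worst-case generation time for long book sentences.

```python
token_budget("")        # → 200 (floor kicks in)
token_budget("x" * 50)  # → 250 (50*3 + 100)
token_budget("x" * 1000)  # → 800 (ceiling kicks in)
```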
comparisons/qwen3tts.log ADDED
@@ -0,0 +1,87 @@
+ /bin/sh: 1: sox: not found
+ SoX could not be found!
+
+ If you do not have SoX, proceed here:
+ - - - http://sox.sourceforge.net/ - - -
+
+ If you do (or think that you should) have SoX, double-check your
+ path variables.
+
+
+ ********
+ Warning: flash-attn is not installed. Will only run the manual PyTorch version. Please install flash-attn for faster inference.
+ ********
+
+ ============================================================
+ QWEN3-TTS BASELINE
+ ============================================================
+
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ Setting `pad_token_id` to `eos_token_id`:2150 for open-end generation.
+ [neut_book_s06_0336] Azora fit l'éloge du défunt ; mais elle avoua qu'i...
+ OK (6.5s)
+ [neut_book_s08_0163] Le lendemain, dès le lever du jour, Cyrus Smith et...
+ OK (38.3s)
+ [neut_book_s09_0091] À une certaine époque, la terre n'était formée que...
+ OK (14.6s)
+ [neut_book_s09_0307] L'île Tabor, sorte de côte basse, à peine émergée ...
+ OK (17.2s)
+ [neut_parl_s01_0429] A défaut, je suggérerai à l'Assemblée de le rejete...
+ OK (4.2s)
+ [neut_parl_s01_0715] Si peu de choses furent au rendez-vous !...
+ OK (4.4s)
+ [neut_parl_s02_0152] Là, vous évoquez les difficultés de quelques branc...
+ OK (4.4s)
+ [neut_parl_s03_0695] Nous refusons ce deux poids, deux mesures....
+ OK (2.9s)
+ [neut_parl_s03_0704] En première lecture, onze jours et onze nuits ont ...
+ OK (5.6s)
+ [neut_parl_s05_0355] Cela ne changera rien à la compétitivité du pays o...
+ OK (5.4s)
+
+ ============================================================
+ QWEN3-TTS FINETUNED
+ ============================================================
+ Loading from /home/spice/projects/tts-model-exploration/Qwen3-TTS/finetuning/output_v2/checkpoint-best...
+ [neut_book_s06_0336] Azora fit l'éloge du défunt ; mais elle avoua qu'i...
+ OK (5.4s)
+ [neut_book_s08_0163] Le lendemain, dès le lever du jour, Cyrus Smith et...
+ OK (9.0s)
+ [neut_book_s09_0091] À une certaine époque, la terre n'était formée que...
+ OK (8.5s)
+ [neut_book_s09_0307] L'île Tabor, sorte de côte basse, à peine émergée ...
+ OK (6.8s)
+ [neut_parl_s01_0429] A défaut, je suggérerai à l'Assemblée de le rejete...
+ OK (3.5s)
+ [neut_parl_s01_0715] Si peu de choses furent au rendez-vous !...
+ OK (2.2s)
+ [neut_parl_s02_0152] Là, vous évoquez les difficultés de quelques branc...
+ OK (4.3s)
+ [neut_parl_s03_0695] Nous refusons ce deux poids, deux mesures....
+ OK (3.4s)
+ [neut_parl_s03_0704] En première lecture, onze jours et onze nuits ont ...
+ OK (4.9s)
+ [neut_parl_s05_0355] Cela ne changera rien à la compétitivité du pays o...
+ OK (4.4s)
+
+ Copying original SIWIS audio...
+
+ Done!
comparisons/qwen3tts/baseline/neut_book_s06_0336.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:492d90ea15cb4d19827d5c2c9476126e593ad78641bbb033db0373c136e3a88e
+ size 311084
comparisons/qwen3tts/baseline/neut_book_s08_0163.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a4c9f47849c0fe2888856b2074e0bb7cf9fa8ddfc9dde833c8b585d75f18fa8b
+ size 1839404
comparisons/qwen3tts/baseline/neut_book_s09_0091.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6fa84485bb5b0a22573ab060c623f0c4651c1f577fb6bfa39266520f5b57c362
+ size 702764
comparisons/qwen3tts/baseline/neut_book_s09_0307.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:40fdfb7222e58592665edd3b6ce3b5b13077c65d7ae9aaf4a1fdff6925e445f6
+ size 825644
comparisons/qwen3tts/baseline/neut_parl_s01_0429.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d56e07c153dc99d237518f60ab036461c134150402f8f156e8ad8be4c116d598
+ size 203564
comparisons/qwen3tts/baseline/neut_parl_s01_0715.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:39c1bc9afa4efe4693df583167d564b4d6279f6f411b1979b8e2128a0731ba6d
+ size 211244
comparisons/qwen3tts/baseline/neut_parl_s02_0152.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a00fe1d2d6a1c9921d193344ec11407358e75dc9bdbf2f5573a0308c0740f7e6
+ size 211244
comparisons/qwen3tts/baseline/neut_parl_s03_0695.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e865ea2062ebc650114f367b2d2a5aef466c93fdb1dc8471b0ff837cbd3dbf53
+ size 138284
comparisons/qwen3tts/baseline/neut_parl_s03_0704.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a3d4ef140137c357ecd1949a31ed451b66542ca1e4cdf0bb7d57f26b4d4a1d40
+ size 268844
comparisons/qwen3tts/baseline/neut_parl_s05_0355.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:251635760dc916684de84b4caf80475c613a078d28641e133eea613a46a2c4d9
+ size 261164
comparisons/qwen3tts/finetuned/neut_book_s06_0336.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1fdb1e3c699b9c752c7105e10919c2007321396a48a20601a628f8d959ccbd93
+ size 257324
comparisons/qwen3tts/finetuned/neut_book_s08_0163.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:61ee73d7e75016333afd8ac8260efc14b425ec93b0cdb96a7bcfe0f5d5fd4a20
+ size 433964
comparisons/qwen3tts/finetuned/neut_book_s09_0091.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:70f07c0e5c3cf314c82878ab0cb4fbeb373506f927c0038f15c25e15a9792c1f
+ size 407084
comparisons/qwen3tts/finetuned/neut_book_s09_0307.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:647b32a1d9bee459e6b030ad12496b2ec29f4c5277ca63352c9490cc5d222dd8
+ size 326444
comparisons/qwen3tts/finetuned/neut_parl_s01_0429.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4a40fcc173311b991891d675dfe0f061e32922b810faa29129a11f45acd4a9b0
+ size 169004