camenduru committed on
Commit 27f8e44 · verified · 1 Parent(s): 8d1ba60

thanks to maya-research ❤

.gitattributes CHANGED
@@ -33,3 +33,10 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
33
  *.zip filter=lfs diff=lfs merge=lfs -text
34
  *.zst filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ checkpoint-10000/tokenizer.json filter=lfs diff=lfs merge=lfs -text
37
+ checkpoint-15000/tokenizer.json filter=lfs diff=lfs merge=lfs -text
38
+ checkpoint-5000/tokenizer.json filter=lfs diff=lfs merge=lfs -text
39
+ tokenizer/tokenizer.json filter=lfs diff=lfs merge=lfs -text
40
+ checkpoint-20000/tokenizer.json filter=lfs diff=lfs merge=lfs -text
41
+ checkpoint-25000/tokenizer.json filter=lfs diff=lfs merge=lfs -text
42
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1 @@
1
+ venv/
README.md ADDED
@@ -0,0 +1,402 @@
1
+ ---
2
+ language:
3
+ - en
4
+ license: apache-2.0
5
+ library_name: transformers
6
+ datasets: proprietary
7
+ pipeline_tag: text-to-speech
8
+ ---
9
+
10
+ # Maya1
11
+
12
+ **Maya1** is a speech model built for expressive voice generation with rich human emotion and precise voice design.
13
+
14
+ **Try it:** [Playground](https://www.mayaresearch.ai/studio)
15
+
16
+ **What it does:**
17
+ - Voice design through natural language descriptions
18
+ - 20+ emotions: laugh, cry, whisper, angry, sigh, gasp, and more
19
+ - Real-time streaming with SNAC neural codec
20
+ - 3B parameters, runs on single GPU
21
+ - Apache 2.0 license
22
+
23
+ Developed by Maya Research.
24
+
25
+ ---
26
+
27
+ ## Demos
28
+
29
+ ### Example 1: Energetic Female Event Host
30
+
31
+ **Voice Description:**
32
+ ```
33
+ Female, in her 30s with an American accent and is an event host, energetic, clear diction
34
+ ```
35
+
36
+ **Text:**
37
+ ```
38
+ Wow. This place looks even better than I imagined. How did they set all this up so perfectly? The lights, the music, everything feels magical. I can't stop smiling right now.
39
+ ```
40
+
41
+ **Audio Output:**
42
+
43
+ <audio controls src="https://cdn-uploads.huggingface.co/production/uploads/642a7d4e556ab448a0701ca1/4zDlBLeFk0Y2rOrQhMW9r.wav"></audio>
44
+
45
+ ---
46
+
47
+ ### Example 2: Dark Villain with Anger
48
+
49
+ **Voice Description:**
50
+ ```
51
+ Dark villain character, Male voice in their 40s with a British accent. low pitch, gravelly timbre, slow pacing, angry tone at high intensity.
52
+ ```
53
+
54
+ **Text:**
55
+ ```
56
+ Welcome back to another episode of our podcast! <laugh_harder> Today we are diving into an absolutely fascinating topic
57
+ ```
58
+
59
+ **Audio Output:**
60
+
61
+ <audio controls src="https://cdn-uploads.huggingface.co/production/uploads/642a7d4e556ab448a0701ca1/mT6FnTrA3KYQnwfJms92X.wav"></audio>
62
+
63
+ ---
64
+
65
+ ### Example 3: Demon Character (Screaming Emotion)
66
+
67
+ **Voice Description:**
68
+ ```
69
+ Demon character, Male voice in their 30s with a Middle Eastern accent. screaming tone at high intensity.
70
+ ```
71
+
72
+ **Text:**
73
+ ```
74
+ You dare challenge me, mortal <snort> how amusing. Your kind always thinks they can win
75
+ ```
76
+
77
+ **Audio Output:**
78
+
79
+ <audio controls src="https://cdn-uploads.huggingface.co/production/uploads/642a7d4e556ab448a0701ca1/oxdns7uACCmLyC-P4H30G.wav"></audio>
80
+
81
+ ---
82
+
83
+ ### Example 4: Mythical Goddess with Crying Emotion
84
+
85
+ **Voice Description:**
86
+ ```
87
+ Mythical godlike magical character, Female voice in their 30s slow pacing, curious tone at medium intensity.
88
+ ```
89
+
90
+ **Text:**
91
+ ```
92
+ After all we went through to pull him out of that mess <cry> I can't believe he was the traitor
93
+ ```
94
+
95
+ **Audio Output:**
96
+
97
+ <audio controls src="https://cdn-uploads.huggingface.co/production/uploads/642a7d4e556ab448a0701ca1/ggzAhM-rEUyv_mPLSALQG.wav"></audio>
98
+
99
+ ---
100
+
101
+ ## Why Maya1 is Different: Voice Design Features That Matter
102
+
103
+ ### 1. Natural Language Voice Control
104
+ Describe voices like you would brief a voice actor:
105
+ ```
106
+ <description="40-year-old, warm, low pitch, conversational">
107
+ ```
108
+
109
+ No complex parameters. No per-voice training data. Just describe and generate.
110
+
111
+ ### 2. Inline Emotion Tags for Expressive Speech
112
+ Add emotions exactly where they belong in your text:
113
+ ```
114
+ Our new update <laugh> finally ships with the feature you asked for.
115
+ ```
116
+
117
+ **Supported Emotions:** `<laugh>` `<sigh>` `<whisper>` `<angry>` `<giggle>` `<chuckle>` `<gasp>` `<cry>` and 12+ more.
118
+
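+ You can also enumerate the full tag set programmatically. A minimal sketch (it assumes the tags are registered as the tokenizer's additional special tokens, which the `special_tokens_map.json` in this repo confirms):
+ 
+ ```python
+ from transformers import AutoTokenizer
+ 
+ tok = AutoTokenizer.from_pretrained("maya-research/maya1")
+ # Emotion tags such as <laugh> and <whisper> ship as additional special tokens.
+ emotion_tags = [t for t in tok.additional_special_tokens if t.startswith("<") and t.endswith(">")]
+ print(emotion_tags)
+ ```
+ 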
119
+ ### 3. Streaming Audio Generation
120
+ Real-time voice synthesis with SNAC neural codec (~0.98 kbps). Perfect for:
121
+ - Voice assistants
122
+ - Interactive AI agents
123
+ - Live content generation
124
+ - Game characters
125
+ - Podcasts and audiobooks
126
+
127
+ ### 4. Production-Ready Infrastructure
128
+ - Runs on single GPU
129
+ - vLLM integration for scale
130
+ - Automatic prefix caching for efficiency
131
+ - 24 kHz audio output
132
+ - WebAudio compatible for browser playback
133
+
134
+ ---
135
+
136
+ ## How to Use Maya1: Download and Run in Minutes
137
+
138
+ ### Quick Start: Generate Voice with Emotions
139
+
140
+ ```python
141
+ import torch
142
+ from transformers import AutoModelForCausalLM, AutoTokenizer
143
+ from snac import SNAC
144
+ import soundfile as sf
145
+
146
+ # Load the best open source voice AI model
147
+ model = AutoModelForCausalLM.from_pretrained(
148
+ "maya-research/maya1",
149
+ torch_dtype=torch.bfloat16,
150
+ device_map="auto"
151
+ )
152
+ tokenizer = AutoTokenizer.from_pretrained("maya-research/maya1")
153
+
154
+ # Load SNAC audio decoder (24kHz)
155
+ snac_model = SNAC.from_pretrained("hubertsiuzdak/snac_24khz").eval().to("cuda")
156
+
157
+ # Design your voice with natural language
158
+ description = "Realistic male voice in the 30s age with american accent. Normal pitch, warm timbre, conversational pacing."
159
+ text = "Hello! This is Maya1 <laugh> the best open source voice AI model with emotions."
160
+
161
+ # Create prompt with voice design
162
+ prompt = f'<description="{description}"> {text}'
163
+
164
+ # Generate emotional speech
165
+ inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
166
+ with torch.inference_mode():
167
+ outputs = model.generate(
168
+ **inputs,
169
+ max_new_tokens=500,
170
+ temperature=0.4,
171
+ top_p=0.9,
172
+ do_sample=True
173
+ )
174
+
175
+ # Extract SNAC audio tokens
176
+ generated_ids = outputs[0, inputs['input_ids'].shape[1]:]
177
+ snac_tokens = [t.item() for t in generated_ids if 128266 <= t <= 156937]
178
+
179
+ # Decode SNAC tokens to audio frames
180
+ frames = len(snac_tokens) // 7
181
+ codes = [[], [], []]
182
+ for i in range(frames):
183
+ s = snac_tokens[i*7:(i+1)*7]
184
+ codes[0].append((s[0]-128266) % 4096)
185
+ codes[1].extend([(s[1]-128266) % 4096, (s[4]-128266) % 4096])
186
+ codes[2].extend([(s[2]-128266) % 4096, (s[3]-128266) % 4096, (s[5]-128266) % 4096, (s[6]-128266) % 4096])
187
+
188
+ # Generate final audio with SNAC decoder
189
+ codes_tensor = [torch.tensor(c, dtype=torch.long, device="cuda").unsqueeze(0) for c in codes]
190
+ with torch.inference_mode():
191
+ audio = snac_model.decoder(snac_model.quantizer.from_codes(codes_tensor))[0, 0].cpu().numpy()
192
+
193
+ # Save your emotional voice output
194
+ sf.write("output.wav", audio, 24000)
195
+ print("Voice generated successfully! Play output.wav")
196
+ ```
197
+
198
+ ### Advanced: Production Streaming with vLLM
199
+
200
+ For production deployments with real-time streaming, use our vLLM script:
201
+
202
+ **Download:** [vllm_streaming_inference.py](https://huggingface.co/maya-research/maya1/blob/main/vllm_streaming_inference.py)
203
+
204
+ **Key Features:**
205
+ - Automatic Prefix Caching (APC) for repeated voice descriptions
206
+ - WebAudio ring buffer integration
207
+ - Multi-GPU scaling support
208
+ - Sub-100ms latency for real-time applications
209
+
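+ As a rough illustration of the same pipeline, here is a minimal offline-batch sketch with vLLM. It is not the linked script: there is no ring buffer, chunked decoding, or WebAudio handling here, and everything beyond the model name and the sampling values reused from the Quick Start is an assumption.
+ 
+ ```python
+ # Minimal vLLM sketch: batch-generate SNAC tokens, then decode them exactly as in the Quick Start.
+ from vllm import LLM, SamplingParams
+ 
+ llm = LLM(
+     model="maya-research/maya1",
+     dtype="bfloat16",
+     enable_prefix_caching=True,  # reuse KV cache across requests that share a voice description
+ )
+ 
+ description = "Realistic male voice in the 30s age with american accent. Normal pitch, warm timbre."
+ text = "Hello from a vLLM deployment <laugh> this scales much better."
+ prompt = f'<description="{description}"> {text}'
+ 
+ params = SamplingParams(temperature=0.4, top_p=0.9, max_tokens=2000)
+ outputs = llm.generate([prompt], params)
+ 
+ # Completion token IDs; keep the SNAC range and hand them to the decode loop from the Quick Start.
+ snac_tokens = [t for t in outputs[0].outputs[0].token_ids if 128266 <= t <= 156937]
+ print(f"Generated {len(snac_tokens) // 7} SNAC frames")
+ ```
+ 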
210
+ ---
211
+
212
+ ## Technical Excellence: What Makes Maya1 the Best
213
+
214
+ ### Architecture: 3B-Parameter Llama Backbone for Voice
215
+
216
+ We pretrained a **3B-parameter decoder-only transformer** (Llama-style) to predict **SNAC neural codec tokens** instead of raw waveforms.
217
+
218
+ **The Flow:**
219
+ ```
220
+ <description="..."> text → tokenize → generate SNAC codes (7 tokens/frame) → decode → 24 kHz audio
221
+ ```
222
+
223
+ **Why SNAC?** Multi-scale hierarchical structure (≈12/23/47 Hz) keeps autoregressive sequences compact for real-time streaming at ~0.98 kbps.
224
+
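+ A quick back-of-envelope check on those numbers (a sketch, not from the model card; the 4096-entry codebook size is inferred from the `% 4096` unpacking in the Quick Start):
+ 
+ ```python
+ import math
+ 
+ # SNAC (24 kHz) stacks three codebook levels at roughly 12, 23 and 47 Hz.
+ level_rates_hz = [12, 23, 47]   # codes emitted per second at each level
+ codebook_size = 4096            # assumed: 12 bits per code
+ 
+ codes_per_second = sum(level_rates_hz)                     # ~82 codes/s
+ kbps = codes_per_second * math.log2(codebook_size) / 1000  # ~0.98 kbps
+ tokens_per_second = 7 * level_rates_hz[0]                  # 7 packed tokens per ~12 Hz frame, ~84 LLM tokens/s
+ print(f"~{kbps:.2f} kbps, ~{tokens_per_second} SNAC tokens per second to generate")
+ ```
+ 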
225
+ ### Training Data: What Makes Our Voice AI the Best
226
+
227
+ **Pretraining:** Internet-scale English speech corpus for broad acoustic coverage and natural coarticulation.
228
+
229
+ **Supervised Fine-Tuning:** Proprietary curated dataset of studio recordings with:
230
+ - Human-verified voice descriptions
231
+ - 20+ emotion tags per sample
232
+ - Multi-accent English coverage
233
+ - Character and role variations
234
+
235
+ **Data Pipeline Excellence:**
236
+ 1. 24 kHz mono resampling with -23 LUFS normalization
237
+ 2. VAD silence trimming with duration bounds (1-14s)
238
+ 3. Forced alignment (MFA) for clean phrase boundaries
239
+ 4. MinHash-LSH text deduplication
240
+ 5. Chromaprint audio deduplication
241
+ 6. SNAC encoding with 7-token frame packing
242
+
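+ For the first two steps, a minimal sketch of what such preprocessing can look like (the library choices here, librosa and pyloudnorm, are illustrative assumptions, not necessarily what was used; VAD trimming, alignment, deduplication, and SNAC encoding are omitted):
+ 
+ ```python
+ import librosa
+ import pyloudnorm as pyln
+ import soundfile as sf
+ 
+ def preprocess_clip(in_path: str, out_path: str, sr: int = 24000) -> bool:
+     """Resample to 24 kHz mono, enforce 1-14 s duration, normalize to -23 LUFS."""
+     y, _ = librosa.load(in_path, sr=sr, mono=True)   # step 1: 24 kHz mono resampling
+     duration = len(y) / sr
+     if not (1.0 <= duration <= 14.0):                # step 2: duration bounds (silence trimming omitted)
+         return False
+     meter = pyln.Meter(sr)                           # BS.1770 loudness meter
+     y = pyln.normalize.loudness(y, meter.integrated_loudness(y), -23.0)  # step 1: -23 LUFS target
+     sf.write(out_path, y, sr)
+     return True
+ ```
+ 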
243
+ ### Voice Design Experiments: Why Natural Language Won
244
+
245
+ We tested 4 conditioning formats. Only one delivered production-quality results:
246
+
247
+ **❌ Colon format:** `{description}: {text}` - Format drift; the model sometimes read the description aloud
248
+
249
+ **❌ Angle-list attributes:** `<{age}, {pitch}, {character}>` - Too rigid, poor generalization
250
+
251
+ **❌ Key-value tags:** `<age=40><pitch=low>` - Token bloat, brittle to mistakes
252
+
253
+ **✅ XML-attribute (WINNER):** `<description="40-yr old, low-pitch, warm">` - Natural language, robust, scalable
254
+
255
+ ---
256
+
257
+ ## Use Cases
258
+
259
+ ### Game Character Voices
260
+ Generate unique character voices with emotions on-the-fly. No voice actor recording sessions.
261
+
262
+ ### Podcast & Audiobook Production
263
+ Narrate content with emotional range and consistent personas across hours of audio.
264
+
265
+ ### AI Voice Assistants
266
+ Build conversational agents with natural emotional responses in real-time.
267
+
268
+ ### Video Content Creation
269
+ Create voiceovers for YouTube, TikTok, and social media with expressive delivery.
270
+
271
+ ### Customer Service AI
272
+ Deploy empathetic voice bots that understand context and respond with appropriate emotions.
273
+
274
+ ### Accessibility Tools
275
+ Build screen readers and assistive technologies with natural, engaging voices.
276
+
277
+ ---
278
+
279
+ ## Frequently Asked Questions
280
+
281
+ **Q: What makes Maya1 different?**
282
+ A: We're the only open source model offering 20+ emotions, zero-shot voice design, production-ready streaming, and 3B parameters—all in one package.
283
+
284
+ **Q: Can I use this commercially?**
285
+ A: Absolutely. Apache 2.0 license. Build products, deploy services, monetize freely.
286
+
287
+ **Q: What languages does it support?**
288
+ A: Currently English with multi-accent support. Future models will expand to languages and accents underserved by mainstream voice AI.
289
+
290
+ **Q: How does it compare to ElevenLabs, Murf.ai, or other closed-source tools?**
291
+ A: Feature parity with emotions and voice design. Advantage: you own the deployment, pay no per-second fees, and can customize the model.
292
+
293
+ **Q: Can I fine-tune on my own voices?**
294
+ A: Yes. The model architecture supports fine-tuning on custom datasets for specialized voices.
295
+
296
+ **Q: What GPU do I need?**
297
+ A: Single GPU with 16GB+ VRAM (A100, H100, or consumer RTX 4090).
298
+
299
+ **Q: Is streaming really real-time?**
300
+ A: Yes. SNAC codec enables sub-100ms latency with vLLM deployment.
301
+
302
+ ---
303
+
304
+ ## Comparison
305
+
306
+ | Feature | Maya1 | ElevenLabs | OpenAI TTS | Coqui TTS |
307
+ |---------|-------------|------------|------------|-----------|
308
+ | **Open Source** | Yes | No | No | Yes |
309
+ | **Emotions** | 20+ | Limited | No | No |
310
+ | **Voice Design** | Natural Language | Voice Library | Fixed | Complex |
311
+ | **Streaming** | Real-time | Yes | Yes | No |
312
+ | **Cost** | Free | Pay-per-use | Pay-per-use | Free |
313
+ | **Customization** | Full | Limited | None | Moderate |
314
+ | **Parameters** | 3B | Unknown | Unknown | <1B |
315
+
316
+ ---
317
+
318
+ ## Model Metadata
319
+
320
+ **Developed by:** Maya Research
321
+ **Website:** [mayaresearch.ai](https://mayaresearch.ai)
322
+ **Backed by:** South Park Commons
323
+ **Model Type:** Text-to-Speech, Emotional Voice Synthesis, Voice Design AI
324
+ **Language:** English (Multi-accent)
325
+ **Architecture:** 3B-parameter Llama-style transformer with SNAC codec
326
+ **License:** Apache 2.0 (Fully Open Source)
327
+ **Training Data:** Proprietary curated + Internet-scale pretraining
328
+ **Audio Quality:** 24 kHz, mono, ~0.98 kbps streaming
329
+ **Inference:** vLLM compatible, single GPU deployment
330
+ **Status:** Production-ready (November 2025)
331
+
332
+ ---
333
+
334
+ ## Getting Started
335
+
336
+ ### Hugging Face Model Hub
337
+ ```bash
+ # Clone the model repository
+ git lfs install
+ git clone https://huggingface.co/maya-research/maya1
+ ```
+ 
+ ```python
+ # Or load directly in Python
+ from transformers import AutoModelForCausalLM
+ model = AutoModelForCausalLM.from_pretrained("maya-research/maya1")
+ ```
346
+
347
+ ### Requirements
348
+ ```bash
349
+ pip install torch transformers snac soundfile
350
+ ```
351
+
352
+ ### Additional Resources
353
+ - **Full emotion list:** [emotions.txt](https://huggingface.co/maya-research/maya1/blob/main/emotions.txt)
354
+ - **Prompt examples:** [prompt.txt](https://huggingface.co/maya-research/maya1/blob/main/prompt.txt)
355
+ - **Streaming script:** [vllm_streaming_inference.py](https://huggingface.co/maya-research/maya1/blob/main/vllm_streaming_inference.py)
356
+
357
+ ---
358
+
359
+ ## Citations & References
360
+
361
+ If you use Maya1 in your research or product, please cite:
362
+
363
+ ```bibtex
364
+ @misc{maya1voice2025,
365
+ title={Maya1: Open Source Voice AI with Emotional Intelligence},
366
+ author={Maya Research},
367
+ year={2025},
368
+ publisher={Hugging Face},
369
+ howpublished={\url{https://huggingface.co/maya-research/maya1}},
370
+ }
371
+ ```
372
+
373
+ **Key Technologies:**
374
+ - SNAC Neural Audio Codec: https://github.com/hubertsiuzdak/snac
375
+ - Mimi Adversarial Codec: https://huggingface.co/kyutai/mimi
376
+ - vLLM Inference Engine: https://docs.vllm.ai/
377
+
378
+ ---
379
+
380
+ ## Why We Build Open Source Voice AI
381
+
382
+ Voice AI will be everywhere, but it's fundamentally broken for 90% of the world. Current voice models only work well for a narrow slice of English speakers because training data for most accents, languages, and speaking styles simply doesn't exist.
383
+
384
+ **Maya Research** builds emotionally intelligent, native voice models that finally let the rest of the world speak. We're open source because we believe voice intelligence should not be a privilege reserved for the few.
385
+
386
+ **Technology should be open** - The best voice AI tools should not be locked behind proprietary APIs charging per-second fees.
387
+
388
+ **Community drives innovation** - Open source accelerates research. When developers worldwide can build on our work, everyone wins.
389
+
390
+ **Voice intelligence for everyone** - We're building for the 90% of the world ignored by mainstream voice AI. That requires open models, not closed platforms.
391
+
392
+ ---
393
+
394
+ **Maya Research** - Building voice intelligence for the 90% of the world left behind by mainstream AI.
395
+
396
+ **Website:** [mayaresearch.ai](https://mayaresearch.ai)
397
+ **Twitter/X:** [@mayaresearch_ai](https://x.com/mayaresearch_ai)
398
+ **Hugging Face:** [maya-research](https://huggingface.co/maya-research)
399
+ **Backed by:** South Park Commons
400
+
401
+ **License:** Apache 2.0
402
+ **Mission:** Emotionally intelligent voice models that finally let everyone speak
chat_template.jinja ADDED
@@ -0,0 +1,93 @@
1
+ {{- bos_token }}
2
+ {%- if custom_tools is defined %}
3
+ {%- set tools = custom_tools %}
4
+ {%- endif %}
5
+ {%- if not tools_in_user_message is defined %}
6
+ {%- set tools_in_user_message = true %}
7
+ {%- endif %}
8
+ {%- if not date_string is defined %}
9
+ {%- if strftime_now is defined %}
10
+ {%- set date_string = strftime_now("%d %b %Y") %}
11
+ {%- else %}
12
+ {%- set date_string = "26 Jul 2024" %}
13
+ {%- endif %}
14
+ {%- endif %}
15
+ {%- if not tools is defined %}
16
+ {%- set tools = none %}
17
+ {%- endif %}
18
+
19
+ {#- This block extracts the system message, so we can slot it into the right place. #}
20
+ {%- if messages[0]['role'] == 'system' %}
21
+ {%- set system_message = messages[0]['content']|trim %}
22
+ {%- set messages = messages[1:] %}
23
+ {%- else %}
24
+ {%- set system_message = "" %}
25
+ {%- endif %}
26
+
27
+ {#- System message #}
28
+ {{- "<|start_header_id|>system<|end_header_id|>\n\n" }}
29
+ {%- if tools is not none %}
30
+ {{- "Environment: ipython\n" }}
31
+ {%- endif %}
32
+ {{- "Cutting Knowledge Date: December 2023\n" }}
33
+ {{- "Today Date: " + date_string + "\n\n" }}
34
+ {%- if tools is not none and not tools_in_user_message %}
35
+ {{- "You have access to the following functions. To call a function, please respond with JSON for a function call." }}
36
+ {{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' }}
37
+ {{- "Do not use variables.\n\n" }}
38
+ {%- for t in tools %}
39
+ {{- t | tojson(indent=4) }}
40
+ {{- "\n\n" }}
41
+ {%- endfor %}
42
+ {%- endif %}
43
+ {{- system_message }}
44
+ {{- "<|eot_id|>" }}
45
+
46
+ {#- Custom tools are passed in a user message with some extra guidance #}
47
+ {%- if tools_in_user_message and not tools is none %}
48
+ {#- Extract the first user message so we can plug it in here #}
49
+ {%- if messages | length != 0 %}
50
+ {%- set first_user_message = messages[0]['content']|trim %}
51
+ {%- set messages = messages[1:] %}
52
+ {%- else %}
53
+ {{- raise_exception("Cannot put tools in the first user message when there's no first user message!") }}
54
+ {%- endif %}
55
+ {{- '<|start_header_id|>user<|end_header_id|>\n\n' -}}
56
+ {{- "Given the following functions, please respond with a JSON for a function call " }}
57
+ {{- "with its proper arguments that best answers the given prompt.\n\n" }}
58
+ {{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' }}
59
+ {{- "Do not use variables.\n\n" }}
60
+ {%- for t in tools %}
61
+ {{- t | tojson(indent=4) }}
62
+ {{- "\n\n" }}
63
+ {%- endfor %}
64
+ {{- first_user_message + "<|eot_id|>"}}
65
+ {%- endif %}
66
+
67
+ {%- for message in messages %}
68
+ {%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}
69
+ {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' }}
70
+ {%- elif 'tool_calls' in message %}
71
+ {%- if not message.tool_calls|length == 1 %}
72
+ {{- raise_exception("This model only supports single tool-calls at once!") }}
73
+ {%- endif %}
74
+ {%- set tool_call = message.tool_calls[0].function %}
75
+ {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' -}}
76
+ {{- '{"name": "' + tool_call.name + '", ' }}
77
+ {{- '"parameters": ' }}
78
+ {{- tool_call.arguments | tojson }}
79
+ {{- "}" }}
80
+ {{- "<|eot_id|>" }}
81
+ {%- elif message.role == "tool" or message.role == "ipython" %}
82
+ {{- "<|start_header_id|>ipython<|end_header_id|>\n\n" }}
83
+ {%- if message.content is mapping or message.content is iterable %}
84
+ {{- message.content | tojson }}
85
+ {%- else %}
86
+ {{- message.content }}
87
+ {%- endif %}
88
+ {{- "<|eot_id|>" }}
89
+ {%- endif %}
90
+ {%- endfor %}
91
+ {%- if add_generation_prompt %}
92
+ {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' }}
93
+ {%- endif %}
config.json ADDED
@@ -0,0 +1,36 @@
1
+ {
2
+ "architectures": [
3
+ "LlamaForCausalLM"
4
+ ],
5
+ "attention_bias": false,
6
+ "attention_dropout": 0.0,
7
+ "bos_token_id": 128000,
8
+ "dtype": "bfloat16",
9
+ "eos_token_id": 128009,
10
+ "head_dim": 128,
11
+ "hidden_act": "silu",
12
+ "hidden_size": 3072,
13
+ "initializer_range": 0.02,
14
+ "intermediate_size": 8192,
15
+ "max_position_embeddings": 131072,
16
+ "mlp_bias": false,
17
+ "model_type": "llama",
18
+ "num_attention_heads": 24,
19
+ "num_hidden_layers": 28,
20
+ "num_key_value_heads": 8,
21
+ "pad_token_id": 128263,
22
+ "pretraining_tp": 1,
23
+ "rms_norm_eps": 1e-05,
24
+ "rope_scaling": {
25
+ "factor": 32.0,
26
+ "high_freq_factor": 4.0,
27
+ "low_freq_factor": 1.0,
28
+ "original_max_position_embeddings": 8192,
29
+ "rope_type": "llama3"
30
+ },
31
+ "rope_theta": 500000.0,
32
+ "tie_word_embeddings": true,
33
+ "transformers_version": "4.57.1",
34
+ "use_cache": false,
35
+ "vocab_size": 156960
36
+ }
emotions.txt ADDED
@@ -0,0 +1,17 @@
1
+ <laugh>
2
+ <laugh_harder>
3
+ <sigh>
4
+ <chuckle>
5
+ <gasp>
6
+ <angry>
7
+ <excited>
8
+ <whisper>
9
+ <cry>
10
+ <scream>
11
+ <sing>
12
+ <snort>
13
+ <exhale>
14
+ <gulp>
15
+ <giggle>
16
+ <sarcastic>
17
+ <curious>
generation_config.json ADDED
@@ -0,0 +1,13 @@
1
+ {
2
+ "_from_model_config": true,
3
+ "bos_token_id": 128000,
4
+ "do_sample": true,
5
+ "eos_token_id": [
6
+ 128009,
7
+ 128258
8
+ ],
9
+ "pad_token_id": 128263,
10
+ "temperature": 0.6,
11
+ "top_p": 0.9,
12
+ "transformers_version": "4.57.1"
13
+ }
model-00001-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f1dae409f70c5beb92916662c6bc389b9b235ac8aa5edd19a4dcb87e37a73074
3
+ size 4991160848
model-00002-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:df22e9a90c1bea262250982640b119e6020474736991da482cb6ed56dd23d045
3
+ size 1610725592
model.safetensors.index.json ADDED
@@ -0,0 +1,262 @@
1
+ {
2
+ "metadata": {
3
+ "total_parameters": 3300928512,
4
+ "total_size": 6601857024
5
+ },
6
+ "weight_map": {
7
+ "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
8
+ "model.layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors",
9
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
10
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
11
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
12
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
13
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
14
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
15
+ "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
16
+ "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
17
+ "model.layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
18
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
19
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
20
+ "model.layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
21
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
22
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
23
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
24
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
25
+ "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
26
+ "model.layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
27
+ "model.layers.10.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
28
+ "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
29
+ "model.layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
30
+ "model.layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
31
+ "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
32
+ "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
33
+ "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
34
+ "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
35
+ "model.layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors",
36
+ "model.layers.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
37
+ "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
38
+ "model.layers.11.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
39
+ "model.layers.11.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
40
+ "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
41
+ "model.layers.11.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
42
+ "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
43
+ "model.layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
44
+ "model.layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
45
+ "model.layers.12.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
46
+ "model.layers.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
47
+ "model.layers.12.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
48
+ "model.layers.12.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
49
+ "model.layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
50
+ "model.layers.12.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
51
+ "model.layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
52
+ "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
53
+ "model.layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
54
+ "model.layers.13.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
55
+ "model.layers.13.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
56
+ "model.layers.13.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
57
+ "model.layers.13.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
58
+ "model.layers.13.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
59
+ "model.layers.13.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
60
+ "model.layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
61
+ "model.layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
62
+ "model.layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
63
+ "model.layers.14.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
64
+ "model.layers.14.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
65
+ "model.layers.14.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
66
+ "model.layers.14.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
67
+ "model.layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
68
+ "model.layers.14.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
69
+ "model.layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
70
+ "model.layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
71
+ "model.layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
72
+ "model.layers.15.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
73
+ "model.layers.15.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
74
+ "model.layers.15.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
75
+ "model.layers.15.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
76
+ "model.layers.15.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
77
+ "model.layers.15.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
78
+ "model.layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
79
+ "model.layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
80
+ "model.layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
81
+ "model.layers.16.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
82
+ "model.layers.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
83
+ "model.layers.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
84
+ "model.layers.16.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
85
+ "model.layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
86
+ "model.layers.16.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
87
+ "model.layers.16.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
88
+ "model.layers.16.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
89
+ "model.layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
90
+ "model.layers.17.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
91
+ "model.layers.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
92
+ "model.layers.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
93
+ "model.layers.17.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
94
+ "model.layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
95
+ "model.layers.17.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
96
+ "model.layers.17.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
97
+ "model.layers.17.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
98
+ "model.layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors",
99
+ "model.layers.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
100
+ "model.layers.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
101
+ "model.layers.18.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
102
+ "model.layers.18.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
103
+ "model.layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
104
+ "model.layers.18.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
105
+ "model.layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
106
+ "model.layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
107
+ "model.layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors",
108
+ "model.layers.19.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
109
+ "model.layers.19.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
110
+ "model.layers.19.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
111
+ "model.layers.19.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
112
+ "model.layers.19.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
113
+ "model.layers.19.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
114
+ "model.layers.19.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
115
+ "model.layers.19.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
116
+ "model.layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
117
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
118
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
119
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
120
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
121
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
122
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
123
+ "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
124
+ "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
125
+ "model.layers.20.input_layernorm.weight": "model-00002-of-00002.safetensors",
126
+ "model.layers.20.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
127
+ "model.layers.20.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
128
+ "model.layers.20.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
129
+ "model.layers.20.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
130
+ "model.layers.20.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
131
+ "model.layers.20.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
132
+ "model.layers.20.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
133
+ "model.layers.20.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
134
+ "model.layers.21.input_layernorm.weight": "model-00002-of-00002.safetensors",
135
+ "model.layers.21.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
136
+ "model.layers.21.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
137
+ "model.layers.21.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
138
+ "model.layers.21.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
139
+ "model.layers.21.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
140
+ "model.layers.21.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
141
+ "model.layers.21.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
142
+ "model.layers.21.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
143
+ "model.layers.22.input_layernorm.weight": "model-00002-of-00002.safetensors",
144
+ "model.layers.22.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
145
+ "model.layers.22.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
146
+ "model.layers.22.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
147
+ "model.layers.22.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
148
+ "model.layers.22.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
149
+ "model.layers.22.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
150
+ "model.layers.22.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
151
+ "model.layers.22.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
152
+ "model.layers.23.input_layernorm.weight": "model-00002-of-00002.safetensors",
153
+ "model.layers.23.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
154
+ "model.layers.23.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
155
+ "model.layers.23.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
156
+ "model.layers.23.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
157
+ "model.layers.23.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
158
+ "model.layers.23.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
159
+ "model.layers.23.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
160
+ "model.layers.23.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
161
+ "model.layers.24.input_layernorm.weight": "model-00002-of-00002.safetensors",
162
+ "model.layers.24.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
163
+ "model.layers.24.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
164
+ "model.layers.24.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
165
+ "model.layers.24.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
166
+ "model.layers.24.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
167
+ "model.layers.24.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
168
+ "model.layers.24.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
169
+ "model.layers.24.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
170
+ "model.layers.25.input_layernorm.weight": "model-00002-of-00002.safetensors",
171
+ "model.layers.25.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
172
+ "model.layers.25.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
173
+ "model.layers.25.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
174
+ "model.layers.25.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
175
+ "model.layers.25.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
176
+ "model.layers.25.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
177
+ "model.layers.25.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
178
+ "model.layers.25.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
179
+ "model.layers.26.input_layernorm.weight": "model-00002-of-00002.safetensors",
180
+ "model.layers.26.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
181
+ "model.layers.26.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
182
+ "model.layers.26.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
183
+ "model.layers.26.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
184
+ "model.layers.26.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
185
+ "model.layers.26.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
186
+ "model.layers.26.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
187
+ "model.layers.26.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
188
+ "model.layers.27.input_layernorm.weight": "model-00002-of-00002.safetensors",
189
+ "model.layers.27.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
190
+ "model.layers.27.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
191
+ "model.layers.27.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
192
+ "model.layers.27.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
193
+ "model.layers.27.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
194
+ "model.layers.27.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
195
+ "model.layers.27.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
196
+ "model.layers.27.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
197
+ "model.layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors",
198
+ "model.layers.3.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
199
+ "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
200
+ "model.layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
201
+ "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
202
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
203
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
204
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
205
+ "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
206
+ "model.layers.4.input_layernorm.weight": "model-00001-of-00002.safetensors",
207
+ "model.layers.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
208
+ "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
209
+ "model.layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
210
+ "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
211
+ "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
212
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
213
+ "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
214
+ "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
215
+ "model.layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors",
216
+ "model.layers.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
217
+ "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
218
+ "model.layers.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
219
+ "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
220
+ "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
221
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
222
+ "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
223
+ "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
224
+ "model.layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors",
225
+ "model.layers.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
226
+ "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
227
+ "model.layers.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
228
+ "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
229
+ "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
230
+ "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
231
+ "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
232
+ "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
233
+ "model.layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors",
234
+ "model.layers.7.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
235
+ "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
236
+ "model.layers.7.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
237
+ "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
238
+ "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
239
+ "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
240
+ "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
241
+ "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
242
+ "model.layers.8.input_layernorm.weight": "model-00001-of-00002.safetensors",
243
+ "model.layers.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
244
+ "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
245
+ "model.layers.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
246
+ "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
247
+ "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
248
+ "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
249
+ "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
250
+ "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
251
+ "model.layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors",
252
+ "model.layers.9.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
253
+ "model.layers.9.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
254
+ "model.layers.9.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
255
+ "model.layers.9.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
256
+ "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
257
+ "model.layers.9.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
258
+ "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
259
+ "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
260
+ "model.norm.weight": "model-00002-of-00002.safetensors"
261
+ }
262
+ }
prompt.txt ADDED
@@ -0,0 +1,97 @@
1
+ # TTS Voice Design Description
2
+
3
+ ## Core Function
4
+
5
+ You generate voice descriptions for TTS systems by mapping user requests to allowed attributes. No templates. No formatting rules. Just natural descriptions using the options below.
6
+
7
+ ## Voice Categories
8
+
9
+ **Realistic Voices**
10
+ Professional, business, educational, support, real-world scenarios (podcast hosts, instructors, customer service).
11
+
12
+ **Creative Voices**
13
+ Fantasy characters, fictional personas, stylized voices (pirates, robots, villains, anime).
14
+
15
+ ---
16
+
17
+ ## Available Attributes
18
+
19
+ ### Age
20
+ - `20s`, `30s`, `40s`
21
+
22
+ ### Gender
23
+ - `male`, `female`
24
+
25
+ ### Accent
26
+ - `american`, `indian`, `middle_eastern`, `asian_american`, `british`
27
+
28
+ ### Pitch
29
+ - `low`, `normal`, `high`
30
+ - **Constraint:** For 40s age, avoid high pitch (use sparingly, max 15%)
31
+
32
+ ### Timbre
33
+
34
+ **For Realistic:**
35
+ `deep`, `warm`, `gravelly`, `smooth`, `raspy`, `nasally`, `throaty`, `harsh`
36
+
37
+ **For Creative:**
38
+ All realistic options PLUS `robotic`, `ethereal`
39
+ - **Constraint:** `robotic`/`ethereal` only with: `ai_machine_voice`, `cyborg`, `alien_scifi`, `mythical_godlike_magical`
40
+
41
+ ### Pacing
42
+ - `very_slow`, `slow`, `conversational`, `brisk`, `fast`, `very_fast`
43
+ - **Character-specific overrides:**
44
+ - `mafia`: slow or conversational only
45
+ - `flirty`: slow or conversational only
46
+ - `alpha`: fast or very_fast only
47
+ - `seductively`: very_slow or slow only
48
+
49
+ ### Emotion
50
+ - `neutral`, `energetic`, `excited`, `sad`, `sarcastic`, `dry`
51
+ - **Default to neutral** for most requests
52
+
53
+ ### Emotion Intensity
54
+ - `low`, `med`, `high`
55
+
56
+ ---
57
+
58
+ ## Realistic-Only Attributes
59
+
60
+ ### Domain
61
+ `social_content`, `podcast`, `commercial`, `education`, `support`, `entertainment`, `corporate`, `viral_content`
62
+
63
+ ### Speaking Role (matches domain)
64
+ - **social_content:** youtube_vlogger, social_media_creator, influencer_voice, streamer_companion
65
+ - **podcast:** podcast_host, interviewer
66
+ - **commercial:** ad_narrator, brand_spokesperson, product_demo_voice, sales_pitch_voice
67
+ - **education:** elearning_instructor, kids_story_voice
68
+ - **support:** customer_support_agent, virtual_receptionist, healthcare_assistant
69
+ - **entertainment:** storyteller, social_media_reaction, meme_voice
70
+ - **corporate:** explainer_video_voice, event_host, corporate_training_narrator
71
+ - **viral_content:** short_form_narrator, meme_voice
72
+
73
+ ### Register
74
+ - `formal`, `neutral`, `casual`
75
+
76
+ ---
77
+
78
+ ## Creative-Only Attributes
79
+
80
+ ### Character
81
+ `animated_cartoon`, `ai_machine_voice`, `alien_scifi`, `seductively`, `flirty`, `anime`, `cyborg`, `pirate`, `dark_villain`, `demon`, `gangster`, `mafia`, `dramatic_narrator`, `mythical_godlike_magical`, `spy`, `vampire`, `alpha`
82
+
83
+ ---
84
+
85
+ ## Output Guidelines
86
+
87
+ When a user requests a voice, describe it naturally using the appropriate attributes from above. Apply constraints where specified. Choose defaults when attributes aren't mentioned.
88
+
89
+ **Example mapping:**
90
+ - "professional podcast host" → realistic male, 30s, american accent, warm timbre, conversational pacing, podcast domain
91
+ - "AI robot voice" → creative, ai_machine_voice character, robotic timbre
92
+ - "young excited instructor" → realistic, 20s, energetic emotion, education domain
93
+
94
+
95
+ Few deterministic and verbose descriptions:
96
+ - Realistic male voice in the 30s age with a american accent. Normal pitch, warm timbre, conversational pacing, neutral tone delivery at med intensity, podcast Domain, podcast_host role, neutral delivery
97
+ - Creative, ai_machine_voice character. Male voice in their 20s with a american accent. Normal pitch, robotic timbre, conversational pacing, neutral tone at med intensity.
special_tokens_map.json ADDED
@@ -0,0 +1,165 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ {
4
+ "content": "<angry>",
5
+ "lstrip": false,
6
+ "normalized": false,
7
+ "rstrip": false,
8
+ "single_word": false
9
+ },
10
+ {
11
+ "content": "<appalled>",
12
+ "lstrip": false,
13
+ "normalized": false,
14
+ "rstrip": false,
15
+ "single_word": false
16
+ },
17
+ {
18
+ "content": "<chuckle>",
19
+ "lstrip": false,
20
+ "normalized": false,
21
+ "rstrip": false,
22
+ "single_word": false
23
+ },
24
+ {
25
+ "content": "<cry>",
26
+ "lstrip": false,
27
+ "normalized": false,
28
+ "rstrip": false,
29
+ "single_word": false
30
+ },
31
+ {
32
+ "content": "<curious>",
33
+ "lstrip": false,
34
+ "normalized": false,
35
+ "rstrip": false,
36
+ "single_word": false
37
+ },
38
+ {
39
+ "content": "<disappointed>",
40
+ "lstrip": false,
41
+ "normalized": false,
42
+ "rstrip": false,
43
+ "single_word": false
44
+ },
45
+ {
46
+ "content": "<excited>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false
51
+ },
52
+ {
53
+ "content": "<exhale>",
54
+ "lstrip": false,
55
+ "normalized": false,
56
+ "rstrip": false,
57
+ "single_word": false
58
+ },
59
+ {
60
+ "content": "<gasp>",
61
+ "lstrip": false,
62
+ "normalized": false,
63
+ "rstrip": false,
64
+ "single_word": false
65
+ },
66
+ {
67
+ "content": "<giggle>",
68
+ "lstrip": false,
69
+ "normalized": false,
70
+ "rstrip": false,
71
+ "single_word": false
72
+ },
73
+ {
74
+ "content": "<gulp>",
75
+ "lstrip": false,
76
+ "normalized": false,
77
+ "rstrip": false,
78
+ "single_word": false
79
+ },
80
+ {
81
+ "content": "<laugh>",
82
+ "lstrip": false,
83
+ "normalized": false,
84
+ "rstrip": false,
85
+ "single_word": false
86
+ },
87
+ {
88
+ "content": "<laugh_harder>",
89
+ "lstrip": false,
90
+ "normalized": false,
91
+ "rstrip": false,
92
+ "single_word": false
93
+ },
94
+ {
95
+ "content": "<mischievous>",
96
+ "lstrip": false,
97
+ "normalized": false,
98
+ "rstrip": false,
99
+ "single_word": false
100
+ },
101
+ {
102
+ "content": "<sarcastic>",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": false,
106
+ "single_word": false
107
+ },
108
+ {
109
+ "content": "<scream>",
110
+ "lstrip": false,
111
+ "normalized": false,
112
+ "rstrip": false,
113
+ "single_word": false
114
+ },
115
+ {
116
+ "content": "<sigh>",
117
+ "lstrip": false,
118
+ "normalized": false,
119
+ "rstrip": false,
120
+ "single_word": false
121
+ },
122
+ {
123
+ "content": "<sing>",
124
+ "lstrip": false,
125
+ "normalized": false,
126
+ "rstrip": false,
127
+ "single_word": false
128
+ },
129
+ {
130
+ "content": "<snort>",
131
+ "lstrip": false,
132
+ "normalized": false,
133
+ "rstrip": false,
134
+ "single_word": false
135
+ },
136
+ {
137
+ "content": "<whisper>",
138
+ "lstrip": false,
139
+ "normalized": false,
140
+ "rstrip": false,
141
+ "single_word": false
142
+ }
143
+ ],
144
+ "bos_token": {
145
+ "content": "<|begin_of_text|>",
146
+ "lstrip": false,
147
+ "normalized": false,
148
+ "rstrip": false,
149
+ "single_word": false
150
+ },
151
+ "eos_token": {
152
+ "content": "<|eot_id|>",
153
+ "lstrip": false,
154
+ "normalized": false,
155
+ "rstrip": false,
156
+ "single_word": false
157
+ },
158
+ "pad_token": {
159
+ "content": "<custom_token_7>",
160
+ "lstrip": false,
161
+ "normalized": true,
162
+ "rstrip": false,
163
+ "single_word": false
164
+ }
165
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6c5e5b1d89b7e3738e5a5a4f93c326d8f3292ea83f9c560b8dbb6d66fb851973
3
+ size 22853258
tokenizer/chat_template.jinja ADDED
@@ -0,0 +1,93 @@
1
+ {{- bos_token }}
2
+ {%- if custom_tools is defined %}
3
+ {%- set tools = custom_tools %}
4
+ {%- endif %}
5
+ {%- if not tools_in_user_message is defined %}
6
+ {%- set tools_in_user_message = true %}
7
+ {%- endif %}
8
+ {%- if not date_string is defined %}
9
+ {%- if strftime_now is defined %}
10
+ {%- set date_string = strftime_now("%d %b %Y") %}
11
+ {%- else %}
12
+ {%- set date_string = "26 Jul 2024" %}
13
+ {%- endif %}
14
+ {%- endif %}
15
+ {%- if not tools is defined %}
16
+ {%- set tools = none %}
17
+ {%- endif %}
18
+
19
+ {#- This block extracts the system message, so we can slot it into the right place. #}
20
+ {%- if messages[0]['role'] == 'system' %}
21
+ {%- set system_message = messages[0]['content']|trim %}
22
+ {%- set messages = messages[1:] %}
23
+ {%- else %}
24
+ {%- set system_message = "" %}
25
+ {%- endif %}
26
+
27
+ {#- System message #}
28
+ {{- "<|start_header_id|>system<|end_header_id|>\n\n" }}
29
+ {%- if tools is not none %}
30
+ {{- "Environment: ipython\n" }}
31
+ {%- endif %}
32
+ {{- "Cutting Knowledge Date: December 2023\n" }}
33
+ {{- "Today Date: " + date_string + "\n\n" }}
34
+ {%- if tools is not none and not tools_in_user_message %}
35
+ {{- "You have access to the following functions. To call a function, please respond with JSON for a function call." }}
36
+ {{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' }}
37
+ {{- "Do not use variables.\n\n" }}
38
+ {%- for t in tools %}
39
+ {{- t | tojson(indent=4) }}
40
+ {{- "\n\n" }}
41
+ {%- endfor %}
42
+ {%- endif %}
43
+ {{- system_message }}
44
+ {{- "<|eot_id|>" }}
45
+
46
+ {#- Custom tools are passed in a user message with some extra guidance #}
47
+ {%- if tools_in_user_message and not tools is none %}
48
+ {#- Extract the first user message so we can plug it in here #}
49
+ {%- if messages | length != 0 %}
50
+ {%- set first_user_message = messages[0]['content']|trim %}
51
+ {%- set messages = messages[1:] %}
52
+ {%- else %}
53
+ {{- raise_exception("Cannot put tools in the first user message when there's no first user message!") }}
54
+ {%- endif %}
55
+ {{- '<|start_header_id|>user<|end_header_id|>\n\n' -}}
56
+ {{- "Given the following functions, please respond with a JSON for a function call " }}
57
+ {{- "with its proper arguments that best answers the given prompt.\n\n" }}
58
+ {{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' }}
59
+ {{- "Do not use variables.\n\n" }}
60
+ {%- for t in tools %}
61
+ {{- t | tojson(indent=4) }}
62
+ {{- "\n\n" }}
63
+ {%- endfor %}
64
+ {{- first_user_message + "<|eot_id|>"}}
65
+ {%- endif %}
66
+
67
+ {%- for message in messages %}
68
+ {%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}
69
+ {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' }}
70
+ {%- elif 'tool_calls' in message %}
71
+ {%- if not message.tool_calls|length == 1 %}
72
+ {{- raise_exception("This model only supports single tool-calls at once!") }}
73
+ {%- endif %}
74
+ {%- set tool_call = message.tool_calls[0].function %}
75
+ {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' -}}
76
+ {{- '{"name": "' + tool_call.name + '", ' }}
77
+ {{- '"parameters": ' }}
78
+ {{- tool_call.arguments | tojson }}
79
+ {{- "}" }}
80
+ {{- "<|eot_id|>" }}
81
+ {%- elif message.role == "tool" or message.role == "ipython" %}
82
+ {{- "<|start_header_id|>ipython<|end_header_id|>\n\n" }}
83
+ {%- if message.content is mapping or message.content is iterable %}
84
+ {{- message.content | tojson }}
85
+ {%- else %}
86
+ {{- message.content }}
87
+ {%- endif %}
88
+ {{- "<|eot_id|>" }}
89
+ {%- endif %}
90
+ {%- endfor %}
91
+ {%- if add_generation_prompt %}
92
+ {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' }}
93
+ {%- endif %}
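
The template above follows the Llama-3 chat format. A small illustrative sketch of rendering it with `transformers` — the model path and the description/text strings are placeholders:

```python
# Illustrative sketch; the path and the prompt content are placeholders.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("/path/to/maya1")
messages = [{
    "role": "user",
    "content": '<description="Female voice in her 30s with an American accent."> Hello <excited> world!',
}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # system header with date, then the user turn, then the assistant header
```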
tokenizer/special_tokens_map.json ADDED
@@ -0,0 +1,165 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ {
4
+ "content": "<angry>",
5
+ "lstrip": false,
6
+ "normalized": false,
7
+ "rstrip": false,
8
+ "single_word": false
9
+ },
10
+ {
11
+ "content": "<appalled>",
12
+ "lstrip": false,
13
+ "normalized": false,
14
+ "rstrip": false,
15
+ "single_word": false
16
+ },
17
+ {
18
+ "content": "<chuckle>",
19
+ "lstrip": false,
20
+ "normalized": false,
21
+ "rstrip": false,
22
+ "single_word": false
23
+ },
24
+ {
25
+ "content": "<cry>",
26
+ "lstrip": false,
27
+ "normalized": false,
28
+ "rstrip": false,
29
+ "single_word": false
30
+ },
31
+ {
32
+ "content": "<curious>",
33
+ "lstrip": false,
34
+ "normalized": false,
35
+ "rstrip": false,
36
+ "single_word": false
37
+ },
38
+ {
39
+ "content": "<disappointed>",
40
+ "lstrip": false,
41
+ "normalized": false,
42
+ "rstrip": false,
43
+ "single_word": false
44
+ },
45
+ {
46
+ "content": "<excited>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false
51
+ },
52
+ {
53
+ "content": "<exhale>",
54
+ "lstrip": false,
55
+ "normalized": false,
56
+ "rstrip": false,
57
+ "single_word": false
58
+ },
59
+ {
60
+ "content": "<gasp>",
61
+ "lstrip": false,
62
+ "normalized": false,
63
+ "rstrip": false,
64
+ "single_word": false
65
+ },
66
+ {
67
+ "content": "<giggle>",
68
+ "lstrip": false,
69
+ "normalized": false,
70
+ "rstrip": false,
71
+ "single_word": false
72
+ },
73
+ {
74
+ "content": "<gulp>",
75
+ "lstrip": false,
76
+ "normalized": false,
77
+ "rstrip": false,
78
+ "single_word": false
79
+ },
80
+ {
81
+ "content": "<laugh>",
82
+ "lstrip": false,
83
+ "normalized": false,
84
+ "rstrip": false,
85
+ "single_word": false
86
+ },
87
+ {
88
+ "content": "<laugh_harder>",
89
+ "lstrip": false,
90
+ "normalized": false,
91
+ "rstrip": false,
92
+ "single_word": false
93
+ },
94
+ {
95
+ "content": "<mischievous>",
96
+ "lstrip": false,
97
+ "normalized": false,
98
+ "rstrip": false,
99
+ "single_word": false
100
+ },
101
+ {
102
+ "content": "<sarcastic>",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": false,
106
+ "single_word": false
107
+ },
108
+ {
109
+ "content": "<scream>",
110
+ "lstrip": false,
111
+ "normalized": false,
112
+ "rstrip": false,
113
+ "single_word": false
114
+ },
115
+ {
116
+ "content": "<sigh>",
117
+ "lstrip": false,
118
+ "normalized": false,
119
+ "rstrip": false,
120
+ "single_word": false
121
+ },
122
+ {
123
+ "content": "<sing>",
124
+ "lstrip": false,
125
+ "normalized": false,
126
+ "rstrip": false,
127
+ "single_word": false
128
+ },
129
+ {
130
+ "content": "<snort>",
131
+ "lstrip": false,
132
+ "normalized": false,
133
+ "rstrip": false,
134
+ "single_word": false
135
+ },
136
+ {
137
+ "content": "<whisper>",
138
+ "lstrip": false,
139
+ "normalized": false,
140
+ "rstrip": false,
141
+ "single_word": false
142
+ }
143
+ ],
144
+ "bos_token": {
145
+ "content": "<|begin_of_text|>",
146
+ "lstrip": false,
147
+ "normalized": false,
148
+ "rstrip": false,
149
+ "single_word": false
150
+ },
151
+ "eos_token": {
152
+ "content": "<|eot_id|>",
153
+ "lstrip": false,
154
+ "normalized": false,
155
+ "rstrip": false,
156
+ "single_word": false
157
+ },
158
+ "pad_token": {
159
+ "content": "<custom_token_7>",
160
+ "lstrip": false,
161
+ "normalized": true,
162
+ "rstrip": false,
163
+ "single_word": false
164
+ }
165
+ }
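
A quick, illustrative check that the emotion tags listed above load as single special tokens rather than being split by the BPE (the local path is a placeholder):

```python
# Illustrative sketch; the local path is a placeholder.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("/path/to/maya1")
for tag in ["<laugh>", "<whisper>", "<sigh>", "<angry>"]:
    ids = tok.encode(tag, add_special_tokens=False)
    print(tag, ids)  # each tag should map to exactly one token id
```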
tokenizer/tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6c5e5b1d89b7e3738e5a5a4f93c326d8f3292ea83f9c560b8dbb6d66fb851973
3
+ size 22853258
tokenizer/tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff
 
vllm_streaming_inference.py ADDED
@@ -0,0 +1,561 @@
1
+ """
2
+ Maya-1-Voice VLLM Streaming Inference - Standalone Reference Implementation
3
+
4
+ This is a complete, self-contained example for using the Maya-1-Voice TTS model with VLLM and SNAC.
5
+ Demonstrates streaming audio generation with a sliding-window approach for smooth playback.
6
+
7
+ Requirements:
8
+ pip install vllm transformers torch snac numpy
9
+
10
+ Usage:
11
+ python vllm_streaming_inference.py
12
+
13
+ Author: Maya-1-Voice Team
14
+ License: Apache 2.0
15
+ """
16
+
17
+ import torch
18
+ import numpy as np
19
+ import asyncio
20
+ from typing import List, Optional, AsyncGenerator
21
+ from transformers import AutoTokenizer
22
+ from vllm import AsyncLLMEngine, AsyncEngineArgs, SamplingParams
23
+ from snac import SNAC
24
+
25
+
26
+ # ============================================================================
27
+ # CONSTANTS
28
+ # ============================================================================
29
+
30
+ # Special control tokens
31
+ CODE_START_TOKEN_ID = 128257 # Start of Speech (SOS)
32
+ CODE_END_TOKEN_ID = 128258 # End of Speech (EOS) - stop token for audio
33
+ CODE_TOKEN_OFFSET = 128266 # Start of SNAC codes
34
+
35
+ # SNAC token range (7 tokens per frame, 4096 codes per level)
36
+ SNAC_MIN_ID = 128266
37
+ SNAC_MAX_ID = 156937 # 128266 + (7 * 4096) - 1
38
+
39
+ # SNAC configuration
40
+ SNAC_MODEL_NAME = "hubertsiuzdak/snac_24khz"
41
+ SNAC_SAMPLE_RATE = 24000
42
+ SNAC_TOKENS_PER_FRAME = 7
43
+
44
+ # Generation parameters
45
+ DEFAULT_TEMPERATURE = 0.4
46
+ DEFAULT_TOP_P = 0.9
47
+ DEFAULT_MAX_TOKENS = 2000
48
+ DEFAULT_MIN_TOKENS = 28 # At least 4 SNAC frames
49
+ DEFAULT_REPETITION_PENALTY = 1.1
50
+
51
+
52
+ # ============================================================================
53
+ # SNAC DECODER
54
+ # ============================================================================
55
+
56
+ class SNACDecoder:
57
+ """
58
+ Decodes SNAC tokens (7-token frames) to audio waveforms.
59
+
60
+ The unpacking logic converts flat 7-token frames back to hierarchical
61
+ 3-level SNAC codes (matching the training preprocessing exactly).
62
+ """
63
+
64
+ def __init__(self, device: str = "cuda"):
65
+ """Initialize SNAC decoder with 24kHz model."""
66
+ self.device = device
67
+ print(f"🎵 Loading SNAC 24kHz model to {device}...")
68
+ self.snac_model = SNAC.from_pretrained(SNAC_MODEL_NAME).eval().to(device)
69
+ print(f"✅ SNAC decoder initialized")
70
+
71
+ def unpack_snac_from_7(self, vocab_ids: List[int]) -> List[List[int]]:
72
+ """
73
+ Unpack 7-token SNAC frames to 3 hierarchical levels.
74
+
75
+ This is the EXACT INVERSE of training preprocessing.
76
+
77
+ Frame structure (7 tokens per frame):
78
+ [slot0, slot1, slot2, slot3, slot4, slot5, slot6]
79
+
80
+ Unpacking to [L1, L2, L3]:
81
+ - slot0 → L1[i] (coarse: 1x rate)
82
+ - slot1 → L2[2*i] (medium: 2x rate, even)
83
+ - slot2 → L3[4*i+0] (fine: 4x rate)
84
+ - slot3 → L3[4*i+1]
85
+ - slot4 → L2[2*i+1] (medium: odd)
86
+ - slot5 → L3[4*i+2]
87
+ - slot6 → L3[4*i+3]
88
+
89
+ Args:
90
+ vocab_ids: List of SNAC token IDs (128266-156937), length divisible by 7
91
+
92
+ Returns:
93
+ [L1, L2, L3] where L1=n, L2=2n, L3=4n elements
94
+ """
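# Illustrative example of the mapping above (placeholder values): for two frames
# [a0..a6, b0..b6] the loop below produces
#   L1 = [a0, b0]
#   L2 = [a1, a4, b1, b4]
#   L3 = [a2, a3, a5, a6, b2, b3, b5, b6]
# i.e. len(L1) = n, len(L2) = 2n, len(L3) = 4n for n frames.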
95
+ # Remove EOS token if present
96
+ if vocab_ids and vocab_ids[-1] == CODE_END_TOKEN_ID:
97
+ vocab_ids = vocab_ids[:-1]
98
+
99
+ # Ensure complete frames
100
+ frames = len(vocab_ids) // SNAC_TOKENS_PER_FRAME
101
+ vocab_ids = vocab_ids[:frames * SNAC_TOKENS_PER_FRAME]
102
+
103
+ if frames == 0:
104
+ return [[], [], []]
105
+
106
+ l1, l2, l3 = [], [], []
107
+
108
+ for i in range(frames):
109
+ slots = vocab_ids[i*7:(i+1)*7]
110
+
111
+ # Subtract offset and mod 4096 to get original SNAC codes
112
+ l1.append((slots[0] - CODE_TOKEN_OFFSET) % 4096)
113
+ l2.extend([
114
+ (slots[1] - CODE_TOKEN_OFFSET) % 4096, # Even
115
+ (slots[4] - CODE_TOKEN_OFFSET) % 4096, # Odd
116
+ ])
117
+ l3.extend([
118
+ (slots[2] - CODE_TOKEN_OFFSET) % 4096,
119
+ (slots[3] - CODE_TOKEN_OFFSET) % 4096,
120
+ (slots[5] - CODE_TOKEN_OFFSET) % 4096,
121
+ (slots[6] - CODE_TOKEN_OFFSET) % 4096,
122
+ ])
123
+
124
+ return [l1, l2, l3]
125
+
126
+ @torch.inference_mode()
127
+ def decode(
128
+ self,
129
+ snac_tokens: List[int],
130
+ use_sliding_window: bool = False
131
+ ) -> Optional[np.ndarray]:
132
+ """
133
+ Decode SNAC tokens to audio waveform.
134
+
135
+ Args:
136
+ snac_tokens: List of SNAC token IDs (7*n tokens)
137
+ use_sliding_window: If True, return only middle 2048 samples
138
+ (for smooth streaming without pops/clicks)
139
+
140
+ Returns:
141
+ Audio waveform as float32 numpy array, 24kHz mono
142
+ """
143
+ if len(snac_tokens) < SNAC_TOKENS_PER_FRAME:
144
+ return None
145
+
146
+ # Unpack to 3 hierarchical levels
147
+ levels = self.unpack_snac_from_7(snac_tokens)
148
+
149
+ if not levels[0]:
150
+ return None
151
+
152
+ # Convert to tensors
153
+ codes = [
154
+ torch.tensor(level, dtype=torch.long, device=self.device).unsqueeze(0)
155
+ for level in levels
156
+ ]
157
+
158
+ # Decode through SNAC quantizer + decoder
159
+ z_q = self.snac_model.quantizer.from_codes(codes)
160
+ audio = self.snac_model.decoder(z_q)
161
+
162
+ # Extract audio: [batch, 1, samples] → [samples]
163
+ audio = audio[0, 0].cpu().numpy()
164
+
165
+ # Sliding window mode: keep middle 2048 samples only
166
+ # This eliminates popping/cracking in streaming by overlapping windows
167
+ if use_sliding_window and len(audio) >= 4096:
168
+ audio = audio[2048:4096]
169
+
170
+ return audio
171
+
172
+ def decode_to_bytes(
173
+ self,
174
+ snac_tokens: List[int],
175
+ use_sliding_window: bool = False
176
+ ) -> Optional[bytes]:
177
+ """
178
+ Decode SNAC tokens to audio bytes (int16 PCM).
179
+
180
+ Args:
181
+ snac_tokens: List of SNAC token IDs
182
+ use_sliding_window: Use sliding window for smooth streaming
183
+
184
+ Returns:
185
+ Audio as bytes (int16 PCM, 24kHz mono)
186
+ """
187
+ audio = self.decode(snac_tokens, use_sliding_window=use_sliding_window)
188
+
189
+ if audio is None:
190
+ return None
191
+
192
+ # Convert float32 to int16 PCM (clip to [-1, 1] first to avoid int16 overflow)
193
+ audio_int16 = (np.clip(audio, -1.0, 1.0) * 32767).astype(np.int16)
194
+ return audio_int16.tobytes()
195
+
196
+
197
+ # ============================================================================
198
+ # CUSTOM LOGITS PROCESSOR
199
+ # ============================================================================
200
+
201
+ class OnlyAudioAfterSOS:
202
+ """
203
+ Restricts vocabulary to SNAC codes + EOS after SOS token.
204
+
205
+ This prevents the model from generating text tokens during audio phase,
206
+ which would cause "hallucination" where the model repeats description text
207
+ instead of generating proper audio codes.
208
+ """
209
+
210
+ def __init__(self):
211
+ self._seen_sos = False
212
+
213
+ def __call__(
214
+ self,
215
+ prompt_token_ids: List[int],
216
+ generated_token_ids: List[int],
217
+ logits: torch.Tensor,
218
+ ) -> torch.Tensor:
219
+ """
220
+ Apply constraint: after SOS, only allow SNAC codes + EOS.
221
+
222
+ Args:
223
+ prompt_token_ids: Original prompt token IDs
224
+ generated_token_ids: Tokens generated so far
225
+ logits: Logits for next token [vocab_size]
226
+
227
+ Returns:
228
+ Modified logits with masked tokens
229
+ """
230
+ # Check if SOS has been generated
231
+ if not self._seen_sos:
232
+ all_token_ids = prompt_token_ids + generated_token_ids
233
+ if CODE_START_TOKEN_ID in all_token_ids:
234
+ self._seen_sos = True
235
+ else:
236
+ return logits # No constraint yet
237
+
238
+ # Apply constraint: mask all tokens except SNAC codes + EOS
239
+ mask = torch.full_like(logits, float('-inf'))
240
+ mask[SNAC_MIN_ID:SNAC_MAX_ID + 1] = 0 # Allow SNAC codes
241
+ mask[CODE_END_TOKEN_ID] = 0 # Allow EOS
242
+
243
+ return logits + mask
244
+
245
+ def reset(self):
246
+ """Reset state for reuse across generations."""
247
+ self._seen_sos = False
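# Usage note (assumption, not exercised in this script): on vLLM versions whose
# V0 engine accepts per-request logits processors, this class could be attached
# via SamplingParams(..., logits_processors=[OnlyAudioAfterSOS()]); the streaming
# pipeline below omits it for V1-engine compatibility.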
248
+
249
+
250
+ # ============================================================================
251
+ # MAYA-1-VOICE MODEL
252
+ # ============================================================================
253
+
254
+ class Maya1VoiceModel:
255
+ """
256
+ Maya-1-Voice TTS Model with VLLM inference engine.
257
+
258
+ Handles model loading, tokenizer initialization, and VLLM engine setup.
259
+ """
260
+
261
+ def __init__(
262
+ self,
263
+ model_path: str,
264
+ dtype: str = "bfloat16",
265
+ max_model_len: int = 8192,
266
+ gpu_memory_utilization: float = 0.85,
267
+ ):
268
+ """
269
+ Initialize Maya-1-Voice model with VLLM.
270
+
271
+ Args:
272
+ model_path: Path to model checkpoint (local or HuggingFace)
273
+ dtype: Model precision (bfloat16 recommended)
274
+ max_model_len: Maximum sequence length
275
+ gpu_memory_utilization: GPU memory fraction to use (0.0-1.0)
276
+ """
277
+ self.model_path = model_path
278
+
279
+ print(f"🚀 Initializing Maya-1-Voice Model")
280
+ print(f"📁 Model: {model_path}")
281
+ print(f"🔢 Dtype: {dtype}")
282
+
283
+ # Load tokenizer (must be from checkpoint with emotion tags)
284
+ print(f"📝 Loading tokenizer...")
285
+ self.tokenizer = AutoTokenizer.from_pretrained(
286
+ model_path,
287
+ trust_remote_code=True,
288
+ )
289
+ print(f"✅ Tokenizer loaded: {len(self.tokenizer)} tokens")
290
+
291
+ # Initialize VLLM async engine
292
+ print(f"🔧 Initializing VLLM engine...")
293
+ engine_args = AsyncEngineArgs(
294
+ model=model_path,
295
+ tokenizer=model_path,
296
+ dtype=dtype,
297
+ max_model_len=max_model_len,
298
+ gpu_memory_utilization=gpu_memory_utilization,
299
+ trust_remote_code=True,
300
+ )
301
+
302
+ self.engine = AsyncLLMEngine.from_engine_args(engine_args)
303
+ print(f"✅ VLLM engine ready")
304
+
305
+ def build_prompt(self, description: str, text: str) -> str:
306
+ """
307
+ Build prompt in Maya-1-Voice format using chat template.
308
+
309
+ Format: Chat template with <description="..."> text as content
310
+
311
+ The model expects:
312
+ 1. Description of voice/character
313
+ 2. Text to synthesize (optionally with <emotion> tags)
314
+
315
+ Args:
316
+ description: Voice description
317
+ Example: "Realistic male voice in the 30s age with american accent.
318
+ Normal pitch, warm timbre, conversational pacing."
319
+ text: Text to synthesize
320
+ Example: "Hello world! <excited> This is amazing!"
321
+
322
+ Returns:
323
+ Formatted prompt string using chat template
324
+ """
325
+ content = f'<description="{description}"> {text}'
326
+ messages = [{"role": "user", "content": content}]
327
+ return self.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
328
+
329
+
330
+ # ============================================================================
331
+ # STREAMING PIPELINE
332
+ # ============================================================================
333
+
334
+ class Maya1VoiceStreamingPipeline:
335
+ """
336
+ Streaming TTS pipeline using sliding window approach.
337
+
338
+ This generates smooth audio by:
339
+ 1. Streaming tokens from VLLM as they're generated
340
+ 2. Every 7 tokens, decoding the last 28 tokens (4 frames) - sliding window
341
+ 3. Keeping only middle 2048 samples from each decode
342
+ 4. Creating natural overlap between chunks for artifact-free playback
343
+ """
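# Back-of-the-envelope numbers implied by the constants in this file: a window of
# 28 tokens is 4 SNAC frames; decode() keeps the middle 2048 of the >= 4096
# decoded samples, i.e. ~85 ms of 24 kHz audio per yielded chunk, and consecutive
# windows overlap by 3 frames (21 tokens), which is what hides chunk boundaries.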
344
+
345
+ def __init__(self, model: Maya1VoiceModel, snac_decoder: SNACDecoder):
346
+ """Initialize streaming pipeline."""
347
+ self.model = model
348
+ self.snac_decoder = snac_decoder
349
+ print(f"🌊 Maya-1-Voice Streaming Pipeline initialized")
350
+
351
+ async def generate_speech_stream(
352
+ self,
353
+ description: str,
354
+ text: str,
355
+ temperature: float = DEFAULT_TEMPERATURE,
356
+ top_p: float = DEFAULT_TOP_P,
357
+ max_tokens: int = DEFAULT_MAX_TOKENS,
358
+ repetition_penalty: float = DEFAULT_REPETITION_PENALTY,
359
+ ) -> AsyncGenerator[bytes, None]:
360
+ """
361
+ Generate speech audio with streaming.
362
+
363
+ Args:
364
+ description: Voice/character description
365
+ text: Text to synthesize (with optional <emotion> tags)
366
+ temperature: Sampling temperature (lower = more stable)
367
+ top_p: Nucleus sampling
368
+ max_tokens: Max SNAC tokens to generate
369
+ repetition_penalty: Prevent repetition loops
370
+
371
+ Yields:
372
+ Audio chunks as bytes (int16 PCM, 24kHz mono)
373
+ """
374
+ print(f"\n🌊 Starting streaming generation")
375
+ print(f"📝 Description: {description[:80]}...")
376
+ print(f"💬 Text: {text}")
377
+
378
+ # Build prompt
379
+ prompt = self.model.build_prompt(description, text)
380
+
381
+ # Configure sampling (removed custom logits processor for V1 compatibility)
382
+ sampling_params = SamplingParams(
383
+ temperature=temperature,
384
+ top_p=top_p,
385
+ max_tokens=max_tokens,
386
+ min_tokens=DEFAULT_MIN_TOKENS,
387
+ repetition_penalty=repetition_penalty,
388
+ stop_token_ids=[CODE_END_TOKEN_ID], # Stop on audio EOS
389
+ )
390
+
391
+ print(f"🎲 Sampling: temp={temperature}, top_p={top_p}, max_tokens={max_tokens}")
392
+
393
+ # Token buffer for sliding window
394
+ token_buffer = []
395
+ total_tokens = 0
396
+ total_chunks = 0
397
+
398
+ # Generate with VLLM
399
+ import uuid
400
+ import time
401
+ request_id = f"maya1voice-{uuid.uuid4().hex[:8]}-{int(time.time() * 1000000)}"
402
+
403
+ results_generator = self.model.engine.generate(
404
+ prompt=prompt,
405
+ sampling_params=sampling_params,
406
+ request_id=request_id,
407
+ )
408
+
409
+ # Stream tokens with sliding window decoding
410
+ async for request_output in results_generator:
411
+ generated_ids = request_output.outputs[0].token_ids
412
+
413
+ # Process only new tokens
414
+ new_tokens = generated_ids[total_tokens:]
415
+ total_tokens = len(generated_ids)
416
+
417
+ # Filter and buffer SNAC tokens only
418
+ for token_id in new_tokens:
419
+ if SNAC_MIN_ID <= token_id <= SNAC_MAX_ID:
420
+ token_buffer.append(token_id)
421
+
422
+ # Sliding window: process every 7 tokens when buffer > 27
423
+ # Take last 28 tokens (4 frames) for smooth overlap
424
+ if len(token_buffer) % 7 == 0 and len(token_buffer) > 27:
425
+ window_tokens = token_buffer[-28:]
426
+
427
+ # Decode with sliding window (returns middle 2048 samples)
428
+ audio_bytes = self.snac_decoder.decode_to_bytes(
429
+ window_tokens,
430
+ use_sliding_window=True
431
+ )
432
+
433
+ if audio_bytes:
434
+ total_chunks += 1
435
+ if total_chunks == 1:
436
+ print(f"🎵 First chunk decoded ({len(audio_bytes)} bytes)")
437
+ yield audio_bytes
438
+
439
+ print(f"✅ Streaming complete: {total_tokens} tokens → {total_chunks} chunks")
440
+
441
+
442
+ # ============================================================================
443
+ # MAIN EXAMPLE
444
+ # ============================================================================
445
+
446
+ async def main():
447
+ """
448
+ Example usage of Maya-1-Voice streaming inference.
449
+
450
+ This demonstrates:
451
+ 1. Model initialization
452
+ 2. SNAC decoder setup
453
+ 3. Streaming generation
454
+ 4. Audio chunk handling
455
+ """
456
+
457
+ # Configuration
458
+ MODEL_PATH = "/home/ubuntu/veena_temp/maya-1-voice" # Local model path (adjust to your environment)
459
+ DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
460
+
461
+ print("=" * 80)
462
+ print("Maya-1-Voice VLLM Streaming Inference Example")
463
+ print("=" * 80)
464
+
465
+ # Initialize model
466
+ model = Maya1VoiceModel(
467
+ model_path=MODEL_PATH,
468
+ dtype="bfloat16",
469
+ max_model_len=8192,
470
+ gpu_memory_utilization=0.8, # Adjust to the GPU memory available on your machine
471
+ )
472
+
473
+ # Initialize SNAC decoder
474
+ snac_decoder = SNACDecoder(device=DEVICE)
475
+
476
+ # Create pipeline
477
+ pipeline = Maya1VoiceStreamingPipeline(model, snac_decoder)
478
+
479
+ # Example 1: Professional voice
480
+ description = (
481
+ "Realistic male voice in the 30s age with american accent. "
482
+ "Normal pitch, warm timbre, conversational pacing, neutral tone delivery at med intensity."
483
+ )
484
+ text = "Hello! This is a test of the Maya-1-Voice text-to-speech system."
485
+
486
+ print(f"\n{'='*80}")
487
+ print("Example 1: Professional Voice")
488
+ print(f"{'='*80}")
489
+
490
+ audio_chunks = []
491
+ async for chunk in pipeline.generate_speech_stream(
492
+ description=description,
493
+ text=text,
494
+ temperature=0.4,
495
+ max_tokens=500,
496
+ ):
497
+ audio_chunks.append(chunk)
498
+ print(f"📦 Received chunk {len(audio_chunks)}: {len(chunk)} bytes")
499
+
500
+ # Combine chunks
501
+ full_audio = b''.join(audio_chunks)
502
+ print(f"\n✅ Total audio: {len(full_audio)} bytes ({len(full_audio)//2} samples, {len(full_audio)/2/24000:.2f}s)")
503
+
504
+ # Save audio (optional)
505
+ try:
506
+ import wave
507
+ output_file = "output_example1.wav"
508
+ with wave.open(output_file, 'wb') as wav:
509
+ wav.setnchannels(1) # Mono
510
+ wav.setsampwidth(2) # 16-bit
511
+ wav.setframerate(24000) # 24kHz
512
+ wav.writeframes(full_audio)
513
+ print(f"💾 Saved to {output_file}")
514
+ except ImportError:
515
+ print("⚠️ Could not import the 'wave' module; skipping WAV save")
516
+
517
+ # Example 2: Character voice with emotions
518
+ print(f"\n{'='*80}")
519
+ print("Example 2: Character Voice with Emotions")
520
+ print(f"{'='*80}")
521
+
522
+ description = (
523
+ "Creative, dark_villain character. Male voice in their 40s with british accent. "
524
+ "Low pitch, gravelly timbre, slow pacing, angry tone at high intensity."
525
+ )
526
+ text = "The darkness isn't coming... <angry> it's already here!"
527
+
528
+ audio_chunks = []
529
+ async for chunk in pipeline.generate_speech_stream(
530
+ description=description,
531
+ text=text,
532
+ temperature=0.5,
533
+ max_tokens=800,
534
+ ):
535
+ audio_chunks.append(chunk)
536
+ print(f"📦 Received chunk {len(audio_chunks)}: {len(chunk)} bytes")
537
+
538
+ full_audio = b''.join(audio_chunks)
539
+ print(f"\n✅ Total audio: {len(full_audio)} bytes ({len(full_audio)//2} samples, {len(full_audio)/2/24000:.2f}s)")
540
+
541
+ # Save audio
542
+ try:
543
+ import wave
544
+ output_file = "output_example2.wav"
545
+ with wave.open(output_file, 'wb') as wav:
546
+ wav.setnchannels(1)
547
+ wav.setsampwidth(2)
548
+ wav.setframerate(24000)
549
+ wav.writeframes(full_audio)
550
+ print(f"💾 Saved to {output_file}")
551
+ except ImportError:
552
+ pass
553
+
554
+ print(f"\n{'='*80}")
555
+ print("🎉 Examples complete!")
556
+ print(f"{'='*80}")
557
+
558
+
559
+ if __name__ == "__main__":
560
+ # Run async main
561
+ asyncio.run(main())
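
For reference, a hedged sketch of a non-streaming variant built on vLLM's offline `LLM` API and the `SNACDecoder` defined above; the module import name and the model path are assumptions, not part of this commit:

```python
# Hedged sketch; assumes this script is importable as vllm_streaming_inference
# and that MODEL_PATH points at a local checkout of the model.
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
from vllm_streaming_inference import (
    SNACDecoder, SNAC_MIN_ID, SNAC_MAX_ID, CODE_END_TOKEN_ID,
)

MODEL_PATH = "/path/to/maya1"  # placeholder

llm = LLM(model=MODEL_PATH, dtype="bfloat16", max_model_len=8192)
tok = AutoTokenizer.from_pretrained(MODEL_PATH)
content = '<description="Realistic female voice with a calm tone."> Hello there!'
prompt = tok.apply_chat_template(
    [{"role": "user", "content": content}], tokenize=False, add_generation_prompt=True
)
params = SamplingParams(temperature=0.4, top_p=0.9, max_tokens=2000,
                        stop_token_ids=[CODE_END_TOKEN_ID])
out = llm.generate([prompt], params)[0].outputs[0]
snac_ids = [t for t in out.token_ids if SNAC_MIN_ID <= t <= SNAC_MAX_ID]
audio = SNACDecoder(device="cuda").decode(snac_ids)  # float32 numpy array, 24 kHz mono
```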