Text-to-Speech
Transformers
ONNX
GGUF
Chinese
English
voice-dialogue
speech-recognition
large-language-model
asr
tts
llm
chinese
english
real-time
conversational
Instructions to use MoYoYoTech/VoiceDialogue with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use MoYoYoTech/VoiceDialogue with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-to-speech", model="MoYoYoTech/VoiceDialogue")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("MoYoYoTech/VoiceDialogue", dtype="auto")
```

- llama-cpp-python
How to use MoYoYoTech/VoiceDialogue with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MoYoYoTech/VoiceDialogue",
    filename="assets/models/llm/qwen/Qwen3-8B-Q6_K.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "The answer to the universe is 42"},
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use MoYoYoTech/VoiceDialogue with llama.cpp:
Install from brew
```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf MoYoYoTech/VoiceDialogue:Q6_K

# Run inference directly in the terminal:
llama-cli -hf MoYoYoTech/VoiceDialogue:Q6_K
```
Install from WinGet (Windows)
```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf MoYoYoTech/VoiceDialogue:Q6_K

# Run inference directly in the terminal:
llama-cli -hf MoYoYoTech/VoiceDialogue:Q6_K
```
Use pre-built binary
```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf MoYoYoTech/VoiceDialogue:Q6_K

# Run inference directly in the terminal:
./llama-cli -hf MoYoYoTech/VoiceDialogue:Q6_K
```
Build from source code
```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf MoYoYoTech/VoiceDialogue:Q6_K

# Run inference directly in the terminal:
./build/bin/llama-cli -hf MoYoYoTech/VoiceDialogue:Q6_K
```
Use Docker
```shell
docker model run hf.co/MoYoYoTech/VoiceDialogue:Q6_K
```
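However it is installed, `llama-server` exposes an OpenAI-compatible HTTP API. Below is a minimal client sketch using only the Python standard library; the port 8080 and the `model` field value are assumptions based on `llama-server` defaults and the quant tag used above:

```python
import json
from urllib.request import Request, urlopen

def build_chat_request(base_url: str, prompt: str) -> Request:
    """Build a POST request for the OpenAI-style /v1/chat/completions endpoint."""
    body = json.dumps({
        "model": "MoYoYoTech/VoiceDialogue:Q6_K",
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("http://localhost:8080", "Who are you?")
# The call below needs a running llama-server:
# resp = json.load(urlopen(req))
# print(resp["choices"][0]["message"]["content"])
```

Because the server speaks the OpenAI wire format, any OpenAI-compatible client library can be pointed at the same base URL instead.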
- LM Studio
- Jan
- Ollama
How to use MoYoYoTech/VoiceDialogue with Ollama:
```shell
ollama run hf.co/MoYoYoTech/VoiceDialogue:Q6_K
```
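Ollama also serves pulled models over a local REST API. A minimal sketch of a `/api/chat` request built with the standard library; the port 11434 is Ollama's default, and the payload shape follows Ollama's documented chat schema:

```python
import json
from urllib.request import Request, urlopen

# Build a chat request for Ollama's local REST API
# (assumption: the Ollama daemon is running on the default port 11434).
body = json.dumps({
    "model": "hf.co/MoYoYoTech/VoiceDialogue:Q6_K",
    "messages": [{"role": "user", "content": "Who are you?"}],
    "stream": False,
}).encode("utf-8")
req = Request(
    "http://localhost:11434/api/chat",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# resp = json.load(urlopen(req))   # requires a running Ollama daemon
# print(resp["message"]["content"])
```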
- Unsloth Studio
How to use MoYoYoTech/VoiceDialogue with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for MoYoYoTech/VoiceDialogue to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for MoYoYoTech/VoiceDialogue to start chatting
```
Using HuggingFace Spaces for Unsloth
No setup is required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for MoYoYoTech/VoiceDialogue to start chatting.
- Pi
How to use MoYoYoTech/VoiceDialogue with Pi:
Start the llama.cpp server
```shell
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf MoYoYoTech/VoiceDialogue:Q6_K
```
Configure the model in Pi
```shell
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add to ~/.pi/agent/models.json:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "MoYoYoTech/VoiceDialogue:Q6_K" }
      ]
    }
  }
}
```

Run Pi

```shell
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use MoYoYoTech/VoiceDialogue with Hermes Agent:
Start the llama.cpp server
```shell
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf MoYoYoTech/VoiceDialogue:Q6_K
```
Configure Hermes
```shell
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default MoYoYoTech/VoiceDialogue:Q6_K
```
Run Hermes
```shell
hermes
```
- Docker Model Runner
How to use MoYoYoTech/VoiceDialogue with Docker Model Runner:
```shell
docker model run hf.co/MoYoYoTech/VoiceDialogue:Q6_K
```
- Lemonade
How to use MoYoYoTech/VoiceDialogue with Lemonade:
Pull the model
```shell
# Download Lemonade from https://lemonade-server.ai/
lemonade pull MoYoYoTech/VoiceDialogue:Q6_K
```
Run and chat with the model
```shell
lemonade run user.VoiceDialogue-Q6_K
```
List all available models
```shell
lemonade list
```
liumaolin committed · Commit 7d8046a · Parent(s): 5f9eaee

Refactor threading in `launcher.py` to standardize worker initialization, enforce daemon mode, and improve naming consistency.

src/voice_dialogue/core/launcher.py (CHANGED)
Before (`@@ -61,41 +61,24 @@ def launch_system`): audio capture and speech-state monitoring were started first, under the names `audio_frame_probe` and `user_voice_checker`, and no worker was marked as a daemon:

```python
threads = []

# 音频采集 (audio capture)
enable_echo_cancellation = not disable_echo_cancellation
audio_frame_probe = AudioCapture(
    audio_frames_queue=audio_frames_queue,
    enable_echo_cancellation=enable_echo_cancellation
)
audio_frame_probe.start()
threads.append(audio_frame_probe)

# 语音状态监测 (speech state monitoring)
enable_vad = disable_echo_cancellation
user_voice_checker = SpeechStateMonitor(
    audio_frame_queue=audio_frames_queue,
    user_voice_queue=user_voice_queue,
    enable_vad=enable_vad
)
user_voice_checker.start()
threads.append(user_voice_checker)

# 语音识别 (speech recognition), 文本生成 (text generation),
# 语音合成 (speech synthesis) and 音频播放 (audio playback)
# followed, each started and appended to `threads` the same way.
```

After (new lines 61-128): every worker sets `daemon = True` before `start()`, and the pipeline is brought up downstream-first: ASR, text generation, speech synthesis, playback, then speech-state monitoring and audio capture:

```python
threads = []

# 语音识别 (speech recognition)
asr_worker = ASRWorker(
    user_voice_queue=user_voice_queue,
    transcribed_text_queue=transcribed_text_queue,
    language=user_language
)
asr_worker.daemon = True
asr_worker.start()
threads.append(asr_worker)

# 文本生成 (text generation)
text_generator = LLMResponseGenerator(
    user_question_queue=transcribed_text_queue,
    generated_answer_queue=text_input_queue
)
text_generator.daemon = True
text_generator.start()
threads.append(text_generator)

# 动态获取TTS配置 (fetch the TTS config dynamically)
tts_speaker_config = get_tts_config_by_speaker_name(speaker)
...
    raise ValueError(f"不支持的TTS说话人: {speaker}。可用说话人: {', '.join(available_speakers)}")

# 语音合成 (speech synthesis)
audio_generator = TTSAudioGenerator(
    text_input_queue=text_input_queue,
    audio_output_queue=audio_output_queue,
    tts_config=tts_speaker_config
)
audio_generator.daemon = True
audio_generator.start()
threads.append(audio_generator)

# 音频播放 (audio playback)
audio_player = AudioStreamPlayer(audio_playing_queue=audio_output_queue)
audio_player.daemon = True
audio_player.start()
threads.append(audio_player)

# 语音状态监测 (speech state monitoring)
enable_vad = disable_echo_cancellation
speech_monitor = SpeechStateMonitor(
    audio_frame_queue=audio_frames_queue,
    user_voice_queue=user_voice_queue,
    enable_vad=enable_vad
)
speech_monitor.daemon = True
speech_monitor.start()
threads.append(speech_monitor)

# 音频采集 (audio capture)
enable_echo_cancellation = not disable_echo_cancellation
audio_capture = AudioCapture(
    audio_frames_queue=audio_frames_queue,
    enable_echo_cancellation=enable_echo_cancellation
)
audio_capture.daemon = True
audio_capture.start()
threads.append(audio_capture)

# 等待所有线程准备就绪 (wait for all threads to be ready)
while not all([thread.is_ready for thread in threads]):
```
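The "enforce daemon mode" part of this commit has a concrete effect on shutdown: a `threading.Thread` whose `daemon` flag is set to `True` before `start()` no longer blocks interpreter exit, so the process can terminate even if a worker loop never returns. A minimal sketch of that behavior; the worker and names below are illustrative, not the actual launcher.py code:

```python
import threading
import time

def worker(stop_event: threading.Event) -> None:
    # Loop indefinitely, like a capture/ASR/TTS worker; because the
    # thread is a daemon, it is killed when the main thread exits.
    while not stop_event.is_set():
        time.sleep(0.01)

stop = threading.Event()
t = threading.Thread(target=worker, args=(stop,), name="asr-worker")
t.daemon = True            # must be set before start(), as in the commit
t.start()

assert t.daemon and t.is_alive()
stop.set()                 # cooperative shutdown is still possible
t.join(timeout=1.0)
assert not t.is_alive()
```

Setting the flag after `start()` raises `RuntimeError`, which is why each worker in the new code assigns `daemon = True` immediately after construction.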