Tags: Text-to-Speech · Transformers · ONNX · GGUF · Chinese · English · voice-dialogue · speech-recognition · large-language-model · asr · tts · llm · chinese · english · real-time · conversational
Instructions to use MoYoYoTech/VoiceDialogue with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use MoYoYoTech/VoiceDialogue with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-to-speech", model="MoYoYoTech/VoiceDialogue")
# Text-to-speech pipelines take plain text, not chat-style message lists:
speech = pipe("Who are you?")

# Load model directly
from transformers import AutoModel
model = AutoModel.from_pretrained("MoYoYoTech/VoiceDialogue", dtype="auto")
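If the checkpoint exposes a standard TTS head, the pipeline returns raw samples plus a sampling rate. A minimal sketch for writing the result to disk, assuming the usual {"audio", "sampling_rate"} output keys (the filename is illustrative):

# Persist pipeline output to a WAV file.
import scipy.io.wavfile as wavfile
from transformers import pipeline

pipe = pipeline("text-to-speech", model="MoYoYoTech/VoiceDialogue")
out = pipe("Who are you?")
# squeeze() drops a leading batch dimension if the model returns one
wavfile.write("reply.wav", rate=out["sampling_rate"], data=out["audio"].squeeze())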
- llama-cpp-python
How to use MoYoYoTech/VoiceDialogue with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MoYoYoTech/VoiceDialogue",
    filename="assets/models/llm/qwen/Qwen3-8B-Q6_K.gguf",
)
llm.create_chat_completion(
    # messages must be a list of {"role", "content"} dicts, not a bare string:
    messages=[
        {"role": "user", "content": "The answer to the universe is 42"},
    ]
)
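create_chat_completion can also stream the reply; a short sketch that prints tokens as they arrive (stream=True is standard llama-cpp-python API, the prompt is illustrative):

# Stream the reply token by token; each chunk is an OpenAI-style
# dict whose "delta" carries the incremental content.
for chunk in llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}],
    stream=True,
):
    delta = chunk["choices"][0]["delta"]
    print(delta.get("content", ""), end="", flush=True)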
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use MoYoYoTech/VoiceDialogue with llama.cpp:
Install with Homebrew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf MoYoYoTech/VoiceDialogue:Q6_K

# Run inference directly in the terminal:
llama-cli -hf MoYoYoTech/VoiceDialogue:Q6_K
Install with WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf MoYoYoTech/VoiceDialogue:Q6_K

# Run inference directly in the terminal:
llama-cli -hf MoYoYoTech/VoiceDialogue:Q6_K
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf MoYoYoTech/VoiceDialogue:Q6_K

# Run inference directly in the terminal:
./llama-cli -hf MoYoYoTech/VoiceDialogue:Q6_K
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf MoYoYoTech/VoiceDialogue:Q6_K

# Run inference directly in the terminal:
./build/bin/llama-cli -hf MoYoYoTech/VoiceDialogue:Q6_K
Use Docker
docker model run hf.co/MoYoYoTech/VoiceDialogue:Q6_K
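However llama-server was installed above, it exposes an OpenAI-compatible API (port 8080 by default), so any OpenAI client can drive it. A minimal sketch with the openai Python package; the api_key is a dummy and single-model servers ignore the model field:

# Talk to the local llama-server through its OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
resp = client.chat.completions.create(
    model="MoYoYoTech/VoiceDialogue:Q6_K",  # placeholder; the server has one model loaded
    messages=[{"role": "user", "content": "Who are you?"}],
)
print(resp.choices[0].message.content)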
- LM Studio
- Jan
- Ollama
How to use MoYoYoTech/VoiceDialogue with Ollama:
ollama run hf.co/MoYoYoTech/VoiceDialogue:Q6_K
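Once ollama run has pulled the model, it can also be reached programmatically; a minimal sketch with the official ollama Python package (pip install ollama; assumes the local Ollama server is running):

# Chat with the pulled model via Ollama's local API.
import ollama

resp = ollama.chat(
    model="hf.co/MoYoYoTech/VoiceDialogue:Q6_K",
    messages=[{"role": "user", "content": "Who are you?"}],
)
print(resp["message"]["content"])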
- Unsloth Studio
How to use MoYoYoTech/VoiceDialogue with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for MoYoYoTech/VoiceDialogue to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for MoYoYoTech/VoiceDialogue to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for MoYoYoTech/VoiceDialogue to start chatting
- Pi
How to use MoYoYoTech/VoiceDialogue with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf MoYoYoTech/VoiceDialogue:Q6_K
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "MoYoYoTech/VoiceDialogue:Q6_K" }
      ]
    }
  }
}

Run Pi
# Start Pi in your project directory:
pi
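Before launching Pi, it can help to confirm that the baseUrl configured above actually answers. A small sketch against llama-server's OpenAI-compatible model listing (GET /v1/models is standard OpenAI surface; 8080 is llama-server's default port):

# Sanity-check the endpoint Pi is configured to use.
import requests

r = requests.get("http://localhost:8080/v1/models", timeout=5)
r.raise_for_status()
print([m["id"] for m in r.json()["data"]])  # expect one entry for the loaded GGUF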
- Hermes Agent
How to use MoYoYoTech/VoiceDialogue with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf MoYoYoTech/VoiceDialogue:Q6_K
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default MoYoYoTech/VoiceDialogue:Q6_K
Run Hermes
hermes
- Docker Model Runner
How to use MoYoYoTech/VoiceDialogue with Docker Model Runner:
docker model run hf.co/MoYoYoTech/VoiceDialogue:Q6_K
- Lemonade
How to use MoYoYoTech/VoiceDialogue with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull MoYoYoTech/VoiceDialogue:Q6_K
Run and chat with the model
lemonade run user.VoiceDialogue-Q6_K
List all available models
lemonade list
liumaolin committed
Commit · 2baeba2
Parent(s): b319d19
Simplify LLM model path management.
src/voice_dialogue/config/llm_config.py CHANGED

@@ -3,8 +3,17 @@
 from typing import Dict, Any
 
 from voice_dialogue.utils.apple_silicon import get_optimal_llama_cpp_config, get_apple_silicon_info
+from .paths import LLM_MODELS_PATH
+
+__all__ = (
+    'get_llm_model_params',
+    'get_apple_silicon_summary',
+    'CHINESE_SYSTEM_PROMPT',
+    'ENGLISH_SYSTEM_PROMPT',
+    'BUILTIN_LLM_MODEL_PATH',
+)
 
-
+BUILTIN_LLM_MODEL_PATH = LLM_MODELS_PATH / 'qwen' / 'Qwen3-8B-Q6_K.gguf'
 
 CHINESE_SYSTEM_PROMPT = (
     "你是AI助手。请以自然流畅的中文口语化表达直接回答问题,避免冗余的思考过程。"
src/voice_dialogue/services/text/generator.py CHANGED

@@ -6,8 +6,7 @@ from queue import Queue, Empty
 from langchain.memory import ConversationBufferWindowMemory
 from langchain_core.chat_history import InMemoryChatMessageHistory
 
-from voice_dialogue.config import paths
-from voice_dialogue.config.llm_config import get_llm_model_params, get_apple_silicon_summary
+from voice_dialogue.config.llm_config import get_llm_model_params, get_apple_silicon_summary, BUILTIN_LLM_MODEL_PATH
 from voice_dialogue.config.user_config import get_prompt
 from voice_dialogue.core.base import BaseThread
 from voice_dialogue.core.constants import chat_history_cache

@@ -203,7 +202,6 @@ class LLMResponseGenerator(BaseThread, TaskStatusMixin):
         self._send_sentence_to_queue(voice_task, sentence, answer_index)
 
     def run(self):
-        model_path = paths.LLM_MODELS_PATH / 'qwen' / 'Qwen3-8B-Q6_K.gguf'
 
         model_params = get_llm_model_params()
 
@@ -216,7 +214,7 @@ class LLMResponseGenerator(BaseThread, TaskStatusMixin):
         logger.info(f"配置说明: {chip_summary['config_note']}")
 
         self.model_instance = create_langchain_chat_llamacpp_instance(
-            local_model_path=model_path, model_params=model_params
+            local_model_path=BUILTIN_LLM_MODEL_PATH, model_params=model_params
         )
         # 使用默认中文 prompt 进行 warmup
         prompt = get_prompt("zh")
tests/test_llm_dialogue.py CHANGED

@@ -12,8 +12,7 @@ lib_path = HERE / "src"
 if lib_path.exists() and lib_path.as_posix() not in sys.path:
     sys.path.insert(0, lib_path.as_posix())
 
-from voice_dialogue.config import paths
-from voice_dialogue.config.llm_config import get_llm_model_params
+from voice_dialogue.config.llm_config import get_llm_model_params, BUILTIN_LLM_MODEL_PATH
 from voice_dialogue.services.text.processor import create_langchain_pipeline
 
 CHINESE_SYSTEM_PROMPT = (

@@ -44,8 +43,7 @@ class TestLLMDialogue(unittest.TestCase):
         model_params = get_llm_model_params()
         self.history_store = {}
 
-
-        self.langchain_instance = ChatLlamaCpp(model_path=model_path.as_posix(), **model_params)
+        self.langchain_instance = ChatLlamaCpp(model_path=BUILTIN_LLM_MODEL_PATH.as_posix(), **model_params)
 
         pipeline = create_langchain_pipeline(
             self.langchain_instance,
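In miniature, the refactor this commit makes: path construction moves behind one exported constant, so call sites stop rebuilding the qwen/Qwen3-8B-Q6_K.gguf path themselves. A sketch under the assumption that LLM_MODELS_PATH is a pathlib.Path; the stand-in root below is hypothetical, borrowed from the repo layout shown earlier:

# llm_config-style module: owns the builtin model location.
from pathlib import Path

LLM_MODELS_PATH = Path("assets/models/llm")  # hypothetical stand-in for .paths
BUILTIN_LLM_MODEL_PATH = LLM_MODELS_PATH / 'qwen' / 'Qwen3-8B-Q6_K.gguf'

# Consumer side (what generator.py and the test now do): import the
# constant instead of composing the path locally.
print(BUILTIN_LLM_MODEL_PATH.as_posix())  # assets/models/llm/qwen/Qwen3-8B-Q6_K.gguf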