MOSS TTSD v1.0 (MLX 8-bit)

This repository contains an MLX-native int8 conversion of MOSS TTSD v1.0 for multi-speaker dialogue generation on Apple Silicon.

Note: This repo is a community mirror of the canonical MLX conversion maintained by AppAutomaton at appautomaton/openmoss-ttsd-mlx.

Variants

Path        Precision
mlx-int8/   int8 quantized weights

Model Details

How to Get Started

Command-line generation with mlx-speech:

python scripts/generate_moss_ttsd.py \
  --text "[S1] Watson, I think we should go. [S2] Give me one moment." \
  --output outputs/dialogue.wav

Minimal Python usage:

from mlx_speech.generation import MossTTSDelayModel

# Load the int8 weights from the mlx-int8/ variant directory
model = MossTTSDelayModel.from_path("mlx-int8")

Speaker turns are tagged with [S1] and [S2] in the input text.
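If you assemble dialogue text programmatically, the [S1]/[S2] convention is easy to work with. The helper below is a hypothetical utility (not part of mlx-speech) that splits a tagged string into (speaker, utterance) pairs, which can be useful for validating input before generation:

```python
import re

def split_speaker_turns(text):
    """Split a dialogue string tagged with [S1]/[S2] into (speaker, utterance) pairs."""
    turns = []
    # Match each [Sn] tag and capture the text up to the next tag.
    for match in re.finditer(r"\[(S\d)\]\s*([^\[]*)", text):
        speaker, utterance = match.group(1), match.group(2).strip()
        if utterance:
            turns.append((speaker, utterance))
    return turns
```

For example, the CLI input above splits into two turns, one per speaker tag.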

Notes

  • This repo contains the quantized MLX runtime artifact only.
  • The conversion keeps the original TTSD architecture and remaps weights explicitly for MLX inference.
  • The current runtime path is designed around speaker-tagged dialogue input and shared codec decoding.
  • This mirror is a duplicated repo, not an automatically synchronized namespace mirror.
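For intuition about what int8 quantization stores, the sketch below shows a simplified symmetric per-tensor scheme in pure Python: each float weight is encoded as an int8 code plus a shared scale. This is an illustration only; the actual MLX quantization is group-wise with per-group scales and is implemented in optimized kernels:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: w ~ q * scale, with q in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 codes and the stored scale."""
    return [qi * scale for qi in q]
```

The reconstruction error is bounded by half a quantization step, which is why 8-bit weights usually preserve generation quality while cutting memory roughly in half versus fp16.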

Links

License

Apache 2.0, following the upstream license published with OpenMOSS-Team/MOSS-TTSD-v1.0.
