---
license: mit
language: en
library_name: onnx
tags:
- text-to-speech
- tts
- kokoro
- piper
- kittentts
- vits
- styletts2
- onnx
- sherpa-onnx
- on-device
- threadcast
pipeline_tag: text-to-speech
---
# ThreadCast – Neural Models Mirror

Threads, now a podcast.

threadcast.app · pixellabs.ventures
Self-hosted mirror of the on-device neural TTS models used by ThreadCast across both shipping platforms: the Chrome extension and the Android app. Three engine families on Android (Piper VITS, KittenTTS-nano VITS, Kokoro StyleTTS2), two on the extension, one source of truth.
This repository exists so each platform can ship a stable, version-pinned set of model weights without depending on the availability or rate-limits of upstream Hugging Face repos at runtime.
Note: if you're a ThreadCast user, you don't need anything here; the extension and the Android app each download (or bundle) what they need automatically. This page is for transparency, contributors, and forks.
## Repository layout

```
threadcast-neural-models/
├── extension/            Chrome extension (HF transformers.js packaging)
│   ├── neural-28m/         Piper VITS: 5 voices, raw HF format
│   └── neural-82m/         Kokoro StyleTTS2: 1 model + 11 voice embeddings
│
└── mobile-android/       Android app (production zips fetched at runtime)
    └── v1/                 8 zips: 1 shared espeak + 5 per-voice Piper
                                  + 1 KittenTTS-nano ("Local AI Plus")
                                  + 1 Kokoro ("Local AI Studio")
```
| Subtree | Format | Consumed by | Sub-README |
|---|---|---|---|
| `extension/` | Raw HF (per-file `.onnx`, `.bin`, `tokenizer.json`) | Chrome extension via `@huggingface/transformers` + `@realtimex/piper-tts-web` | `extension/README.md` |
| `mobile-android/` | ZIP archives, sherpa-onnx packaging | Android app at runtime via `AssetInstaller.kt` (first-launch download with cancel/delete) | `mobile-android/README.md` |
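The Android runtime-download flow can be sketched in a few lines. This is an illustrative Python sketch, not the real `AssetInstaller.kt` logic: the digest check is an assumption about what a careful installer would do before unpacking, and the archive is built in memory instead of fetched over the network.

```python
import hashlib
import io
import zipfile

def verify_and_list(zip_bytes: bytes, expected_sha256: str) -> list[str]:
    """Refuse to unpack a downloaded archive unless its digest matches."""
    digest = hashlib.sha256(zip_bytes).hexdigest()
    if digest != expected_sha256:
        raise ValueError("digest mismatch: refusing to install")
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return zf.namelist()

# Demo with a tiny in-memory zip standing in for a per-voice model archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("voice.onnx", b"fake-model-bytes")
archive = buf.getvalue()
print(verify_and_list(archive, hashlib.sha256(archive).hexdigest()))
# ['voice.onnx']
```

A mismatched digest raises instead of unpacking, which is the behavior you want for a first-launch download that can be cancelled or deleted midway.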
The two subtrees deliberately parallel each other: same engine families (Piper VITS, Kokoro StyleTTS2), same `neural-28m` / `neural-82m` parameter-count naming, just packaged for each platform's runtime.
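One way to picture that parallel is as a lookup table keyed by the shared parameter-count names. This dict is a sketch, not something shipped in the repo: the paths mirror the layout above, and `neural-15m` is included as the Android-only family from the engine table below.

```python
# Sketch: one engine family, two packagings, keyed by parameter-count name.
# Paths follow the repository layout; this mapping is illustrative only.
ENGINE_SUBTREES = {
    "neural-28m": {  # Piper VITS
        "extension": "extension/neural-28m/",
        "mobile-android": "mobile-android/v1/",
    },
    "neural-15m": {  # KittenTTS-nano VITS, Android-only
        "mobile-android": "mobile-android/v1/",
    },
    "neural-82m": {  # Kokoro StyleTTS2
        "extension": "extension/neural-82m/",
        "mobile-android": "mobile-android/v1/",
    },
}

def platforms_for(engine: str) -> list[str]:
    """Which platforms ship a given engine family."""
    return sorted(ENGINE_SUBTREES.get(engine, {}))

print(platforms_for("neural-82m"))  # ['extension', 'mobile-android']
print(platforms_for("neural-15m"))  # ['mobile-android']
```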
## Engines at a glance

| Engine | Architecture | Params | Per-voice cost | Quality tier |
|---|---|---|---|---|
| `neural-28m` | Piper VITS | ~28 M | One ONNX file per voice (~63 MB) | Standard: fast, CPU-friendly, single-thread WASM real-time on a laptop. Surfaced on Android as Local AI Lite, on the extension as AI Neural CPU. |
| `neural-15m` | KittenTTS-nano VITS | ~15 M | Single fp16 model + 8 speaker embeddings (one ~26 MB file serves all) | Sweet spot: 8 voices with style-vector switching at a fraction of the storage cost. Android-only, surfaced as Local AI Plus. |
| `neural-82m` | Kokoro StyleTTS2 | ~82 M | Single model + 256-dim style vectors per voice (one ~325 MB file serves all) | Premium: more natural prosody, GPU-accelerated on Chrome (WebGPU); CPU-only on Android (perf-gated). Surfaced on Android as Local AI Studio, on the extension as AI Neural GPU. |
## License
This repository mirrors upstream models for distribution stability. Each upstream project retains its own license:
- Kokoro-82M: Apache-2.0 (upstream model card)
- KittenTTS-nano (v0.1): Apache-2.0 (upstream model card)
- Piper voices: MIT, with individual voice attributions in each `.onnx.json`
- transformers.js, onnxruntime-web, onnxruntime-android: Apache-2.0
- sherpa-onnx: Apache-2.0
The mirror layout, READMEs, and any custom additions in this repository are licensed under MIT by Pixel Labs.
## Links

- ThreadCast: threadcast.app
- Pixel Labs: pixellabs.ventures
- Issues / questions: open an issue on the ThreadCast extension repo