# LiveTrans
**English** | [中文](README_zh.md)
Real-time audio translation tool for Windows. Captures system audio via WASAPI loopback, runs speech recognition (ASR), translates through LLM APIs, and displays results in a transparent overlay window.
Perfect for watching foreign-language videos, livestreams, and meetings — no player modifications needed, works with any system audio.



## Features
- **Real-time translation**: System audio → ASR → LLM translation → subtitle overlay, fully automatic
- **Multiple ASR engines**: faster-whisper, FunASR SenseVoice (optimized for Japanese), FunASR Nano
- **Flexible translation backend**: Compatible with any OpenAI-format API (DeepSeek, Grok, Qwen, GPT, etc.)
- **Low-latency VAD**: 32ms audio chunks + Silero VAD with adaptive silence detection
- **Transparent overlay**: Always-on-top, click-through, draggable — doesn't interfere with your workflow
- **CUDA acceleration**: GPU-accelerated ASR inference
- **Automatic model management**: First-launch setup wizard, supports ModelScope / HuggingFace dual sources
- **Translation benchmark**: Built-in benchmark tool for comparing model performance
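
At 16 kHz mono, a 32 ms chunk works out to 512 samples, which matches Silero VAD's window size at that rate. A minimal chunking sketch (the 16 kHz sample rate is an assumption inferred from the 32 ms figure above, not taken from the LiveTrans source):

```python
SAMPLE_RATE = 16_000   # assumed ASR sample rate
CHUNK_MS = 32          # per the feature list above
CHUNK_SAMPLES = SAMPLE_RATE * CHUNK_MS // 1000  # 512 samples

def frames(samples, size=CHUNK_SAMPLES):
    """Split a flat sample buffer into fixed-size VAD frames,
    dropping any trailing partial frame."""
    return [samples[i:i + size] for i in range(0, len(samples) - size + 1, size)]
```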
## Screenshots
**English → Chinese** (Twitch livestream)

**Japanese → Chinese** (Japanese livestream)

## Requirements
- **OS**: Windows 10/11
- **Python**: 3.10+
- **GPU** (recommended): NVIDIA GPU with CUDA 12.6 (for ASR acceleration)
- **Network**: Access to a translation API (DeepSeek, OpenAI, etc.)
## Installation
### 1. Clone the repository
```bash
git clone https://github.com/TheDeathDragon/LiveTranslate.git
cd LiveTranslate
```
### 2. Create a virtual environment
```bash
python -m venv .venv
.venv\Scripts\activate
```
### 3. Install PyTorch (with CUDA)
Choose the install command based on your CUDA version. See [PyTorch official site](https://pytorch.org/get-started/locally/):
```bash
# CUDA 12.6 (recommended)
pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu126
# CPU only (no NVIDIA GPU)
pip install torch torchaudio --index-url https://download.pytorch.org/whl/cpu
```
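To confirm the install picked up your GPU, a quick check like this (a generic PyTorch snippet, not part of LiveTrans) can be run before continuing:

```python
def cuda_summary():
    """Report whether the installed PyTorch build can see a CUDA device."""
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed"
    if torch.cuda.is_available():
        return f"CUDA {torch.version.cuda} on {torch.cuda.get_device_name(0)}"
    return "CPU only (no CUDA device visible to PyTorch)"

print(cuda_summary())
```

If this reports CPU only despite an NVIDIA GPU being present, the CPU wheel was likely installed; re-run step 3 with the `cu126` index URL.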
### 4. Install remaining dependencies
```bash
pip install -r requirements.txt
pip install funasr --no-deps
```
> **Note**: FunASR is installed with `--no-deps` because its dependency `editdistance` requires a C++ compiler. The pure-Python alternative `editdistance-s` is included in `requirements.txt` as a drop-in replacement.
### 5. Launch
```bash
.venv\Scripts\python.exe main.py
```
Or double-click `start.bat`.
## First Launch
1. A **setup wizard** will appear on first launch — choose your model download source (ModelScope for China, HuggingFace for international) and model cache path
2. Silero VAD and SenseVoice ASR models will be downloaded automatically (~1GB)
3. The main UI appears once downloads complete
## Configuring the Translation API
Click **Settings** on the overlay → **Translation** tab:
| Parameter | Description |
|-----------|-------------|
| API Base | API endpoint, e.g. `https://api.deepseek.com/v1` |
| API Key | Your API key |
| Model | Model name, e.g. `deepseek-chat` |
| Proxy | `none` (direct) / `system` (system proxy) / custom proxy URL |
Works with any OpenAI-compatible API, including:
- [DeepSeek](https://platform.deepseek.com/)
- [xAI Grok](https://console.x.ai/)
- [Alibaba Qwen](https://dashscope.aliyuncs.com/)
- [OpenAI GPT](https://platform.openai.com/)
- Self-hosted [Ollama](https://ollama.ai/), [vLLM](https://github.com/vllm-project/vllm), etc.
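
"OpenAI-compatible" means the backend accepts the standard `/chat/completions` request shape. As an illustration, a translation request to such an API might be assembled like this (the function name, defaults, and system prompt are hypothetical, not LiveTrans's internals; the payload fields follow the OpenAI Chat Completions format):

```python
import json

def build_translation_request(text, target_lang="Chinese",
                              api_base="https://api.deepseek.com/v1",
                              model="deepseek-chat"):
    """Build the URL and JSON body for an OpenAI-style chat completion."""
    url = f"{api_base}/chat/completions"
    payload = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": f"Translate the user's text into {target_lang}. "
                        "Output only the translation."},
            {"role": "user", "content": text},
        ],
        "temperature": 0.3,
    }
    return url, json.dumps(payload)
```

The body would then be POSTed to the URL with an `Authorization: Bearer <API Key>` header, which is why any backend speaking this format plugs in unchanged.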
## Usage
1. Play a video or livestream with foreign-language audio
2. Launch LiveTrans — the overlay appears automatically
3. Recognized text and translations are displayed in real time
### Overlay Controls
- **Pause/Resume**: Pause or resume translation
- **Clear**: Clear current subtitles
- **Click-through**: Mouse clicks pass through the subtitle window
- **Always on top**: Keep overlay above all windows
- **Auto-scroll**: Automatically scroll to the latest subtitle
- **Model selector**: Switch between configured translation models
- **Target language**: Change the translation target language
### Settings Panel
Open via the **Settings** button on the overlay or the system tray menu:
- **VAD/ASR**: ASR engine selection, VAD mode, sensitivity parameters
- **Translation**: API configuration, system prompt, multi-model management
- **Benchmark**: Translation speed and quality benchmarks
- **Cache**: Model cache path management
## Architecture
```
Audio (WASAPI 32ms) → VAD (Silero) → ASR (Whisper/SenseVoice/Nano) → LLM Translation → Overlay
```
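The stages above can be sketched as a chain of queues with one worker per stage (a minimal illustration of the dataflow, not the actual LiveTrans threading model; the `fake_asr` and `fake_translate` stand-ins are placeholders):

```python
import queue
import threading

def stage(fn, inbox, outbox):
    # Pull items, transform, push downstream; None is the shutdown signal.
    while (item := inbox.get()) is not None:
        outbox.put(fn(item))
    outbox.put(None)

audio_q, text_q, sub_q = queue.Queue(), queue.Queue(), queue.Queue()
fake_asr = lambda chunk: f"asr({chunk})"        # stand-in for the ASR engine
fake_translate = lambda text: f"zh({text})"     # stand-in for the LLM client

threading.Thread(target=stage, args=(fake_asr, audio_q, text_q)).start()
threading.Thread(target=stage, args=(fake_translate, text_q, sub_q)).start()

for chunk in ["c1", "c2"]:   # audio chunks from WASAPI capture
    audio_q.put(chunk)
audio_q.put(None)

subs = []                    # what the overlay would render
while (sub := sub_q.get()) is not None:
    subs.append(sub)
```

Because each stage runs in its own worker, slow LLM responses back-pressure the queue instead of blocking audio capture.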
```
main.py Entry point & pipeline orchestration
├── audio_capture.py WASAPI loopback audio capture
├── vad_processor.py Silero VAD speech detection
├── asr_engine.py faster-whisper ASR backend
├── asr_sensevoice.py FunASR SenseVoice backend
├── asr_funasr_nano.py FunASR Nano backend
├── translator.py OpenAI-compatible translation client
├── model_manager.py Model detection, download & cache management
├── subtitle_overlay.py PyQt6 transparent overlay window
├── control_panel.py Settings panel UI
├── dialogs.py Setup wizard & model download dialogs
├── log_window.py Real-time log viewer
├── benchmark.py Translation benchmark
└── config.yaml Default configuration
```
## Known Limitations
- Windows only (depends on WASAPI loopback)
- First-time ASR model loading takes a few seconds (GPU) to tens of seconds (CPU)
- Translation quality depends on the LLM API used
- Recognition degrades in noisy environments or with overlapping speakers
## License
[MIT License](LICENSE)