πŸŽ™οΈ ClearSpeech

AI-Powered Speech Enhancement & Transcription System

ClearSpeech uses a custom U-Net deep learning model to remove background noise from audio, then transcribes the enhanced audio using OpenAI's Whisper. Perfect for cleaning up voice recordings, meeting audio, podcasts, or any noisy speech.

🌐 Live Website (will be updated): https://clearspeech.yourdomain.com

🌟 Features

  • 🧹 AI-Powered Noise Reduction: Custom U-Net model trained to remove background noise
  • πŸ“ Automatic Transcription: Whisper integration for accurate speech-to-text
  • ⚑ Fast Processing: Optimized pipeline with GPU support
  • 🌐 REST API: Easy-to-use FastAPI backend
  • 🎯 High Quality: validation loss of 0.031
  • πŸ”§ Flexible: Enhancement-only, transcription-only, or both

πŸ“‹ Table of Contents

  • Installation
  • Quick Start
  • API Documentation
  • Project Structure
  • Contributing

πŸš€ Installation

Prerequisites

  • Python 3.8+

  • pip

  • Optional CUDA GPU

Step 1: Clone Repository

git clone https://github.com/yourusername/ClearSpeech.git
cd ClearSpeech

Step 2: Create Virtual Environment

# Create environment 
python3.10 -m venv venv 
# Activate (macOS/Linux) 
source venv/bin/activate 
# Activate (Windows) 
venv\Scripts\activate

Step 3: Install Dependencies

# Install dependencies
pip install -r requirements.txt

Step 4: Download Pretrained Model

# Download model
python -c "
from huggingface_hub import hf_hub_download
hf_hub_download(
    repo_id='thecodeworm/clearspeech-unet',
    filename='best_model.pt',
    local_dir='enhancement_model/checkpoints/'
)
"

Step 5: Generate Noisy Samples

  1. Record or obtain a clean WAV sample of your own
  2. Run the generate_noisy_samples.py file on the sample to add noise, producing test inputs for the model

# Generate all noise types at multiple SNR levels
python generate_noisy_samples.py \
 --input my_clean_voice.wav \
 --output test_samples/
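The core operation behind such a script — mixing noise into clean speech at a target SNR — can be sketched as follows. This is a minimal illustration, not the actual generate_noisy_samples.py implementation; the function and variable names are hypothetical:

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix `noise` into `clean` so the clean/noise power ratio equals `snr_db`."""
    # Match lengths by tiling then truncating the noise
    reps = int(np.ceil(len(clean) / len(noise)))
    noise = np.tile(noise, reps)[: len(clean)]

    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2)
    # Target noise power for the requested SNR: P_clean / P_noise = 10^(snr_db/10)
    target_noise_power = clean_power / (10 ** (snr_db / 10))
    noise = noise * np.sqrt(target_noise_power / noise_power)
    return clean + noise
```

Sweeping `snr_db` over a few values (e.g. 0, 5, 10, 20 dB) gives a test set that exercises the model from heavily degraded to nearly clean input.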

⚑ Quick Start

Method 1: Using the API (Recommended)

Start the server:

python -m backend.app

Server starts at http://localhost:8000

Start the frontend:

cd frontend
python -m http.server 3000

Frontend starts at http://localhost:3000

Process audio:

# Full pipeline (enhance + transcribe)
curl -X POST "http://localhost:8000/process" \
  -F "file=@your_audio.wav" \
  | jq .

# Enhance only
curl -X POST "http://localhost:8000/enhance" \
  -F "file=@your_audio.wav" \
  -o enhanced_output.wav

# Transcribe only
curl -X POST "http://localhost:8000/transcribe" \
  -F "file=@your_audio.wav" \
  -F "enhance=true" \
  | jq .

Method 2: Using Python

from backend.inference_pipeline import EnhancementPipeline

# Initialize pipeline
pipeline = EnhancementPipeline(
    cnn_checkpoint_path="enhancement_model/checkpoints/best_model.pt",
    whisper_model_name="base",
    device="cpu"  # or "cuda" or "mps"
)

# Process audio
result = pipeline.process("path/to/noisy_audio.wav")

print(f"Transcript: {result['transcript']}")
print(f"Duration: {result['duration']:.2f}s")

# Save enhanced audio
import soundfile as sf
sf.write("enhanced.wav", result['enhanced_audio'], result['sample_rate'])

Method 3: Command Line

# Enhance audio file
python enhancement_model/infer.py \
  --checkpoint enhancement_model/checkpoints/best_model.pt \
  --input noisy_audio.wav \
  --output enhanced_audio.wav \
  --comparison  # Creates stereo comparison file
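The --comparison flag produces a stereo file with the original audio on one channel and the enhanced audio on the other, which makes A/B listening easy. Conceptually it amounts to the following sketch (channel order and the exact behavior in infer.py may differ):

```python
import numpy as np

def make_comparison(noisy: np.ndarray, enhanced: np.ndarray) -> np.ndarray:
    """Stack noisy (left) and enhanced (right) into an (n_samples, 2) array."""
    n = min(len(noisy), len(enhanced))  # guard against off-by-a-few length drift
    return np.stack([noisy[:n], enhanced[:n]], axis=1)
```

Write the result with soundfile.write (as in the Python example above); most players render column 0 as the left channel.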

πŸ“š API Documentation

Interactive Docs

Once the server is running, visit http://localhost:8000/docs for the interactive Swagger UI (or http://localhost:8000/redoc for ReDoc).

Endpoints

POST /process

Process audio with enhancement and transcription.

Request:

curl -X POST "http://localhost:8000/process" \
  -F "file=@audio.wav" \
  -F "language=en" \
  -F "skip_enhancement=false"

Response:

{
  "success": true,
  "transcript": "Transcribed text here",
  "duration": 3.5,
  "language": "en",
  "enhanced_audio_url": "/download/enhanced_123.wav",
  "segments": [...],
  "processing_time": 2.3
}
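A client can consume this response with nothing beyond the standard library; a minimal sketch (the summarize helper is hypothetical, but the field names follow the response shown above):

```python
def summarize(resp: dict) -> str:
    """Render a one-line summary of a /process response."""
    if not resp.get("success"):
        return "processing failed"
    return (f"[{resp['language']}] {resp['transcript']} "
            f"({resp['duration']:.1f}s audio, "
            f"{resp['processing_time']:.1f}s to process)")
```

The `enhanced_audio_url` field is a relative path; prepend the server base URL (e.g. http://localhost:8000) to fetch the enhanced file.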

POST /enhance

Enhance audio only (no transcription).

Request:

curl -X POST "http://localhost:8000/enhance" \
  -F "file=@audio.wav" \
  -o enhanced.wav

Response: Enhanced audio file (WAV)

POST /transcribe

Transcribe audio with optional enhancement.

Request:

curl -X POST "http://localhost:8000/transcribe" \
  -F "file=@audio.wav" \
  -F "language=en" \
  -F "enhance=true"

Response:

{
  "success": true,
  "transcript": "Transcribed text",
  "duration": 3.5,
  "language": "en",
  "segments": [...]
}

GET /download/{filename}

Download enhanced audio file.

GET /health

Health check endpoint.

πŸ“ Project Structure

ClearSpeech/
β”œβ”€β”€ backend/                    # FastAPI backend
β”‚   β”œβ”€β”€ app.py                 # Main API server
β”‚   β”œβ”€β”€ inference_pipeline.py  # Processing pipeline
β”‚   └── requirements.txt
β”œβ”€β”€ enhancement_model/          # U-Net model
β”‚   β”œβ”€β”€ model.py               # U-Net architecture
β”‚   β”œβ”€β”€ dataset.py             # PyTorch dataset
β”‚   β”œβ”€β”€ train.py               # Training script
β”‚   β”œβ”€β”€ infer.py               # Inference script
β”‚   β”œβ”€β”€ checkpoints/           # Trained models
β”‚   β”‚   └── best_model.pt
β”‚   └── requirements.txt
β”œβ”€β”€ data/                       # Training/test data
β”‚   β”œβ”€β”€ audio_clean/           # Clean audio
β”‚   β”œβ”€β”€ audio_raw/             # Noisy audio
β”‚   β”œβ”€β”€ metadata/
β”‚   β”‚   └── metadata.json      # Dataset metadata
β”‚   └── spectrograms/          # Mel-spectrograms
β”‚       β”œβ”€β”€ clean/
β”‚       └── noisy/
β”œβ”€β”€ frontend/                   # Web interface (optional)
β”‚   β”œβ”€β”€ index.html
β”‚   └── script.js
β”œβ”€β”€ tests/                      # Test files
β”‚   └── test_backend.py
β”œβ”€β”€ README.md
└── requirements.txt

🀝 Contributing

We welcome contributions! Here's how:

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Commit changes: git commit -m 'Add amazing feature'
  4. Push to branch: git push origin feature/amazing-feature
  5. Open a Pull Request

Development Setup

# Install dev dependencies
pip install -r requirements-dev.txt

# Run tests before committing
python -m pytest tests/

# Format code
black backend/ enhancement_model/

πŸ™ Acknowledgments

πŸ“§ Contact

Project Maintainers: Aditya Chanda, Josh Pal, Advik Kumar Singh

Project Link: https://github.com/thecodeworm/ClearSpeech

⭐ Show Your Support

Give a ⭐️ if this project helped you!


Built with ❀️ using PyTorch and FastAPI
