This repository provides the checkpoints used in the paper:

Diffusion-based Frameworks for Unsupervised Speech Enhancement.

Please also see the main GitHub repository for complete usage instructions.

About the checkpoints:

  • separate_wsjqut_speech_modeling.ckpt: Audio-only diffusion model trained separately on WSJ0 speech dataset (~5.2 M parameters).

  • separate_wsjqut_noise_modeling.ckpt: Audio-only diffusion model trained separately on QUT noise dataset (~5.2 M parameters).

  • joint_wsjqut_speech_noise_modeling.ckpt: Audio-only diffusion model trained jointly on WSJ0 speech and QUT noise datasets (~5.9 M parameters).

  • separate_vbdmd_speech_modeling.ckpt: Audio-only diffusion model trained separately on VoiceBank speech dataset (~5.2 M parameters).

  • separate_vbdmd_noise_modeling.ckpt: Audio-only diffusion model trained separately on DEMAND noise dataset (~5.2 M parameters).

  • joint_vbdmd_speech_noise_modeling.ckpt: Audio-only diffusion model trained jointly on VoiceBank speech and DEMAND noise datasets (~5.9 M parameters).
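The parameter counts listed above can be verified directly from a checkpoint's state dict. The sketch below is a generic PyTorch illustration, not the paper's loading API: `count_parameters` is a hypothetical helper, and it assumes Lightning-style `.ckpt` files that store their weights under a `state_dict` key (fall back to the raw object otherwise). It is demonstrated on a tiny dummy file standing in for the real checkpoints.

```python
import os
import tempfile

import torch


def count_parameters(ckpt_path: str) -> int:
    """Return the total number of weights stored in a checkpoint file."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    # Lightning-style checkpoints keep weights under "state_dict";
    # otherwise assume the file is already a plain state dict.
    state_dict = ckpt.get("state_dict", ckpt)
    return sum(t.numel() for t in state_dict.values())


# Demonstration on a small dummy checkpoint (a stand-in for e.g.
# separate_wsjqut_speech_modeling.ckpt, which should report ~5.2 M).
path = os.path.join(tempfile.mkdtemp(), "dummy.ckpt")
torch.save({"state_dict": {"weight": torch.zeros(10, 5),
                           "bias": torch.zeros(10)}}, path)
print(count_parameters(path))
```

Pointing `count_parameters` at one of the downloaded checkpoints should reproduce the approximate counts quoted in the list above.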

