
Malayalam TTS Model

A VITS-based Malayalam text-to-speech (TTS) model.

Quick Start

import torch
from huggingface_hub import hf_hub_download

# Download model
model_path = hf_hub_download(repo_id="siyah1/malayalam-tts-vits", filename="pytorch_model.bin")

# Load checkpoint
checkpoint = torch.load(model_path, map_location='cpu')

# ... (see full documentation for usage)
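Once loaded, a `pytorch_model.bin` checkpoint is typically a state_dict (a mapping from parameter names to tensors), sometimes wrapped under a key such as `"model"` or `"state_dict"`. Below is a minimal, self-contained sketch of inspecting such a checkpoint; it uses a small toy module as a stand-in for the download, and the wrapper-key handling is an assumption about common checkpoint layouts, not a guarantee about this repository's file:

```python
import torch
import torch.nn as nn

# Stand-in for the downloaded checkpoint: save and reload a tiny module's
# state_dict. (The real checkpoint's key layout may differ.)
toy = nn.Linear(4, 2)
torch.save(toy.state_dict(), "toy_ckpt.bin")

checkpoint = torch.load("toy_ckpt.bin", map_location="cpu")

# Unwrap common wrapper keys ("model" / "state_dict") if present;
# otherwise assume the checkpoint itself is the state_dict.
state_dict = checkpoint.get("model", checkpoint.get("state_dict", checkpoint))

# Inspect parameter names and shapes before loading into a model.
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))

# Rough parameter count (useful to sanity-check against the size
# stated under Model Details).
total = sum(t.numel() for t in state_dict.values())
print(f"{total} parameters")  # 4*2 weights + 2 biases = 10
```

The same inspect-then-load pattern applies to the real checkpoint: confirming the key names and total parameter count before calling `load_state_dict` catches architecture mismatches early.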

Model Details

  • Trained epochs: 10
  • Validation loss: 0.0648
  • Parameters: ~15M
  • Architecture: VITS (Variational Inference with adversarial learning for end-to-end Text-to-Speech)

Dataset

Malayalam-TTS

Usage

For detailed usage instructions, please refer to the training repository.
