Zen3 Audio

Zen3 Audio is an audio processing model for speech synthesis and audio understanding.

Overview

Built on the Zen MoDE (Mixture of Distilled Experts) architecture, with 1.5B parameters.
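
This card does not spell out the Zen MoDE internals, but mixture-of-experts layers generally route each token through a small subset of expert feed-forward networks selected by a learned gate. Below is a minimal PyTorch sketch of a top-k gated MoE layer; the TopKMoELayer class and its dimensions, expert count, and top-k value are illustrative assumptions, not Zen3 Audio's actual configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    """Illustrative top-k gated mixture-of-experts layer (not Zen MoDE's real config)."""
    def __init__(self, d_model=768, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, num_experts)  # router: scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (batch, seq, d_model)
        scores = self.gate(x)                           # (batch, seq, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep the top-k experts per token
        weights = F.softmax(weights, dim=-1)            # normalize over the selected experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e                 # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

# Usage: route a dummy batch of token embeddings through the layer.
layer = TopKMoELayer()
y = layer(torch.randn(2, 10, 768))
print(y.shape)  # torch.Size([2, 10, 768])

Sparse routing like this keeps per-token compute roughly constant as the expert count grows, which is how MoE models pack more parameters into a fixed inference budget.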

Developed by Hanzo AI and the Zoo Labs Foundation.

Quick Start

from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
import librosa
import torch

model_id = "zenlm/zen3-audio"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Load audio at 16 kHz, the sampling rate the processor expects
audio, sr = librosa.load("audio.wav", sr=16000)

# Run inference and decode the generated tokens to text
inputs = processor(audio, sampling_rate=sr, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])

Model Details

Attribute       Value
Parameters      1.5B
Architecture    Zen MoDE (Mixture of Distilled Experts)
Context         30 s of audio
License         Apache 2.0
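
Because the context window covers about 30 s of audio, longer recordings need to be split before inference. A minimal sketch follows, assuming model and processor are already loaded as in the Quick Start; the fixed, non-overlapping 30 s chunks and the long_audio.wav filename are illustrative choices, not a documented API.

import librosa

# Transcribe a long recording in 30 s chunks, reusing `model` and `processor`
# from the Quick Start above.
audio, sr = librosa.load("long_audio.wav", sr=16000)
chunk_len = 30 * sr  # 30 s of samples at 16 kHz, matching the context window

texts = []
for start in range(0, len(audio), chunk_len):
    chunk = audio[start:start + chunk_len]
    inputs = processor(chunk, sampling_rate=sr, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs)
    texts.append(processor.batch_decode(outputs, skip_special_tokens=True)[0])

print(" ".join(texts))

Hard cuts at chunk boundaries can split words; overlapping windows or silence-based segmentation would reduce that at the cost of extra compute.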

License

Apache 2.0
