yuriyvnv 
posted an update 15 days ago
🎯 WAVe: 1B Multimodal Embedding Model for Word-Level Speech Quality

Multimodal embeddings for speech + transcript that verify quality at the word level, not just sentence level. Catches mispronunciations, timing errors, and prosody issues that sentence-level filters miss.

📊 Impact on Portuguese ASR:
• 34% reduction in training steps
• 50% better cross-domain generalization
• 30% less synthetic data needed
• Word-aligned attention finds errors other methods miss

🏗️ Architecture:
• Text: XLM-RoBERTa (278M params)
• Audio: Wav2Vec2-BERT 2.0 (581M params)
• Word Alignment: Multi-head attention + GLU (14M params)
• Total: 1B parameters
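The word-alignment head can be pictured as cross-attention from text-token states to audio-frame states, gated by a GLU. This is a hypothetical sketch of that idea, not the released implementation; the class name, hidden size, and head count are assumptions.

```python
import torch
import torch.nn as nn

class WordAligner(nn.Module):
    """Hypothetical word-alignment head: each text token cross-attends over
    the audio frames, and a GLU gates the aligned representation.
    Dimensions and structure are assumptions, not WAVe's actual code."""
    def __init__(self, dim=1024, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Linear doubles the width, GLU halves it back with a learned gate
        self.glu = nn.Sequential(nn.Linear(dim, 2 * dim), nn.GLU(dim=-1))

    def forward(self, text_states, audio_states):
        # query = text tokens, key/value = audio frames
        aligned, _ = self.attn(text_states, audio_states, audio_states)
        return self.glu(aligned)

text = torch.randn(1, 5, 1024)     # 5 word/token states
audio = torch.randn(1, 200, 1024)  # 200 audio-frame states
out = WordAligner()(text, audio)
print(out.shape)  # torch.Size([1, 5, 1024]) — one aligned vector per word
```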

from transformers import AutoModel, AutoProcessor

processor = AutoProcessor.from_pretrained(
    "yuriyvnv/WAVe-1B-Multimodal-PT",
    trust_remote_code=True
)
model = AutoModel.from_pretrained(
    "yuriyvnv/WAVe-1B-Multimodal-PT",
    trust_remote_code=True
)

# Assess speech-transcript alignment (audio_array: 16 kHz mono waveform)
inputs = processor(text="Olá, como está?", audio=audio_array, sampling_rate=16000, return_tensors="pt")
quality = model(**inputs).quality_score.item()


Perfect for filtering synthetic speech datasets before ASR training.
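As a concrete filtering recipe, a minimal sketch: score each synthetic example and keep only those above a cutoff. The `filter_dataset` helper, the field names, and the 0.5 threshold are assumptions; calibrate the threshold on a small labelled subset of your own data.

```python
# Hypothetical filtering loop (threshold and field names are assumptions)
THRESHOLD = 0.5

def filter_dataset(examples, processor, model):
    """Keep only examples whose WAVe quality score clears THRESHOLD."""
    kept = []
    for ex in examples:
        inputs = processor(
            text=ex["transcript"],
            audio=ex["audio"],
            sampling_rate=16000,
            return_tensors="pt",
        )
        score = model(**inputs).quality_score.item()
        if score >= THRESHOLD:
            kept.append(ex)
    return kept
```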

Model: yuriyvnv/WAVe-1B-Multimodal-PT
Code to create WAVe: https://github.com/yuriyvnv/WAVe
#multimodal #speech #embeddings #asr
#syntheticdata #qualityassessment

Hello everyone, yesterday there were minor problems that prevented use of the embedding model, mainly caused by the Processor class.
The team has already fixed the bugs.
If you still hit a problem, first delete the cached model (in your Hugging Face .cache folder) and redownload it; if the issue persists, open a thread on the model page.
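Clearing the cache can be scripted. A minimal sketch, assuming the default huggingface_hub cache layout (`models--<org>--<name>` folders under `~/.cache/huggingface/hub`); adjust the path if you have set HF_HOME:

```python
# Hypothetical cache-clearing snippet: removing the model folder forces a
# fresh download on the next from_pretrained call.
import os
import shutil

cache_dir = os.path.expanduser("~/.cache/huggingface/hub")
model_dir = os.path.join(cache_dir, "models--yuriyvnv--WAVe-1B-Multimodal-PT")
if os.path.isdir(model_dir):
    shutil.rmtree(model_dir)
```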

🔥 Hello everyone, given the community's increased interest in WAVe for Portuguese, the team has retrained the model for over 100 epochs to extend learning. The results are much better than those of the previous 30-epoch version.
Key improvements:

| Metric             | 30 ep | 100 ep | Change |
|--------------------|-------|--------|--------|
| Loss               | 0.49  | 0.22   | -56%   |
| Alignment Gap      | 0.079 | 0.118  | +49%   |
| Corrupt Similarity | 0.31  | 0.23   | -25%   |

The biggest win is the alignment gap growing by nearly 50% (0.079 to 0.118): the model is now much better at catching word-level errors such as mispronunciations and timing artifacts. Corrupt pairs are penalized harder (0.23 vs 0.31), so the filtering threshold becomes more reliable.

Same repo, same API, drop-in replacement:

model = AutoModel.from_pretrained("yuriyvnv/WAVe-1B-Multimodal-PT", trust_remote_code=True)

The updated README on the model card includes side-by-side training curves for both versions; check it out.
