W2V-BERT 2.0 ASR Adapters

This repository contains three per-language bottleneck adapters for automatic speech recognition (ASR), trained on top of the frozen facebook/w2v-bert-2.0 encoder.

Model Description

  • Base Model: facebook/w2v-bert-2.0 (600M parameters, frozen)
  • Adapter Architecture: MMS-style bottleneck adapters (dim=64)
  • Decoder: Lightweight transformer decoder (1 layer)
  • Training: CTC loss with an extended vocabulary for double vowels (see the note after this list)
  • Average WER: 47.95%
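
A note on the extended vocabulary: CTC cannot emit the same character twice in a row without a blank in between, which makes doubled (long) vowels hard to recover, and is presumably why dedicated double-vowel tokens are added to each language's vocabulary. The snippet below only illustrates the idea; the actual token inventory is in each adapter's vocab.json.

# Illustrative only; see vocab.json for the real tokens.
single_vowels = list("aeiou")
double_vowel_tokens = [v + v for v in single_vowels]  # "aa", "ee", "ii", "oo", "uu" become single CTC symbols
print(double_vowel_tokens)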

Trained Adapters

Adapter          Language          WER       Train Samples
kam_Latn         Kamba             29.87%    14968
kik_Latn         Kikuyu            15.20%    14966
lug_Latn_salt    Luganda (SALT)    98.79%    5002

Architecture

The model uses:

  1. Frozen w2v-bert-2.0 encoder - Extracts audio representations
  2. Bottleneck adapter - Language-specific adaptation (trainable)
  3. Lightweight decoder - Transformer decoder block (trainable)
  4. LM head - Per-language vocabulary projection (trainable)

Audio → Encoder (frozen) → Adapter → Decoder → LayerNorm → LM Head → Text
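
For reference, a minimal sketch of an MMS-style bottleneck adapter block, assuming the usual down-project / nonlinearity / up-project design with a residual connection and the encoder's 1024-dim hidden size; class and attribute names are illustrative, not taken from this repository's training code.

import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Illustrative bottleneck adapter applied on top of the frozen encoder output."""

    def __init__(self, hidden_size: int = 1024, adapter_dim: int = 64):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_size)
        self.down_proj = nn.Linear(hidden_size, adapter_dim)  # squeeze to the 64-dim bottleneck
        self.act = nn.GELU()
        self.up_proj = nn.Linear(adapter_dim, hidden_size)    # expand back to the encoder width

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        residual = hidden_states
        x = self.norm(hidden_states)
        x = self.up_proj(self.act(self.down_proj(x)))
        return residual + x  # residual connection keeps the frozen encoder features intact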

Usage

Each adapter folder contains:

  • adapter_weights.pt - Bottleneck adapter weights
  • decoder_weights.pt - Decoder block weights
  • lm_head_weights.pt - Language model head weights
  • final_norm_weights.pt - Final layer norm weights
  • vocab.json - Language-specific vocabulary
  • adapter_config.json - Adapter configuration
  • metrics.json - Training metrics
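
To pull all of these files for one language in a single call, huggingface_hub's snapshot_download with an allow_patterns filter works; the pattern below is an example for the kik_Latn adapter.

from huggingface_hub import snapshot_download

# Download only the kik_Latn/ folder from the repository and return the local path
local_dir = snapshot_download(
    "mutisya/w2v-bert-adapters-3lang-e10-25_52-v4",
    allow_patterns=["kik_Latn/*"],
)
print(local_dir)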

Loading an Adapter

import json

import torch
from huggingface_hub import hf_hub_download
from transformers import Wav2Vec2BertProcessor

repo_id = "mutisya/w2v-bert-adapters-3lang-e10-25_52-v4"

# Load the processor for a specific language (e.g. kik_Latn for Kikuyu)
adapter_id = "kik_Latn"
processor = Wav2Vec2BertProcessor.from_pretrained(repo_id, subfolder=adapter_id)

# Load the adapter configuration
config_path = hf_hub_download(repo_id, f"{adapter_id}/adapter_config.json")
with open(config_path) as f:
    adapter_config = json.load(f)

# Load the language-specific trainable weights
adapter_weights = torch.load(
    hf_hub_download(repo_id, f"{adapter_id}/adapter_weights.pt"),
    map_location="cpu",
)
decoder_weights = torch.load(
    hf_hub_download(repo_id, f"{adapter_id}/decoder_weights.pt"),
    map_location="cpu",
)
lm_head_weights = torch.load(
    hf_hub_download(repo_id, f"{adapter_id}/lm_head_weights.pt"),
    map_location="cpu",
)
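
The remaining files complete the language-specific head. Below is a minimal continuation of the example above, reusing the same repo_id and adapter_id; how the loaded state dicts are wired into modules depends on the original training code (adapter_config.json records the relevant dimensions), so the decode step is sketched in comments only.

from transformers import Wav2Vec2BertModel

# Final LayerNorm applied before the LM head
final_norm_weights = torch.load(
    hf_hub_download(repo_id, f"{adapter_id}/final_norm_weights.pt"),
    map_location="cpu",
)

# Language-specific CTC vocabulary (token -> id)
with open(hf_hub_download(repo_id, f"{adapter_id}/vocab.json")) as f:
    vocab = json.load(f)
id_to_token = {i: t for t, i in vocab.items()}

# Frozen base encoder from the Hub
encoder = Wav2Vec2BertModel.from_pretrained("facebook/w2v-bert-2.0").eval()

# After wiring adapter/decoder/LM-head modules that match the training-time
# definitions, decoding is standard greedy CTC over the LM-head logits:
#   ids = logits.argmax(dim=-1).tolist()
#   collapsed = [i for i, prev in zip(ids, [None] + ids[:-1]) if i != prev]
#   text = "".join(id_to_token[i] for i in collapsed if id_to_token[i] != "<pad>")
# ("<pad>" is commonly the CTC blank in wav2vec2-style vocabularies; check vocab.json.)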

Training Configuration

  • Epochs: 10
  • Learning Rate: 0.0005
  • Batch Size: 16 × 4 (effective: 64)
  • Extended Vocabulary: True
  • Adapter Dimension: 64
  • Decoder Layers: 1
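
For context, a sketch of how these hyperparameters map onto a standard transformers TrainingArguments setup, assuming the 16 × 4 batch size means a per-device batch of 16 with 4 gradient-accumulation steps; the actual training script is not part of this repository, so names and paths here are illustrative.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="w2v-bert-adapter-kik_Latn",  # illustrative output path
    num_train_epochs=10,
    learning_rate=5e-4,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,           # effective batch size 16 x 4 = 64
)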

Supported Languages

The following languages have trained adapters:

  • Kamba (kam_Latn): WER 29.87%
  • Kikuyu (kik_Latn): WER 15.20%
  • Luganda (SALT) (lug_Latn_salt): WER 98.79%

License

Apache 2.0

Citation

@misc{w2vbert-asr-adapters,
  author = {Mutisya},
  title = {W2V-BERT 2.0 ASR Adapters for African Languages},
  year = {2024},
  publisher = {HuggingFace},
  url = {https://huggingface.co/mutisya/w2v-bert-adapters-3lang-e10-25_52-v4}
}