# Empathic-Insight-Voice-Plus
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1WR-B6j--Y5RdhIyRGF_tJ3YdFF8BkUA2)

**Empathic-Insight-Voice-Plus** extends [laion/Empathic-Insight-Voice-Small](https://huggingface.co/laion/Empathic-Insight-Voice-Small) with **4 additional audio quality expert models**. These new experts predict overall audio quality, speech quality, background noise quality, and content enjoyment from the same frozen [laion/BUD-E-Whisper](https://huggingface.co/laion/BUD-E-Whisper) encoder embeddings.

This repository is fully compatible with the original Empathic-Insight-Voice-Small suite. It uses the same Whisper encoder (based on OpenAI Whisper Small) and the same MLP hidden architecture (`proj=64, hidden=[64, 32, 16]`). The only difference is that the new quality experts use **pooled features** (mean + min + max + std pooling over the encoder sequence dimension, yielding a 3072-d input) instead of the full flattened sequence, making them significantly more compact (~200K parameters each vs. ~73.7M for the full-sequence emotion experts).
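
For reference, the pooling step can be sketched in isolation with a dummy tensor (shapes taken from the description above; the full inference examples below apply the same operation to real encoder output):

```python
import torch

# Dummy encoder output with the shapes described above: [batch, seq_len, embed_dim]
embedding = torch.randn(1, 1500, 768)

# mean + min + max + std pooling over the sequence dimension -> [1, 4 * 768] = [1, 3072]
pooled = torch.cat(
    [
        embedding.mean(dim=1),
        embedding.min(dim=1).values,
        embedding.max(dim=1).values,
        embedding.std(dim=1),
    ],
    dim=1,
)
print(pooled.shape)  # torch.Size([1, 3072])
```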

This work is based on the research paper:
**"EMONET-VOICE: A Fine-Grained, Expert-Verified Benchmark for Speech Emotion Detection"**


## Example Video Analyses (Top 3 Emotions)
<div style='display: flex; flex-wrap: wrap; justify-content: flex-start; gap: 15px;'>
            <div style='flex: 0 1 auto; margin-bottom: 15px; text-align: center; width: 480px; max-width: 480px;'>
                <a href='https://www.youtube.com/watch?v=TsTVKCmqHhk' target='_blank' title='Watch video TsTVKCmqHhk'>
                    <img src='https://img.youtube.com/vi/TsTVKCmqHhk/hqdefault.jpg' alt='YouTube Thumbnail for TsTVKCmqHhk' style='width: 100%; height: auto; border: 1px solid #ccc; border-radius: 4px; display: block;'>
                </a>
                <p style='font-size: 0.8em; margin-top: 5px; word-break: break-all;'>ID: TsTVKCmqHhk</p>
            </div>
            <div style='flex: 0 1 auto; margin-bottom: 15px; text-align: center; width: 480px; max-width: 480px;'>
                <a href='https://www.youtube.com/watch?v=sErqFgL4vA8' target='_blank' title='Watch video sErqFgL4vA8'>
                    <img src='https://img.youtube.com/vi/sErqFgL4vA8/hqdefault.jpg' alt='YouTube Thumbnail for sErqFgL4vA8' style='width: 100%; height: auto; border: 1px solid #ccc; border-radius: 4px; display: block;'>
                </a>
                <p style='font-size: 0.8em; margin-top: 5px; word-break: break-all;'>ID: sErqFgL4vA8</p>
            </div>
            <div style='flex: 0 1 auto; margin-bottom: 15px; text-align: center; width: 480px; max-width: 480px;'>
                <a href='https://www.youtube.com/watch?v=BUnfuiwE_IM' target='_blank' title='Watch video BUnfuiwE_IM'>
                    <img src='https://img.youtube.com/vi/BUnfuiwE_IM/hqdefault.jpg' alt='YouTube Thumbnail for BUnfuiwE_IM' style='width: 100%; height: auto; border: 1px solid #ccc; border-radius: 4px; display: block;'>
                </a>
                <p style='font-size: 0.8em; margin-top: 5px; word-break: break-all;'>ID: BUnfuiwE_IM</p>
            </div>
            <div style='flex: 0 1 auto; margin-bottom: 15px; text-align: center; width: 480px; max-width: 480px;'>
                <a href='https://www.youtube.com/watch?v=dDrmjcUq8W4' target='_blank' title='Watch video dDrmjcUq8W4'>
                    <img src='https://img.youtube.com/vi/dDrmjcUq8W4/hqdefault.jpg' alt='YouTube Thumbnail for dDrmjcUq8W4' style='width: 100%; height: auto; border: 1px solid #ccc; border-radius: 4px; display: block;'>
                </a>
                <p style='font-size: 0.8em; margin-top: 5px; word-break: break-all;'>ID: dDrmjcUq8W4</p>
            </div>
            </div>


## Model Description

The Empathic-Insight-Voice-Plus suite combines the original 54 emotion/attribute MLP experts from [Empathic-Insight-Voice-Small](https://huggingface.co/laion/Empathic-Insight-Voice-Small) with 4 new audio quality regression experts. All models use embeddings from the fine-tuned Whisper model [laion/BUD-E-Whisper](https://huggingface.co/laion/BUD-E-Whisper).

The original emotion experts were trained on the large-scale, multilingual synthetic voice-acting dataset LAION'S GOT TALENT (~5,000 hours) and on an "in the wild" dataset of voice snippets (~5,000 hours). The new quality experts were trained on the [mitermix/balanced-audio-score-datasets](https://huggingface.co/datasets/mitermix/balanced-audio-score-datasets) dataset.

The quality scores are distilled from two established audio quality models:
- **Overall Quality, Speech Quality, Background Quality**: Distilled from Microsoft's [DNSMOS](https://github.com/microsoft/DNS-Challenge) (Deep Noise Suppression Mean Opinion Score), a non-intrusive speech quality estimator.
- **Content Enjoyment**: Distilled from Meta's [AudioBox](https://ai.meta.com/research/publications/audiobox-unified-audio-generation-with-natural-language-prompts/) content enjoyment scoring model.


## New Quality Expert Scores

| Expert | Description | Source | Expected Range | Typical Mean | Unit |
|--------|-------------|--------|---------------|--------------|------|
| **Overall Quality** | Overall perceived audio quality score. Higher is better. | DNSMOS | 1.0 - 3.7 | ~2.4 | MOS-like |
| **Speech Quality** | Quality of the speech signal itself (clarity, naturalness). Higher is better. | DNSMOS | 1.0 - 2.4 | ~1.9 | MOS-like |
| **Background Quality** | Quality of the background (absence of noise/artifacts). Higher is better. | DNSMOS | 1.0 - 4.3 | ~3.2 | MOS-like |
| **Content Enjoyment** | How engaging/enjoyable the spoken content is. Higher is better. | Meta AudioBox | 1.9 - 5.1 | ~4.1 | MOS-like |
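
If you want roughly comparable values across the four experts, one option is to rescale each raw score using the expected ranges above. The helper below is a hypothetical convenience function, not part of the released models; the range endpoints are taken from the table and are approximate:

```python
# Hypothetical helper: rescale a raw expert score to [0, 1] using the
# approximate expected ranges from the table above.
EXPECTED_RANGES = {
    "Overall_Quality": (1.0, 3.7),
    "Speech_Quality": (1.0, 2.4),
    "Background_Quality": (1.0, 4.3),
    "Content_Enjoyment": (1.9, 5.1),
}

def normalize_score(label: str, score: float) -> float:
    """Clamp and rescale a raw MOS-like score into [0, 1]."""
    lo, hi = EXPECTED_RANGES[label]
    return min(max((score - lo) / (hi - lo), 0.0), 1.0)
```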


### Validation Performance

| Expert | Val MAE | Pearson r | Val Samples |
|--------|---------|-----------|-------------|
| Overall Quality | 0.26 | 0.899 | 200 |
| Speech Quality | 0.30 | 0.517 | 200 |
| Background Quality | 0.35 | 0.865 | 200 |
| Content Enjoyment | 0.34 | 0.691 | 200 |
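
If you want to reproduce these numbers on your own validation split, MAE and Pearson r can be computed directly with NumPy. This is a minimal sketch; `regression_metrics` is an illustrative helper, not part of this repository:

```python
import numpy as np

def regression_metrics(preds, targets):
    """Mean absolute error and Pearson correlation, as reported above."""
    preds = np.asarray(preds, dtype=np.float64)
    targets = np.asarray(targets, dtype=np.float64)
    mae = float(np.mean(np.abs(preds - targets)))
    pearson_r = float(np.corrcoef(preds, targets)[0, 1])
    return mae, pearson_r
```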


## How to Use

### Full Inference: Emotion Scores + Quality Scores

The following example loads both the original 54 emotion/attribute experts and the 4 new quality experts, then runs inference on a single audio file.

```python
import torch
import torch.nn as nn
import numpy as np
import librosa
from pathlib import Path
from transformers import WhisperModel, WhisperFeatureExtractor
from huggingface_hub import snapshot_download
import gc

# --- Configuration ---
SAMPLING_RATE = 16000
MAX_AUDIO_SECONDS = 30.0
WHISPER_MODEL_ID = "laion/BUD-E-Whisper"

# Original emotion experts
HF_EMOTION_REPO_ID = "laion/Empathic-Insight-Voice-Small"
# New quality experts (this repo)
HF_QUALITY_REPO_ID = "laion/Empathic-Insight-Voice-Plus"

WHISPER_SEQ_LEN = 1500
WHISPER_EMBED_DIM = 768
PROJECTION_DIM = 64
MLP_HIDDEN_DIMS = [64, 32, 16]
MLP_DROPOUTS = [0.0, 0.1, 0.1, 0.1]
POOLED_DIM = 3072  # 4 * 768 (mean + min + max + std)

DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")


# --- Model Definitions ---

class FullEmbeddingMLP(nn.Module):
    """Original emotion expert architecture (full sequence input)."""
    def __init__(self, seq_len, embed_dim, projection_dim, mlp_hidden_dims, mlp_dropout_rates):
        super().__init__()
        self.flatten = nn.Flatten()
        self.proj = nn.Linear(seq_len * embed_dim, projection_dim)
        layers = [nn.ReLU(), nn.Dropout(mlp_dropout_rates[0])]
        current_dim = projection_dim
        for i, h_dim in enumerate(mlp_hidden_dims):
            layers.extend([nn.Linear(current_dim, h_dim), nn.ReLU(), nn.Dropout(mlp_dropout_rates[i + 1])])
            current_dim = h_dim
        layers.append(nn.Linear(current_dim, 1))
        self.mlp = nn.Sequential(*layers)

    def forward(self, x):
        if x.ndim == 4 and x.shape[1] == 1:
            x = x.squeeze(1)
        return self.mlp(self.proj(self.flatten(x)))


class PooledEmbeddingMLP(nn.Module):
    """New quality expert architecture (pooled features input)."""
    def __init__(self, input_dim, projection_dim, mlp_hidden_dims, mlp_dropout_rates):
        super().__init__()
        self.proj = nn.Linear(input_dim, projection_dim)
        layers = [nn.ReLU(), nn.Dropout(mlp_dropout_rates[0])]
        current_dim = projection_dim
        for i, h_dim in enumerate(mlp_hidden_dims):
            layers.extend([nn.Linear(current_dim, h_dim), nn.ReLU(), nn.Dropout(mlp_dropout_rates[i + 1])])
            current_dim = h_dim
        layers.append(nn.Linear(current_dim, 1))
        self.mlp = nn.Sequential(*layers)

    def forward(self, x):
        return self.mlp(self.proj(x))


def pool_embedding(embedding):
    """Pool encoder output [1, seq_len, 768] -> [1, 3072] using mean+min+max+std."""
    mean_pool = embedding.mean(dim=1)
    min_pool = embedding.min(dim=1).values
    max_pool = embedding.max(dim=1).values
    std_pool = embedding.std(dim=1)
    return torch.cat([mean_pool, min_pool, max_pool, std_pool], dim=1)


# Quality expert file mapping
QUALITY_EXPERTS = {
    "Overall_Quality": "model_score_overall_quality_best.pth",
    "Speech_Quality": "model_score_speech_quality_best.pth",
    "Background_Quality": "model_score_background_quality_best.pth",
    "Content_Enjoyment": "model_score_content_enjoyment_best.pth",
}


@torch.no_grad()
def analyze_audio(audio_path: str):
    """Run full inference: emotions + quality scores."""

    # Load Whisper encoder
    print("Loading Whisper encoder...")
    feature_extractor = WhisperFeatureExtractor.from_pretrained(WHISPER_MODEL_ID)
    whisper = WhisperModel.from_pretrained(WHISPER_MODEL_ID, low_cpu_mem_usage=True)
    encoder = whisper.get_encoder().to(DEVICE).eval()
    del whisper; gc.collect()

    # Load audio
    print(f"Loading audio: {audio_path}")
    waveform, sr = librosa.load(audio_path, sr=SAMPLING_RATE, mono=True)
    max_samples = int(MAX_AUDIO_SECONDS * SAMPLING_RATE)
    if len(waveform) > max_samples:
        waveform = waveform[:max_samples]
    print(f"  Duration: {len(waveform)/SAMPLING_RATE:.2f}s")

    # Get encoder embedding
    inputs = feature_extractor(waveform, sampling_rate=SAMPLING_RATE, return_tensors="pt")
    input_features = inputs.input_features.to(DEVICE)
    embedding = encoder(input_features).last_hidden_state  # [1, 1500, 768]

    # Prepare pooled features for quality experts
    pooled = pool_embedding(embedding.float())  # [1, 3072]

    # Download model repos
    emotion_dir = Path(snapshot_download(HF_EMOTION_REPO_ID, allow_patterns=["*.pth"]))
    quality_dir = Path(snapshot_download(HF_QUALITY_REPO_ID, allow_patterns=["*.pth"]))

    results = {}

    # --- Run emotion experts (full sequence) ---
    print("\n--- Emotion Scores ---")
    embedding_cpu = embedding.cpu().float()
    for pth_file in sorted(emotion_dir.glob("model_*_best.pth")):
        name = pth_file.stem.replace("model_", "").replace("_best", "")
        model = FullEmbeddingMLP(WHISPER_SEQ_LEN, WHISPER_EMBED_DIM, PROJECTION_DIM, MLP_HIDDEN_DIMS, MLP_DROPOUTS)
        state_dict = torch.load(pth_file, map_location="cpu", weights_only=True)
        if any(k.startswith("_orig_mod.") for k in state_dict):
            state_dict = {k.replace("_orig_mod.", ""): v for k, v in state_dict.items()}
        model.load_state_dict(state_dict)
        model.eval()
        score = model(embedding_cpu).item()
        results[name] = score
        del model; gc.collect()

    # Print top emotions
    sorted_emotions = sorted(results.items(), key=lambda x: x[1], reverse=True)
    for name, score in sorted_emotions[:5]:
        print(f"  {name}: {score:.4f}")

    # --- Run quality experts (pooled features) ---
    print("\n--- Quality Scores ---")
    pooled_cpu = pooled.cpu().float()
    for label, filename in QUALITY_EXPERTS.items():
        pth_path = quality_dir / filename
        if not pth_path.exists():
            print(f"  {label}: model not found")
            continue
        model = PooledEmbeddingMLP(POOLED_DIM, PROJECTION_DIM, MLP_HIDDEN_DIMS, MLP_DROPOUTS)
        model.load_state_dict(torch.load(pth_path, map_location="cpu", weights_only=True))
        model.eval()
        score = model(pooled_cpu).item()
        results[label] = score
        print(f"  {label}: {score:.4f}")
        del model; gc.collect()

    return results


# --- Example Usage ---
# results = analyze_audio("your_audio_file.mp3")
```
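
A usage sketch for scoring a whole folder of files (the directory name is a placeholder). Note that `analyze_audio` as written reloads the Whisper encoder and re-downloads the expert checkpoints on every call; for large batches you may want to hoist that setup out of the loop:

```python
from pathlib import Path

# Placeholder directory; adjust to wherever your audio files live.
for audio_file in sorted(Path("audio_samples").glob("*.mp3")):
    scores = analyze_audio(str(audio_file))
    quality = {k: round(scores[k], 3) for k in QUALITY_EXPERTS if k in scores}
    print(audio_file.name, quality)
```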

### Quality Scores Only (Lightweight)

If you only need the 4 quality scores without the emotion experts:

```python
import torch
import torch.nn as nn
import numpy as np
import librosa
from transformers import WhisperModel, WhisperFeatureExtractor
from huggingface_hub import hf_hub_download

DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class PooledEmbeddingMLP(nn.Module):
    def __init__(self, input_dim=3072, projection_dim=64,
                 mlp_hidden_dims=[64, 32, 16], mlp_dropout_rates=[0.0, 0.1, 0.1, 0.1]):
        super().__init__()
        self.proj = nn.Linear(input_dim, projection_dim)
        layers = [nn.ReLU(), nn.Dropout(mlp_dropout_rates[0])]
        current_dim = projection_dim
        for i, h_dim in enumerate(mlp_hidden_dims):
            layers.extend([nn.Linear(current_dim, h_dim), nn.ReLU(), nn.Dropout(mlp_dropout_rates[i + 1])])
            current_dim = h_dim
        layers.append(nn.Linear(current_dim, 1))
        self.mlp = nn.Sequential(*layers)

    def forward(self, x):
        return self.mlp(self.proj(x))

# Load encoder
feature_extractor = WhisperFeatureExtractor.from_pretrained("laion/BUD-E-Whisper")
whisper = WhisperModel.from_pretrained("laion/BUD-E-Whisper", low_cpu_mem_usage=True)
encoder = whisper.get_encoder().to(DEVICE).eval()

# Load audio
waveform, sr = librosa.load("your_audio.mp3", sr=16000, mono=True)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    embedding = encoder(inputs.input_features.to(DEVICE)).last_hidden_state  # [1, 1500, 768]

# Pool: mean + min + max + std
emb = embedding.float()
pooled = torch.cat([emb.mean(1), emb.min(1).values, emb.max(1).values, emb.std(1)], dim=1)  # [1, 3072]

# Run quality experts
EXPERTS = {
    "Overall_Quality": "model_score_overall_quality_best.pth",
    "Speech_Quality": "model_score_speech_quality_best.pth",
    "Background_Quality": "model_score_background_quality_best.pth",
    "Content_Enjoyment": "model_score_content_enjoyment_best.pth",
}

for label, filename in EXPERTS.items():
    path = hf_hub_download("laion/Empathic-Insight-Voice-Plus", filename)
    model = PooledEmbeddingMLP()
    model.load_state_dict(torch.load(path, map_location="cpu", weights_only=True))
    model.eval()
    with torch.no_grad():
        score = model(pooled.cpu()).item()
    print(f"{label}: {score:.4f}")
```
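
When scoring many files, it is cheaper to load the four experts once and reuse them. A sketch building on the objects defined above (the file names are placeholders):

```python
# Load each quality expert once and keep it resident.
loaded_experts = {}
for label, filename in EXPERTS.items():
    path = hf_hub_download("laion/Empathic-Insight-Voice-Plus", filename)
    m = PooledEmbeddingMLP()
    m.load_state_dict(torch.load(path, map_location="cpu", weights_only=True))
    loaded_experts[label] = m.eval()

# Score several files with the same encoder and experts.
for audio_path in ["clip_a.mp3", "clip_b.mp3"]:  # placeholder file names
    wav, _ = librosa.load(audio_path, sr=16000, mono=True)
    feats = feature_extractor(wav, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        emb = encoder(feats.input_features.to(DEVICE)).last_hidden_state.float()
        p = torch.cat([emb.mean(1), emb.min(1).values, emb.max(1).values, emb.std(1)], dim=1).cpu()
        scores = {label: round(m(p).item(), 3) for label, m in loaded_experts.items()}
    print(audio_path, scores)
```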


## Architecture

The quality experts use a `PooledEmbeddingMLP` architecture:

```
Input: [batch, 3072]  (4 * 768: mean + min + max + std pooling over Whisper encoder sequence)
  -> Linear(3072, 64) -> ReLU -> Dropout(0.0)
  -> Linear(64, 64)   -> ReLU -> Dropout(0.1)
  -> Linear(64, 32)   -> ReLU -> Dropout(0.1)
  -> Linear(32, 16)   -> ReLU -> Dropout(0.1)
  -> Linear(16, 1)
Output: scalar score
```

~203K parameters per expert. Trained with Huber loss (delta=1.0), AdamW optimizer (lr=1e-3, weight_decay=1e-4), cosine annealing LR schedule over 50 epochs.
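
The full training code is not included here, but the setup described above maps onto a standard PyTorch loop. A minimal sketch, reusing the `PooledEmbeddingMLP` class defined earlier; the random tensors stand in for pooled features and target scores and are not the actual training data:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: random pooled features and MOS-like targets.
train_loader = DataLoader(
    TensorDataset(torch.randn(256, 3072), torch.rand(256) * 4.0 + 1.0),
    batch_size=32, shuffle=True,
)

model = PooledEmbeddingMLP(3072, 64, [64, 32, 16], [0.0, 0.1, 0.1, 0.1])
criterion = nn.HuberLoss(delta=1.0)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)

for epoch in range(50):
    for pooled_batch, target_batch in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(pooled_batch).squeeze(-1), target_batch)
        loss.backward()
        optimizer.step()
    scheduler.step()
```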


## Training Data

Trained on [mitermix/balanced-audio-score-datasets](https://huggingface.co/datasets/mitermix/balanced-audio-score-datasets) (32.2 GB), which contains balanced distributions of audio quality scores across 5 dimensions. Each subset contains paired audio files and JSON metadata with ground-truth scores; the four subsets used to train these experts are listed below.

| Subset | Source | Training Samples |
|--------|--------|-----------------|
| Overall Quality | DNSMOS | 39,800 |
| Speech Quality | DNSMOS | 9,800 |
| Background Quality | DNSMOS | 39,800 |
| Content Enjoyment | Meta AudioBox | 19,800 |
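
To inspect the dataset's metadata before committing to a full download, you can fetch just the JSON files from the snapshot. This is a hedged sketch: the directory layout and JSON field names are not documented here and should be checked against the dataset card:

```python
import json
from pathlib import Path
from huggingface_hub import snapshot_download

# Download only the JSON metadata (the full dataset is ~32 GB, mostly audio).
data_dir = Path(snapshot_download(
    "mitermix/balanced-audio-score-datasets",
    repo_type="dataset",
    allow_patterns=["*.json"],
))

# Print a few metadata entries to see which score fields are present.
for meta_file in sorted(data_dir.rglob("*.json"))[:3]:
    print(meta_file.relative_to(data_dir), json.loads(meta_file.read_text()))
```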


## Based On

- **Whisper Encoder**: [laion/BUD-E-Whisper](https://huggingface.co/laion/BUD-E-Whisper) (OpenAI Whisper Small, fine-tuned)
- **MLP Architecture**: Same hidden layer structure as [laion/Empathic-Insight-Voice-Small](https://huggingface.co/laion/Empathic-Insight-Voice-Small)
- **Emotion Experts**: Fully compatible with the 54 emotion/attribute experts from Empathic-Insight-Voice-Small


## Files

**New quality experts (this repo):**
- `model_score_overall_quality_best.pth` - Overall Quality expert (DNSMOS)
- `model_score_speech_quality_best.pth` - Speech Quality expert (DNSMOS)
- `model_score_background_quality_best.pth` - Background Quality expert (DNSMOS)
- `model_score_content_enjoyment_best.pth` - Content Enjoyment expert (Meta AudioBox)

**Original emotion experts** are loaded from [laion/Empathic-Insight-Voice-Small](https://huggingface.co/laion/Empathic-Insight-Voice-Small) (54 `.pth` files).


## Emotion Taxonomy

The core 40 emotion categories are (from EMONET-VOICE, Appendix A.1):
Affection, Amusement, Anger, Astonishment/Surprise, Awe, Bitterness, Concentration, Confusion, Contemplation, Contempt, Contentment, Disappointment, Disgust, Distress, Doubt, Elation, Embarrassment, Emotional Numbness, Fatigue/Exhaustion, Fear, Helplessness, Hope/Enthusiasm/Optimism, Impatience and Irritability, Infatuation, Interest, Intoxication/Altered States of Consciousness, Jealousy & Envy, Longing, Malevolence/Malice, Pain, Pleasure/Ecstasy, Pride, Relief, Sadness, Sexual Lust, Shame, Sourness, Teasing, Thankfulness/Gratitude, Triumph.

Additional vocal attributes (e.g., Valence, Arousal, Gender, Age, Pitch characteristics) are also predicted by corresponding MLP models in the suite. The full list of predictable dimensions can be inferred from the `FILENAME_PART_TO_TARGET_KEY_MAP` in the [Colab notebook](https://colab.research.google.com/drive/1WR-B6j--Y5RdhIyRGF_tJ3YdFF8BkUA2).


## Intended Use

These models are intended for research purposes in affective computing, speech emotion recognition (SER), human-AI interaction, and voice AI development. They can be used to:
*   Analyze and predict fine-grained emotional states and vocal attributes from speech.
*   Assess audio quality, speech clarity, background noise levels, and content enjoyment.
*   Serve as a baseline for developing more advanced SER and audio quality assessment systems.

**Out-of-Scope Use:**
These models are primarily trained on synthetic speech, and their generalization to spontaneous real-world speech needs further evaluation. They should not be used for making critical decisions about individuals, for surveillance, or in any manner that could lead to discriminatory outcomes or infringe on privacy without due diligence and ethical review.


## Ethical Considerations

The EMONET-VOICE suite was developed with ethical considerations as a priority:

**Privacy Preservation:** The use of synthetic voice generation avoids the privacy concerns associated with collecting real human emotional expressions, especially for sensitive states.

**Responsible Use:** These models are released for research. Users are urged to consider the ethical implications of their applications and avoid misuse, such as for emotional manipulation, surveillance, or in ways that could lead to unfair, biased, or harmful outcomes. The broader societal implications and mitigation of potential misuse of SER technology remain important ongoing considerations.