---
license: apache-2.0
language:
- en
- zh
tags:
- audio
- speech
- multimodal
- audio-language-model
- asr
- speech-recognition
library_name: transformers
pipeline_tag: audio-text-to-text
base_model:
- Qwen/Qwen3-1.7B-Base
- openai/whisper-large-v3
---

<p align="center">
<img src="assets/eureka_logo_new.png" width="200"/>
</p>

<p align="center">
<b>Eureka-Audio-Instruct</b>
</p>

<p align="center">
<a href="https://huggingface.co/cslys1999/Eureka-Audio-Instruct">
<img src="https://img.shields.io/badge/🤗%20HuggingFace-Model-yellow" alt="HuggingFace"/>
</a>
<a href="https://www.modelscope.cn/models/lys1999/Eureka-Audio-Instruct">
<img src="https://img.shields.io/badge/🤖%20ModelScope-Model-blue" alt="ModelScope"/>
</a>
<a href="https://arxiv.org/abs/2602.13954">
<img src="https://img.shields.io/badge/📑%20arXiv-Paper-red" alt="Paper"/>
</a>
</p>

We present **Eureka-Audio**, a compact yet high-performance audio language model that achieves competitive performance against models **4 to 18 times larger** across a broad range of audio understanding benchmarks. Despite containing only **1.7B parameters**, Eureka-Audio demonstrates strong performance on automatic speech recognition (ASR), audio understanding, and dense audio captioning, matching or surpassing multiple 7B to 30B audio and omni-modal baselines.

## News

* Feb 25, 2026: We release the inference code and model weights of [Eureka-Audio-Instruct](https://huggingface.co/cslys1999/Eureka-Audio-Instruct).
* Feb 17, 2026: We release the technical report of [Eureka-Audio](https://arxiv.org/abs/2602.13954).

## Table of Contents

- [Introduction](#introduction)
- [Architecture Overview](#architecture-overview)
- [Quick Start](#quick-start)
- [Evaluation](#evaluation)
  - [Automatic Speech Recognition](#automatic-speech-recognition-asr)
  - [Audio Understanding](#audio-understanding)
  - [Dense Audio Captioning](#dense-audio-captioning)
- [License](#license)
- [Acknowledgements](#acknowledgements)
- [Citation](#citation)

## Introduction

Eureka-Audio is designed as a lightweight yet powerful audio foundation model that handles a wide variety of audio understanding tasks within a single unified framework. Key features include:

* **Lightweight yet Powerful:** Achieves competitive results with only **1.7B parameters**, delivering up to **3.7x faster** decoding than larger models.
* **Universal Audio Understanding:** Handles diverse tasks such as automatic speech recognition (ASR), audio question answering, audio captioning, speech emotion recognition, and sound event classification.
* **Competitive Performance:** Matches or surpasses multiple 7B-30B audio and omni-modal baselines, despite those models being 4 to 18 times larger.
* **DataFlux Pipeline:** A closed-loop audio instruction data synthesis and verification pipeline that constructs high-quality, logically consistent supervision from raw audio.
* **Sparse MoE Adapter:** A sparsely activated Mixture-of-Experts adapter that explicitly accounts for audio heterogeneity and alleviates cross-modal optimization conflicts.
* **Open-Source:** Code and model checkpoints are released for community research and development.

## Architecture Overview

<p align="center">
<img src="assets/eureka-audio-architecture.png" width="90%"/>
</p>

Eureka-Audio consists of three main components:

1. **Audio Encoder:** A Whisper-based audio encoder that encodes raw waveforms into high-temporal-resolution acoustic representations, capturing fine-grained perceptual and semantic information in the audio signal.

2. **Sparse MoE Adapter:** A Mixture-of-Experts adapter that maps audio representations into the embedding space of the language model. This design explicitly models the heterogeneity of audio signals at both the semantic and acoustic levels, mitigating optimization conflicts while improving representational efficiency.

3. **Language Model Backbone:** Qwen3-1.7B-Base serves as the language backbone. After alignment via the MoE adapter, audio embeddings are concatenated with text token embeddings and jointly modeled by the backbone in a standard autoregressive manner.
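The sparse routing behind component 2 can be sketched in a few lines. This is an illustrative NumPy toy, not the released implementation: the expert count, top-k value, and dimensions below are arbitrary assumptions, and the real adapter's gating and expert design follow the technical report.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sparse_moe_adapter(h, w_gate, experts, top_k=2):
    """Route each audio frame h[t] to its top-k experts and mix their outputs.

    h:       (T, d_audio) encoder frames
    w_gate:  (d_audio, n_experts) gating weights
    experts: list of (d_audio, d_llm) expert projection matrices
    """
    probs = softmax(h @ w_gate)                    # (T, n_experts) routing weights
    top = np.argsort(-probs, axis=-1)[:, :top_k]   # top-k expert indices per frame
    out = np.zeros((h.shape[0], experts[0].shape[1]))
    for t in range(h.shape[0]):
        p = probs[t, top[t]]
        p = p / p.sum()                            # renormalize over selected experts
        for w, e_idx in zip(p, top[t]):
            out[t] += w * (h[t] @ experts[e_idx])  # weighted sum of expert outputs
    return out

T, d_audio, d_llm, n_experts = 5, 16, 32, 4
h = rng.standard_normal((T, d_audio))
w_gate = rng.standard_normal((d_audio, n_experts))
experts = [rng.standard_normal((d_audio, d_llm)) for _ in range(n_experts)]
y = sparse_moe_adapter(h, w_gate, experts)
print(y.shape)  # (5, 32): audio frames now live in the LLM embedding dimension
```

Only `top_k` of the `n_experts` projections run per frame, which is what "sparsely activated" means here: capacity grows with the expert count while per-frame compute stays roughly constant.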

## Getting Started

### Installation

```bash
git clone https://github.com/Alittleegg/Eureka-Audio.git
cd Eureka-Audio
pip install -r requirements.txt
```

## Quick Start

This example demonstrates basic usage for generating text from audio.

```python
"""
Eureka-Audio Local Inference Script

Usage:
    python infer_local.py --audio_path test_wav/0.wav --prompt "Describe the audio."
"""

import argparse

from eureka_infer.api import EurekaAudio


def main():
    parser = argparse.ArgumentParser(description="Eureka-Audio Local Inference")
    parser.add_argument("--model_path", type=str, default="Eureka-Audio-Instruct",
                        help="Path to the model checkpoint")
    parser.add_argument("--audio_path", type=str, required=True,
                        help="Path to the audio file")
    parser.add_argument("--prompt", type=str, default="Describe the audio.",
                        help="User prompt")
    parser.add_argument("--max_new_tokens", type=int, default=512,
                        help="Maximum number of new tokens to generate")
    parser.add_argument("--device", type=str, default="cuda:0",
                        help="Device to use (cuda:0/cpu)")
    args = parser.parse_args()

    print(f"Loading model from {args.model_path}...")
    model = EurekaAudio(model_path=args.model_path, device=args.device)

    # Build the chat messages: one user turn with an audio part and a text part
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "audio_url", "audio_url": {"url": args.audio_path}},
                {"type": "text", "text": args.prompt}
            ]
        }
    ]

    print(f"Processing audio: {args.audio_path}")
    print(f"Prompt: {args.prompt}")
    print("Generating response...")

    # Greedy decoding: sampling disabled for deterministic output
    response = model.generate(
        messages,
        max_new_tokens=args.max_new_tokens,
        temperature=0.0,
        top_p=0.0,
        top_k=0,
        do_sample=False,
    )

    print("\n" + "=" * 50)
    print(f"Response:\n{response}")
    print("=" * 50)


if __name__ == "__main__":
    main()
```

### Using HuggingFace Transformers

```python
from transformers import AutoModelForCausalLM
import torch

# Load the model; trust_remote_code is required for the custom architecture
model = AutoModelForCausalLM.from_pretrained(
    "cslys1999/Eureka-Audio-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
```

## Evaluation

Eureka-Audio achieves competitive performance across a wide range of audio benchmarks despite having only 1.7B parameters.

<p align="center">
<img src="assets/teaser.png" width="100%"/>
</p>

### Automatic Speech Recognition (ASR)

<table>
<thead>
<tr>
<th>Datasets</th>
<th>Type</th>
<th>Model</th>
<th>Size</th>
<th>WER/CER ↓</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="11"><strong>LibriSpeech</strong><br>test-clean | test-other</td>
<td rowspan="5"><em>Omni</em></td>
<td>Qwen3-Omni-Instruct</td>
<td>30B-A3B</td>
<td>1.60 | 2.93</td>
</tr>
<tr>
<td>Ming-Lite-Omni-1.5</td>
<td>19B-A2.8B</td>
<td>1.90 | 3.54</td>
</tr>
<tr>
<td>MiniCPM-o</td>
<td>9B</td>
<td>2.01 | 4.87</td>
</tr>
<tr>
<td>Qwen2.5-Omni-7B</td>
<td>7B</td>
<td>1.53 | 3.19</td>
</tr>
<tr>
<td>Qwen2.5-Omni-3B</td>
<td>3B</td>
<td>1.68 | 3.90</td>
</tr>
<tr>
<td rowspan="4"><em>Audio</em></td>
<td>Step-Audio-2-mini</td>
<td>8B</td>
<td>1.41 | 2.76</td>
</tr>
<tr>
<td>Audio Flamingo 3</td>
<td>8B</td>
<td>1.39 | 2.96</td>
</tr>
<tr>
<td>Qwen2-Audio</td>
<td>7B</td>
<td>1.74 | 4.01</td>
</tr>
<tr>
<td>Kimi-Audio-7B-Instruct</td>
<td>7B</td>
<td>1.33 | 2.57</td>
</tr>
<tr>
<td rowspan="2"><em>Ours</em></td>
<td>Eureka-Audio-Base</td>
<td>1.7B</td>
<td>1.59 | 3.34</td>
</tr>
<tr>
<td><strong>Eureka-Audio-Instruct</strong></td>
<td><strong>1.7B</strong></td>
<td><strong>1.46 | 3.24</strong></td>
</tr>
<tr>
<td rowspan="11"><strong>Fleurs-en</strong></td>
<td rowspan="5"><em>Omni</em></td>
<td>Qwen3-Omni-Instruct</td>
<td>30B-A3B</td>
<td>5.04</td>
</tr>
<tr>
<td>Ming-Lite-Omni-1.5</td>
<td>19B-A2.8B</td>
<td>5.82</td>
</tr>
<tr>
<td>MiniCPM-o</td>
<td>9B</td>
<td>6.18</td>
</tr>
<tr>
<td>Qwen2.5-Omni-7B</td>
<td>7B</td>
<td>5.49</td>
</tr>
<tr>
<td>Qwen2.5-Omni-3B</td>
<td>3B</td>
<td>5.65</td>
</tr>
<tr>
<td rowspan="4"><em>Audio</em></td>
<td>Step-Audio-2-mini</td>
<td>8B</td>
<td>4.51</td>
</tr>
<tr>
<td>Audio Flamingo 3</td>
<td>8B</td>
<td>6.30</td>
</tr>
<tr>
<td>Qwen2-Audio</td>
<td>7B</td>
<td>6.92</td>
</tr>
<tr>
<td>Kimi-Audio-7B-Instruct</td>
<td>7B</td>
<td>6.11</td>
</tr>
<tr>
<td rowspan="2"><em>Ours</em></td>
<td>Eureka-Audio-Base</td>
<td>1.7B</td>
<td>5.73</td>
</tr>
<tr>
<td><strong>Eureka-Audio-Instruct</strong></td>
<td><strong>1.7B</strong></td>
<td><strong>5.39</strong></td>
</tr>
<tr>
<td rowspan="10"><strong>AISHELL-2</strong> ios</td>
<td rowspan="5"><em>Omni</em></td>
<td>Qwen3-Omni-Instruct</td>
<td>30B-A3B</td>
<td>2.63</td>
</tr>
<tr>
<td>Ming-Lite-Omni-1.5</td>
<td>19B-A2.8B</td>
<td>2.66</td>
</tr>
<tr>
<td>MiniCPM-o</td>
<td>9B</td>
<td>3.42</td>
</tr>
<tr>
<td>Qwen2.5-Omni-7B</td>
<td>7B</td>
<td>2.58</td>
</tr>
<tr>
<td>Qwen2.5-Omni-3B</td>
<td>3B</td>
<td>2.77</td>
</tr>
<tr>
<td rowspan="3"><em>Audio</em></td>
<td>Step-Audio-2-mini</td>
<td>8B</td>
<td>2.33</td>
</tr>
<tr>
<td>Qwen2-Audio</td>
<td>7B</td>
<td>3.08</td>
</tr>
<tr>
<td>Kimi-Audio-7B-Instruct</td>
<td>7B</td>
<td>2.80</td>
</tr>
<tr>
<td rowspan="2"><em>Ours</em></td>
<td>Eureka-Audio-Base</td>
<td>1.7B</td>
<td>3.17</td>
</tr>
<tr>
<td><strong>Eureka-Audio-Instruct</strong></td>
<td><strong>1.7B</strong></td>
<td><strong>3.10</strong></td>
</tr>
<tr>
<td rowspan="10"><strong>WenetSpeech</strong><br>test-meeting | test-net</td>
<td rowspan="5"><em>Omni</em></td>
<td>Qwen3-Omni-Instruct</td>
<td>30B-A3B</td>
<td>6.12 | 5.29</td>
</tr>
<tr>
<td>Ming-Lite-Omni-1.5</td>
<td>19B-A2.8B</td>
<td>5.96 | 6.26</td>
</tr>
<tr>
<td>MiniCPM-o</td>
<td>9B</td>
<td>15.53 | 7.68</td>
</tr>
<tr>
<td>Qwen2.5-Omni-7B</td>
<td>7B</td>
<td>8.43 | 7.10</td>
</tr>
<tr>
<td>Qwen2.5-Omni-3B</td>
<td>3B</td>
<td>8.53 | 7.14</td>
</tr>
<tr>
<td rowspan="3"><em>Audio</em></td>
<td>Step-Audio-2-mini</td>
<td>8B</td>
<td>5.43 | 5.50</td>
</tr>
<tr>
<td>Qwen2-Audio</td>
<td>7B</td>
<td>8.40 | 8.00</td>
</tr>
<tr>
<td>Kimi-Audio-7B-Instruct</td>
<td>7B</td>
<td>6.38 | 7.17</td>
</tr>
<tr>
<td rowspan="2"><em>Ours</em></td>
<td>Eureka-Audio-Base</td>
<td>1.7B</td>
<td>10.37 | 8.63</td>
</tr>
<tr>
<td><strong>Eureka-Audio-Instruct</strong></td>
<td><strong>1.7B</strong></td>
<td><strong>9.14 | 7.55</strong></td>
</tr>
</tbody>
</table>
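The WER/CER column is edit distance between hypothesis and reference transcripts, normalized by reference length (words for WER, characters for CER). A minimal reference implementation, for illustration only; actual evaluation toolkits may apply additional text normalization before scoring:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over word sequences via dynamic programming
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 words ≈ 0.167
```

CER is the same computation over `list(reference)` and `list(hypothesis)` instead of word lists, which is why it is the usual metric for Chinese benchmarks such as AISHELL-2 and WenetSpeech.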

### Audio Understanding

<table>
<thead>
<tr>
<th>Datasets</th>
<th>Type</th>
<th>Model</th>
<th>Size</th>
<th>Performance ↑</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="11"><strong>Knowledge</strong><br>MMSU | OpenBookQA</td>
<td rowspan="5"><em>Omni</em></td>
<td>Qwen3-Omni-Instruct</td>
<td>30B-A3B</td>
<td>77.00 | 92.31</td>
</tr>
<tr>
<td>Ming-Lite-Omni-1.5</td>
<td>19B-A2.8B</td>
<td>47.00 | 69.67</td>
</tr>
<tr>
<td>MiniCPM-o</td>
<td>9B</td>
<td>54.55 | 79.12</td>
</tr>
<tr>
<td>Qwen2.5-Omni-7B</td>
<td>7B</td>
<td>61.22 | 81.53</td>
</tr>
<tr>
<td>Qwen2.5-Omni-3B</td>
<td>3B</td>
<td>53.41 | 77.36</td>
</tr>
<tr>
<td rowspan="4"><em>Audio</em></td>
<td>Step-Audio-2-mini</td>
<td>8B</td>
<td>55.14 | 75.60</td>
</tr>
<tr>
<td>Audio Flamingo 3</td>
<td>8B</td>
<td>47.07 | 61.54</td>
</tr>
<tr>
<td>Qwen2-Audio</td>
<td>7B</td>
<td>35.75 | 49.67</td>
</tr>
<tr>
<td>Kimi-Audio-7B-Instruct</td>
<td>7B</td>
<td>61.26 | 84.18</td>
</tr>
<tr>
<td rowspan="2"><em>Ours</em></td>
<td>Eureka-Audio-Base</td>
<td>1.7B</td>
<td>38.03 | 52.53</td>
</tr>
<tr>
<td><strong>Eureka-Audio-Instruct</strong></td>
<td><strong>1.7B</strong></td>
<td><strong>55.63 | 69.23</strong></td>
</tr>
<tr>
<td rowspan="10"><strong>Safety</strong><br>AdvBench</td>
<td rowspan="5"><em>Omni</em></td>
<td>Qwen3-Omni-Instruct</td>
<td>30B-A3B</td>
<td>99.61</td>
</tr>
<tr>
<td>Ming-Lite-Omni-1.5</td>
<td>19B-A2.8B</td>
<td>99.23</td>
</tr>
<tr>
<td>MiniCPM-o</td>
<td>9B</td>
<td>95.76</td>
</tr>
<tr>
<td>Qwen2.5-Omni-7B</td>
<td>7B</td>
<td>96.92</td>
</tr>
<tr>
<td>Qwen2.5-Omni-3B</td>
<td>3B</td>
<td>89.80</td>
</tr>
<tr>
<td rowspan="4"><em>Audio</em></td>
<td>Step-Audio-2-mini</td>
<td>8B</td>
<td>93.08</td>
</tr>
<tr>
<td>Audio Flamingo 3</td>
<td>8B</td>
<td>98.26</td>
</tr>
<tr>
<td>Qwen2-Audio</td>
<td>7B</td>
<td>98.84</td>
</tr>
<tr>
<td>Kimi-Audio-7B-Instruct</td>
<td>7B</td>
<td>100.00</td>
</tr>
<tr>
<td><em>Ours</em></td>
<td><strong>Eureka-Audio-Instruct</strong></td>
<td><strong>1.7B</strong></td>
<td><strong>99.81</strong></td>
</tr>
<tr>
<td rowspan="10"><strong>Instruction</strong><br>IFEval</td>
<td rowspan="5"><em>Omni</em></td>
<td>Qwen3-Omni-Instruct</td>
<td>30B-A3B</td>
<td>81.17</td>
</tr>
<tr>
<td>Ming-Lite-Omni-1.5</td>
<td>19B-A2.8B</td>
<td>53.68</td>
</tr>
<tr>
<td>MiniCPM-o</td>
<td>9B</td>
<td>41.72</td>
</tr>
<tr>
<td>Qwen2.5-Omni-7B</td>
<td>7B</td>
<td>39.84</td>
</tr>
<tr>
<td>Qwen2.5-Omni-3B</td>
<td>3B</td>
<td>32.97</td>
</tr>
<tr>
<td rowspan="4"><em>Audio</em></td>
<td>Step-Audio-2-mini</td>
<td>8B</td>
<td>43.54</td>
</tr>
<tr>
<td>Audio Flamingo 3</td>
<td>8B</td>
<td>32.27</td>
</tr>
<tr>
<td>Qwen2-Audio</td>
<td>7B</td>
<td>26.24</td>
</tr>
<tr>
<td>Kimi-Audio-7B-Instruct</td>
<td>7B</td>
<td>47.91</td>
</tr>
<tr>
<td><em>Ours</em></td>
<td><strong>Eureka-Audio-Instruct</strong></td>
<td><strong>1.7B</strong></td>
<td><strong>53.21</strong></td>
</tr>
<tr>
<td rowspan="12"><strong>Paralinguistic</strong><br>MMAU | MMAR</td>
<td rowspan="5"><em>Omni</em></td>
<td>Qwen3-Omni-Instruct</td>
<td>30B-A3B</td>
<td>74.57 | 67.10</td>
</tr>
<tr>
<td>Ming-Lite-Omni-1.5</td>
<td>19B-A2.8B</td>
<td>63.52 | 45.40</td>
</tr>
<tr>
<td>MiniCPM-o</td>
<td>9B</td>
<td>64.92 | 47.90</td>
</tr>
<tr>
<td>Qwen2.5-Omni-7B</td>
<td>7B</td>
<td>66.23 | 49.60</td>
</tr>
<tr>
<td>Qwen2.5-Omni-3B</td>
<td>3B</td>
<td>62.91 | 43.40</td>
</tr>
<tr>
<td rowspan="4"><em>Audio</em></td>
<td>Step-Audio-2-mini</td>
<td>8B</td>
<td>71.96 | 61.57</td>
</tr>
<tr>
<td>Audio Flamingo 3</td>
<td>8B</td>
<td>74.77 | 61.00</td>
</tr>
<tr>
<td>Qwen2-Audio</td>
<td>7B</td>
<td>59.80 | 37.90</td>
</tr>
<tr>
<td>Kimi-Audio-7B-Instruct</td>
<td>7B</td>
<td>72.86 | 57.40</td>
</tr>
<tr>
<td rowspan="3"><em>Ours</em></td>
<td>Eureka-Audio-Base</td>
<td>1.7B</td>
<td>63.42 | 46.80</td>
</tr>
<tr>
<td>Eureka-Audio-Instruct w/o DataFlux</td>
<td>1.7B</td>
<td>66.93 | 50.70</td>
</tr>
<tr>
<td><strong>Eureka-Audio-Instruct</strong></td>
<td><strong>1.7B</strong></td>
<td><strong>74.67 | 56.20</strong></td>
</tr>
</tbody>
</table>

### Dense Audio Captioning

<table>
<thead>
<tr>
<th>Datasets</th>
<th>Model</th>
<th>Size</th>
<th>MMAU | MMAR ↑</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3"><strong>Dense Captioning</strong></td>
<td>Qwen3-Omni-Captioner</td>
<td>30B-A3B</td>
<td>56.68 | 46.40</td>
</tr>
<tr>
<td>Qwen3-Omni-Instruct</td>
<td>30B-A3B</td>
<td>48.24 | 36.90</td>
</tr>
<tr>
<td><strong>Eureka-Audio-Instruct (Ours)</strong></td>
<td><strong>1.7B</strong></td>
<td><strong>52.96 | 41.70</strong></td>
</tr>
</tbody>
</table>

## Audio Input Formats

The model supports multiple audio input formats:

- **Local file path**: `"path/to/audio.wav"`
- **HTTP URL**: `"https://example.com/audio.wav"`
- **Base64 data URI**: `"data:audio/wav;base64,<base64_string>"`

Supported audio formats: WAV, MP3, FLAC, OGG, etc. (via torchaudio)
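Building the base64 data URI from a local file takes only the standard library; a small sketch (the helper name is ours, not part of the released API):

```python
import base64

def to_data_uri(path: str, mime: str = "audio/wav") -> str:
    """Encode an audio file as a data: URI suitable for the audio_url field."""
    with open(path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{payload}"
```

The resulting string can be passed wherever a file path or HTTP URL is accepted, e.g. `{"type": "audio_url", "audio_url": {"url": to_data_uri("clip.wav")}}`, which is convenient when the audio lives in memory or behind an API rather than on disk.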

## License

The model is based on and modified from [Qwen3](https://github.com/QwenLM/Qwen3). Code derived from Qwen3 is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). Other parts of the code are licensed under the [MIT License](https://opensource.org/licenses/MIT).

## Acknowledgements

We would like to thank the following projects for their contributions:

* [Whisper](https://github.com/openai/whisper) - Audio encoder
* [Qwen3](https://github.com/QwenLM/Qwen3) - Language model backbone
* [Transformers](https://github.com/huggingface/transformers) - Model framework

Our thanks go out to all of the open-source projects that made this work possible.

## Citation

If you find Eureka-Audio useful in your research or applications, please cite our technical reports:

```bibtex
@misc{zhang2026eurekaaudiotriggeringaudiointelligence,
      title={Eureka-Audio: Triggering Audio Intelligence in Compact Language Models},
      author={Dan Zhang and Yishu Lei and Jing Hu and Shuwei He and Songhe Deng and Xianlong Luo and Danxiang Zhu and Shikun Feng and Rui Liu and Jingzhou He and Yu Sun and Hua Wu and Haifeng Wang},
      year={2026},
      eprint={2602.13954},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2602.13954},
}
```

```bibtex
@misc{lei2026moeadapterlargeaudio,
      title={MoE Adapter for Large Audio Language Models: Sparsity, Disentanglement, and Gradient-Conflict-Free},
      author={Yishu Lei and Shuwei He and Jing Hu and Dan Zhang and Xianlong Luo and Danxiang Zhu and Shikun Feng and Rui Liu and Jingzhou He and Yu Sun and Hua Wu and Haifeng Wang},
      year={2026},
      eprint={2601.02967},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2601.02967},
}
```

## Contact Us

For questions, issues, or collaboration inquiries, please feel free to open an issue on GitHub.