---
license: cc-by-sa-4.0
task_categories:
- text-retrieval
- automatic-speech-recognition
language:
- en
- zh
tags:
- spoken-query-retrieval
- information-retrieval
- audio-text-retrieval
- mteb
- c-mteb
- robustness
pretty_name: SQuTR
size_categories:
- 10K<n<100K
---

# SQuTR: A Robustness Benchmark for Spoken Query to Text Retrieval

[![GitHub](https://img.shields.io/badge/GitHub-Repository-blue)](https://github.com/ttoyekk1a/SQuTR-Spoken-Query-to-Text-Retrieval)
[![Paper](https://img.shields.io/badge/Paper-arXiv-red)](https://arxiv.org/abs/2602.12783)

**SQuTR** (Spoken Query-to-Text Retrieval) is a large-scale bilingual benchmark designed to evaluate the robustness of information retrieval systems under realistic acoustic perturbations. 

While speech interaction is becoming a primary interface for IR systems, performance often degrades significantly in noisy environments. SQuTR provides a standardized framework featuring **37,317** complex queries across **6 domains**, synthesized with **200 real speakers**, and evaluated under **4 graded noise levels**.

---

## 🌟 Key Features

* **Bilingual & Multi-Domain:** Includes 6 subsets from MTEB and C-MTEB covering Wikipedia, Finance, Medical, and Encyclopedia domains.
* **High-Fidelity Synthesis:** Generated using **CosyVoice-3** with diverse speaker profiles, totaling **190.4 hours** of audio.
* **Robustness Evaluation:** Explicitly models four acoustic conditions: **Clean, Low Noise (20dB), Medium Noise (10dB), and High Noise (0dB)**.
* **MTEB Compatibility:** Follows standard JSONL/BEIR formatting for seamless integration into modern retrieval pipelines.
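The graded noise levels above follow the standard signal-to-noise-ratio definition, SNR(dB) = 10·log10(P_signal / P_noise). As a minimal sketch of how noise is mixed at a target SNR (illustrative only, not the dataset's actual synthesis pipeline; see the paper for that):

```python
import math
import random

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so the mixture signal + noise hits the target SNR.

    SNR(dB) = 10 * log10(P_signal / P_noise), with P the mean sample power.
    """
    p_signal = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    # Noise power needed for the target SNR, and the amplitude scale to reach it.
    target_p_noise = p_signal / (10 ** (snr_db / 10))
    scale = math.sqrt(target_p_noise / p_noise)
    return [s + scale * n for s, n in zip(signal, noise)]

# Illustrative check with synthetic waveforms (1 s at 16 kHz of Gaussian samples).
rng = random.Random(0)
signal = [rng.gauss(0, 1) for _ in range(16000)]
noise = [rng.gauss(0, 1) for _ in range(16000)]
mixed = mix_at_snr(signal, noise, snr_db=10)
```

At 0 dB the noise power equals the signal power, which is why that condition is the hardest of the four.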

---

## 📂 Dataset Structure

The dataset is organized by language and subset. Each subset (e.g., `fiqa`) contains the original text documents and the synthesized audio queries under different SNR conditions.

```text
SQuTR/
└── source_data/
    ├── en/ (English Datasets: fiqa, hotpotqa, nq)
    │   └── [subset_name]/
    │       ├── audio_clean/              # Clean original audio files (.wav)
    │       ├── audio_noise_snr_0/        # Audio with 0dB Signal-to-Noise Ratio
    │       ├── audio_noise_snr_10/       # Audio with 10dB Signal-to-Noise Ratio
    │       ├── audio_noise_snr_20/       # Audio with 20dB Signal-to-Noise Ratio
    │       ├── qrels/                    # Query relevance judgments (TSV/JSONL)
    │       ├── corpus.jsonl              # Text corpus documents
    │       ├── queries.jsonl             # Original text queries
    │       ├── queries_with_audio_clean.jsonl         # Metadata mapping text to clean audio
    │       ├── queries_with_audio_noise_snr_0.jsonl   # Metadata for 0dB noise queries
    │       ├── queries_with_audio_noise_snr_10.jsonl  # Metadata for 10dB noise queries
    │       └── queries_with_audio_noise_snr_20.jsonl  # Metadata for 20dB noise queries
    └── zh/ (Chinese Datasets: DuRetrieval, MedicalRetrieval, T2Retrieval)
        └── [subset_name]/
            └── (Same structure as above)
```
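Because each subset follows JSONL/BEIR formatting, the text files can be loaded with a small reader. The sketch below uses the BEIR convention of one JSON object per line keyed by `_id`; the sample records are illustrative, and the exact fields of the `queries_with_audio_*.jsonl` metadata files should be taken from the files themselves.

```python
import json
import tempfile
from pathlib import Path

def load_jsonl(path):
    """Load a BEIR-style JSONL file: one JSON object per line, keyed by `_id`."""
    with open(path, encoding="utf-8") as f:
        return {rec["_id"]: rec for rec in map(json.loads, f)}

# Illustrative sample records written to a temp dir; real subsets may carry
# additional fields (e.g. titles in the corpus, audio paths in the metadata).
root = Path(tempfile.mkdtemp())
(root / "corpus.jsonl").write_text(
    '{"_id": "doc1", "title": "Example", "text": "A sample passage."}\n'
)
(root / "queries.jsonl").write_text(
    '{"_id": "q1", "text": "what is a sample passage"}\n'
)

corpus = load_jsonl(root / "corpus.jsonl")
queries = load_jsonl(root / "queries.jsonl")
```

Keying records by `_id` makes it straightforward to join queries with the qrels judgments and the per-SNR audio metadata for evaluation.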

---

## 💾 How to Use the Dataset

You can download the dataset directly from this Hugging Face repository. To use the evaluation scripts, please refer to our [GitHub Repository](https://github.com/ttoyekk1a/SQuTR-Spoken-Query-to-Text-Retrieval).