---
dataset_info:
  features:
  - name: file
    dtype: string
  - name: audio
    dtype: audio
  - name: attribute_label
    dtype: string
  - name: single_instruction
    dtype: string
  - name: single_answer
    dtype: string
  - name: multi_instruction
    dtype: string
  - name: multi_answer
    dtype: string
  splits:
  - name: test
    num_bytes: 93051050.0
    num_examples: 500
  download_size: 84272312
  dataset_size: 93051050.0
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---
# Dataset Card for SAKURA-GenderQA

This dataset contains the audio and the single-hop/multi-hop questions and answers of the gender track of the SAKURA benchmark, introduced in the Interspeech 2025 paper "[**SAKURA: On the Multi-hop Reasoning of Large Audio-Language Models Based on Speech and Audio Information**](https://arxiv.org/abs/2505.13237)".

The fields of the dataset are:
- file: The filenames of the audio files.
- audio: The audio recordings.
- attribute_label: The attribute labels (i.e., the gender of the speakers) of the audio files.
- single_instruction: The single-hop questions (instructions).
- single_answer: The answers to the single-hop questions.
- multi_instruction: The multi-hop questions (instructions).
- multi_answer: The answers to the multi-hop questions.

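As a quick illustration, the sketch below loads the test split with the 🤗 `datasets` library and inspects one example. The repository path `<org>/SAKURA-GenderQA` is a placeholder, not the actual Hub path; substitute the path of this dataset when running it.

```python
from datasets import load_dataset

# NOTE: "<org>/SAKURA-GenderQA" is a placeholder; replace it with the
# actual Hugging Face Hub path of this dataset.
ds = load_dataset("<org>/SAKURA-GenderQA", split="test")

print(ds)  # 500 examples with the fields listed above

example = ds[0]
print(example["file"])                # filename of the audio recording
print(example["attribute_label"])     # speaker gender label
print(example["single_instruction"], "->", example["single_answer"])
print(example["multi_instruction"], "->", example["multi_answer"])

# The "audio" field is decoded on access into a dict containing the
# waveform ("array"), its "sampling_rate", and the file "path".
audio = example["audio"]
print(audio["sampling_rate"], len(audio["array"]))
```
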
If you find this dataset helpful, please kindly consider citing our paper:
```
@article{sakura,
      title={SAKURA: On the Multi-hop Reasoning of Large Audio-Language Models Based on Speech and Audio Information},
      author={Yang, Chih-Kai and Ho, Neo and Piao, Yen-Ting and Lee, Hung-yi},
      journal={Interspeech 2025},
      year={2025}
}
```