---
license: apache-2.0
task_categories:
  - audio-classification
  - automatic-speech-recognition
  - question-answering
tags:
  - Audio
  - Large Audio Language Models
language:
  - en
metrics:
  - F1
  - IOU
  - accuracy
---

# 🚀🚀 TimeAudio: Bridging Temporal Gaps in Large Audio-Language Models

Recent Large Audio-Language Models (LALMs) exhibit impressive capabilities in understanding audio content for conversational QA tasks. However, these models struggle to accurately understand timestamps for temporal localization (e.g., Temporal Audio Grounding) and are restricted to short audio perception, leading to constrained capabilities on fine-grained tasks. We identify three key aspects that limit their temporal localization and long audio understanding: (i) timestamp representation, (ii) architecture, and (iii) data.

To address this, we introduce TimeAudio, a novel method that empowers LALMs to connect their understanding of audio content with precise temporal perception. Specifically, we incorporate unique temporal markers to improve time-sensitive reasoning and apply an absolute time-aware encoding that explicitly grounds the acoustic features with absolute time information. Moreover, to achieve end-to-end long audio understanding, we introduce a segment-level token merging module to substantially reduce audio token redundancy and enhance the efficiency of information extraction. Due to the lack of suitable datasets and evaluation metrics, we consolidate existing audio datasets into a new dataset focused on temporal tasks and establish a series of metrics to evaluate the fine-grained performance. Evaluations show strong performance across a variety of fine-grained tasks, such as dense captioning, temporal grounding, and timeline speech summarization, demonstrating TimeAudio's robust temporal localization and reasoning capabilities.
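For intuition, the absolute time-aware encoding can be thought of as adding an embedding of each frame's absolute timestamp (in seconds) to its acoustic features, so the model sees wall-clock time rather than only relative positions. Below is a minimal sketch of that idea in PyTorch; the sinusoidal form, shapes, and function name are illustrative assumptions, not the paper's exact design:

```python
import torch

def add_absolute_time_encoding(features: torch.Tensor,
                               frame_times_s: torch.Tensor) -> torch.Tensor:
    """Ground acoustic features with absolute time information (sketch).

    features:      (num_frames, dim) encoder outputs; assumes dim is even
    frame_times_s: (num_frames,) absolute timestamp of each frame in seconds
    """
    num_frames, dim = features.shape
    half = dim // 2
    # Sinusoidal encoding computed over absolute seconds rather than frame indices.
    freqs = torch.exp(-torch.arange(half) * (torch.log(torch.tensor(10000.0)) / half))
    angles = frame_times_s[:, None] * freqs[None, :]       # (num_frames, half)
    time_enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return features + time_enc
```

Encoding absolute seconds, rather than per-segment frame indices, keeps the representation consistent across segments of a long recording, which is what lets the model tie audio content to real timestamps.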

## Method

TimeAudio builds on the fundamental architecture of SALMONN. Specifically, TimeAudio consists of four components that process raw audio: a sliding audio encoder, a window Q-Former, a segment-level token merging module, and an LLM. The sliding audio encoder first divides long audio into shorter segments and combines the BEATs and Whisper encoders to extract features for each segment independently. Then, the window Q-Former projects these encoded audio tokens into the language space and applies a segment-level token merging mechanism based on attention scores to filter out unimportant acoustic information. Finally, the audio embeddings and the textual token embeddings of the user prompt are fed into the LLM to generate the response.
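To make the segment-level token merging concrete, here is a minimal sketch that keeps only the most-attended tokens in each segment and flattens the survivors back into one shorter audio sequence. The top-k policy, tensor shapes, and function name are assumptions for illustration, not the released implementation:

```python
import torch

def merge_segment_tokens(tokens: torch.Tensor,
                         attn_scores: torch.Tensor,
                         keep_ratio: float = 0.5) -> torch.Tensor:
    """Keep the most-attended audio tokens in each segment (sketch).

    tokens:      (num_segments, tokens_per_segment, dim) Q-Former outputs
    attn_scores: (num_segments, tokens_per_segment) importance per token,
                 e.g. averaged cross-attention weights from the Q-Former
    """
    num_seg, seg_len, dim = tokens.shape
    k = max(1, int(seg_len * keep_ratio))
    # Select the top-k tokens per segment by attention score ...
    idx = attn_scores.topk(k, dim=1).indices               # (num_seg, k)
    kept = torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, dim))
    # ... and flatten the segments back into one shortened audio sequence.
    return kept.reshape(num_seg * k, dim)
```

Dropping, say, half of the tokens in each segment halves the audio sequence length the LLM has to attend over, which is what makes end-to-end long-audio understanding tractable.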

## Comparison

Compared with traditional speech and audio processing tasks such as speech recognition and audio captioning, fine-grained tasks require both semantics and timestamps as output. The figure shows example failure cases of Qwen2-Audio and Qwen2-Audio-R1 on such tasks.

## How to run inference in the CLI

Set up the following dependencies and checkpoints:

  1. Our environment: the Python version is 3.10.16, and the other required packages can be installed with `pip install -r requirements.txt`.
  2. Download Whisper large-v2 to `whisper_path`.
  3. Download the fine-tuned BEATs_iter3+ (AS2M) (cpt2) checkpoint to `beats_path`.
  4. Download Vicuna 7B v1.5 to `vicuna_path`.
  5. Download salmonn-7b v0 to `ckpt_path`.
  6. Run `python3 cli_inference.py --ckpt_path xxx --whisper_path xxx --beats_path xxx --vicuna_path xxx` to start CLI inference. Make sure your GPU has more than 40 GB of memory. If your GPU does not have enough memory (e.g., only 24 GB), you can quantize the model with the `--low_resource` flag to reduce memory usage, and lower the LoRA scaling factor to maintain the model's emergent abilities, e.g., `--lora_alpha=28` (see the sketch after this list).
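As a rough intuition for why lowering `--lora_alpha` helps under quantization: LoRA adds a low-rank update scaled by `alpha / r` on top of the frozen weight, so a smaller `alpha` shrinks the adapter's contribution. A minimal sketch in plain PyTorch (the module and names are hypothetical, not this repo's code):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a low-rank update scaled by alpha / r."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 32.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False           # keep the pretrained weight frozen
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)    # adapter starts as a no-op
        self.scale = alpha / r                # lowering alpha shrinks the update

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))
```

With the rank `r` fixed by the checkpoint, passing `--lora_alpha=28` instead of a larger default simply reduces the `alpha / r` scale, trading a little adapter strength for more stable behavior from the quantized base model.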

## Launch a QA demo

  1. Complete steps 1-5 of "How to run inference in the CLI".
  2. Run `python3 web_demo.py --ckpt_path xxx --whisper_path xxx --beats_path xxx --vicuna_path xxx` on your machine. You can add the `--low_resource` flag if GPU memory is not enough, and lower the LoRA scaling factor to maintain the model's emergent abilities.

## Citation

If you find TimeAudio useful, please cite our paper:

```bibtex
@article{wang2025timeaudio,
  title={TimeAudio: Bridging Temporal Gaps in Large Audio-Language Models},
  author={Hualei Wang and Yiming Li and Shuo Ma and Hong Liu and Xiangdong Wang},
  journal={arXiv preprint arXiv:},
  year={2025},
  url={https://arxiv.org/abs/}
}
```