Update README.md
README.md CHANGED
@@ -1,199 +1,138 @@
(Removed: the unfilled default Hugging Face model card template, from its YAML front matter through "## Model Card Contact", with every field reading "[More Information Needed]".)
---
tags:
- audio
- audio-language
- multimodal
- reasoning
- auditory-semantics
- supervised-fine-tuning
- sft
- qwen
- audsem
language: en
license: apache-2.0
datasets:
- GLJS/AudSem
---

# AudSemThinker

## Model Description

`AudSemThinker` is an audio-language model that grounds its reasoning in a structured framework of auditory semantics, inspired by human cognition. It processes audio by explicitly analyzing functional components such as sound-generating agents (who), physical sound sources (what), generation mechanisms (how), and contextual cues (when/where).

This model is built upon the `Qwen2.5-Omni-7B` multimodal foundation model and is fine-tuned on the novel `AudSem` dataset using Supervised Fine-Tuning (SFT). `AudSemThinker` produces responses in a three-phase structure: a detailed `<think>` reasoning trace, a listing of `<semantic_elements>`, and a concise `<answer>`.
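For illustration, a response to a clip of rain with distant thunder might look like the following (hypothetical content; the exact wording will vary):

```
<think>A steady broadband hiss dominates the clip, consistent with rainfall striking a hard surface; a low rumble rises twice, suggesting distant thunder rather than traffic...</think>
<semantic_elements>Who: none (environmental scene); What: rain, thunder; How: droplets impacting a surface, atmospheric discharge; When/Where: outdoors during a storm</semantic_elements>
<answer>Steady rain falls on a hard surface while thunder rumbles in the distance.</answer>
```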

## How to Use
To use `AudSemThinker` for audio understanding and captioning tasks, load it with the `transformers` library. Make sure `torch`, `torchaudio`, and `soundfile` are installed.

```python
# pip install transformers torch torchaudio soundfile
import torch
import torchaudio
from transformers import Qwen2_5OmniProcessor, Qwen2_5OmniThinkerForConditionalGeneration

# Load processor and model
processor = Qwen2_5OmniProcessor.from_pretrained("GLJS/audsemthinker", trust_remote_code=True)
model = Qwen2_5OmniThinkerForConditionalGeneration.from_pretrained(
    "GLJS/audsemthinker",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
    low_cpu_mem_usage=True,
)

# Example audio file (replace with your audio path)
audio_file = "path/to/your/audio.wav"

# Load the audio, resample it to the rate the feature extractor expects,
# and downmix to mono
audio_input, sampling_rate = torchaudio.load(audio_file)
target_rate = processor.feature_extractor.sampling_rate
if sampling_rate != target_rate:
    audio_input = torchaudio.transforms.Resample(
        orig_freq=sampling_rate, new_freq=target_rate
    )(audio_input)
if audio_input.shape[0] > 1:  # stereo -> mono
    audio_input = audio_input.mean(dim=0, keepdim=True)
audio_input = audio_input.squeeze().numpy()

# User prompt for the task
user_prompt_text = (
    "You are given an audio clip. Your task is to describe the audio in detail. "
    "First, think about the audio clip and put your thoughts in <think> and </think> tags. "
    "Then reason about the semantic elements involved in the audio clip and put your "
    "reasoning in <semantic_elements> and </semantic_elements> tags. Then describe the "
    "audio clip, put your answer in <answer> and </answer> tags."
)

# Construct messages in conversation format, matching the training setup
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}],
    },
    {
        "role": "user",
        "content": [
            {"type": "audio", "audio": audio_input},
            {"type": "text", "text": user_prompt_text},
        ],
    },
]

# Apply the chat template; for inference, add_generation_prompt must be True
text_from_chat_template = processor.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Prepare inputs for the model
inputs = processor(
    text=text_from_chat_template,
    audio=[audio_input],  # pass audio as a list of numpy arrays
    return_tensors="pt",
).to(model.device)

# Generate response
output_ids = model.generate(**inputs, max_new_tokens=512)
response = processor.batch_decode(output_ids, skip_special_tokens=True)[0]

print(response)
# Expected output format:
# <think>...detailed reasoning about the audio scene...</think>
# <semantic_elements>...identified semantic descriptors (who, what, how, when/where)...</semantic_elements>
# <answer>...concise audio caption...</answer>
```
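Because the three phases are delimited by fixed tags, they can be separated with a small regex helper. A minimal sketch, reusing `response` from the example above and assuming the model emits all three tags as prompted:

```python
import re

def extract_phase(response: str, tag: str) -> str:
    """Return the text between <tag> and </tag>, or "" if the tag is absent."""
    match = re.search(rf"<{tag}>(.*?)</{tag}>", response, re.DOTALL)
    return match.group(1).strip() if match else ""

thinking = extract_phase(response, "think")
semantics = extract_phase(response, "semantic_elements")
caption = extract_phase(response, "answer")
print(caption)
```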

## Training Data
`AudSemThinker` is fine-tuned on the full **AudSem** dataset, a novel, high-quality audio-language dataset of approximately 797k examples.

**AudSem Dataset Characteristics:**
* **Source:** Synthetically curated from YouTube closed captions, designed to minimize overlap with existing datasets such as AudioSet and WavCaps.
* **Generation Pipeline:** A robust multi-stage pipeline that integrates audio, video, and YouTube closed-caption data, employing an ensemble of specialized AI models for comprehensive multimodal analysis (Qwen2Audio-7B, BEATs, AST, CoNeTTE, LP-MusicCaps, BLIP, CLIP, RT-DETR, Places365, LLaVA-Video-7B).
* **Quality Control:** Rigorous filtering, such as requiring a cosine similarity above 0.5 between generated audio captions and the original YouTube closed captions (see the sketch after this list).
* **Diversity:** Covers a diverse range of task types: open-ended audio captioning, multiple-choice question answering, open-ended question answering, and creative writing based on audio.
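To make that caption filter concrete, here is a minimal sketch using sentence embeddings; the encoder (`all-MiniLM-L6-v2`) and the exact similarity computation are illustrative assumptions, not details confirmed by this card:

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder, for illustration

def passes_similarity_filter(generated: str, closed_caption: str, threshold: float = 0.5) -> bool:
    """Keep a sample only if the generated caption agrees with the closed caption."""
    emb = encoder.encode([generated, closed_caption], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item() > threshold

print(passes_similarity_filter("rain falls on a tin roof", "sound of rain hitting the roof"))
```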
## Training Procedure
* **Base Model:** Qwen2.5-Omni-7B.
* **Fine-tuning Paradigm:** Supervised Fine-Tuning (SFT).
* **Parameter-Efficient Fine-tuning:** LoRA (Low-Rank Adaptation) applied to the projection layers (see the sketch after this list).
* **Optimizer:** AdamW.
* **Learning Rate:** 2e-4.
* **Epochs:** 1.
* **Precision:** bf16.
* **Batch Size:** 4.
* **Hardware:** A single H100 GPU.
* **Training Time:** Approximately 12 hours for the full dataset.
* **Output Format:** Trained to generate structured XML-like output with `<think>`, `<semantic_elements>`, and `<answer>` tags; the loss is computed only on the completion (the assistant's response).
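In `peft` terms, the setup above corresponds roughly to the following sketch. The LoRA rank, alpha, and target module names are assumptions for illustration; this card states only that LoRA was applied to projection layers:

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import Qwen2_5OmniThinkerForConditionalGeneration, TrainingArguments

base = Qwen2_5OmniThinkerForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
lora_config = LoraConfig(
    r=16,           # assumed rank (not stated in this card)
    lora_alpha=32,  # assumed scaling (not stated in this card)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed projection targets
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable

# Hyperparameters as listed above; completion-only loss is enforced by
# setting the prompt tokens' labels to -100 during preprocessing.
args = TrainingArguments(
    output_dir="audsemthinker-sft",
    num_train_epochs=1,
    per_device_train_batch_size=4,
    learning_rate=2e-4,
    bf16=True,
)
```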

## Evaluation Results
`AudSemThinker` demonstrates state-of-the-art performance across multiple audio-understanding benchmarks, underscoring its strength in semantic audio reasoning. It is particularly strong on music-related tasks.

## Limitations and Bias
* **Data Contamination:** Although `AudSem` is designed to minimize overlap with existing benchmarks, the underlying `Qwen2.5-Omni` pretrained model may have encountered test-set data during its pretraining.
* **Generalization:** Supervised fine-tuning on `AudSem` may not always outperform models trained specifically for niche benchmarks.

## Ethical Considerations
* **Data Sourcing:** The `AudSem` dataset is sourced primarily from YouTube closed captions. Although systematic checks for harmful content (e.g., child abuse, hate speech, sexual content, harassment) were performed, and YouTube's community guidelines provide a safeguard, biases or problematic content from the original videos could still be present.
* **Societal Impact:** `AudSemThinker` can contribute positively by enhancing audio-language understanding. Potential applications include improved transcription and captioning for people who are deaf or hard of hearing, monitoring of environmental sounds (e.g., avian populations), and automated closed-caption generation for multimedia content.

## Citation
```bibtex
@misc{wijngaard2025audsemthinkerenhancingaudiolanguagemodels,
      title={AudSemThinker: Enhancing Audio-Language Models through Reasoning over Semantics of Sound},
      author={Gijs Wijngaard and Elia Formisano and Michele Esposito and Michel Dumontier},
      year={2025},
      eprint={2505.14142},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2505.14142},
}
```