---
license: apache-2.0
language:
- zh
- en
library_name: transformers
tags:
- qwen2.5
- audio
- open-source
- thinker
pipeline_tag: text-generation
model_type: qwen2_5_omni
base_model: Qwen/Qwen2.5-Omni-7B
---

# AudioOnlyThinker

This model is a lightweight variant of [Qwen2.5-Omni-7B](https://huggingface.co/Qwen/Qwen2.5-Omni-7B), customized to **remove the vision encoder** and support only **audio and text**.

It is intended for audio-to-text instruction following, voice chat, and ASR-style tasks, and supports generation through `generate()` like any decoder-only model.

## How this model was built

We extracted only the `Thinker` component from the full Qwen2.5-Omni model:

- ✅ Kept: Audio encoder (`audio_tower`) + Language model (`model`)
- ❌ Removed: Vision encoder (`visual`) + Talker (speech decoder)
- ✅ Manually deleted `vision_config` from `config.json`
- ✅ Subclassed `Qwen2_5OmniThinkerForConditionalGeneration` to drop the vision pathway (a sketch of the extraction follows this list)
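
For reference, the extraction can be reproduced in a few lines. The sketch below is illustrative rather than the exact export script: it assumes a `transformers` version that ships `Qwen2_5OmniForConditionalGeneration` (which exposes the Thinker and Talker as submodules), and the output directory name is hypothetical.

```python
# Illustrative sketch: load the full Omni model, keep only the Thinker,
# and strip its vision tower before saving.
from transformers import Qwen2_5OmniForConditionalGeneration

full = Qwen2_5OmniForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-Omni-7B")

thinker = full.thinker  # audio encoder + language model (the Talker is full.talker)
thinker.visual = None   # discard the vision encoder
if hasattr(thinker.config, "vision_config"):
    del thinker.config.vision_config

thinker.save_pretrained("AudioOnlyThinker")  # hypothetical local output path
```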

## Usage: the `AudioOnlyThinker` class

This model uses a custom subclass, `AudioOnlyThinker`, which disables the vision encoder. You must define this class before calling `.from_pretrained()`:
```python
from transformers import Qwen2_5OmniThinkerForConditionalGeneration

from audio_only_processor import AudioOnlyProcessor  # custom processor shipped with this repo


class AudioOnlyThinker(Qwen2_5OmniThinkerForConditionalGeneration):
    def __init__(self, config):
        super().__init__(config)
        # Drop the vision tower entirely; this model handles audio and text only.
        self.visual = None
        if hasattr(self.config, "vision_config"):
            del self.config.vision_config

    def forward(self, *args, pixel_values=None, pixel_values_videos=None, **kwargs):
        # Force visual inputs to None so images/videos can never reach the backbone.
        return super().forward(*args, pixel_values=None, pixel_values_videos=None, **kwargs)


model = AudioOnlyThinker.from_pretrained("chunhuizng/AudioOnlyThinker")
processor = AudioOnlyProcessor.from_pretrained("chunhuizng/AudioOnlyThinker")

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "audio", "path": "your_audio.wav"},
            {"type": "text", "text": "What is being said in this audio?"},
        ],
    }
]

inputs = processor.apply_chat_template(conversation, tokenize=True, return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(**inputs, max_new_tokens=128)

response = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print(response)
```
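
The `audio_only_processor` module is expected to ship alongside this repo. If you need to recreate it, a plausible minimal sketch (hypothetical; it assumes the processor is a thin wrapper over `Qwen2_5OmniProcessor` that ignores visual inputs) looks like:

```python
# Hypothetical sketch of audio_only_processor.py; the module shipped with the
# repo may differ. It wraps Qwen2_5OmniProcessor and drops image/video inputs.
from transformers import Qwen2_5OmniProcessor


class AudioOnlyProcessor(Qwen2_5OmniProcessor):
    def __call__(self, text=None, audio=None, images=None, videos=None, **kwargs):
        # Silently discard visual inputs; delegate audio/text to the parent.
        return super().__call__(text=text, audio=audio, **kwargs)
```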