---
language: en
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
tags:
- multimodal
- audio
- healthcare
- respiratory
- question-answering
- mixture-of-experts
- lora
---

# RAMoEA-QA (Checkpoint)

RAMoEA-QA is a **hierarchical generative** model for **Respiratory Audio Question Answering (RA-QA)**. It supports multiple question formats (open-ended, single-verify, multiple-choice) and both **discrete labels** (e.g., diagnosis/verification) and **continuous targets** (regression) within a single system.

**Architecture (two-stage conditional specialization):**
- **Audio Mixture-of-Experts (Audio-MoE):** routes each *(audio, question)* example to **one** pre-trained audio encoder expert.
- **Language Mixture-of-Adapters (MoA):** selects **one** LoRA adapter on a shared **frozen** LLM backbone (GPT-2) to match query intent and answer format.
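
A minimal sketch of this two-stage selection, assuming simple linear gating networks with hard top-1 routing; the gate design, expert counts, and dimensions are illustrative assumptions, not the released implementation:

```python
# Illustrative sketch of the two-stage conditional specialization described
# above. Gating networks, expert counts, and dimensions are assumptions made
# for clarity; see the accompanying codebase for the actual routing.
import torch
import torch.nn as nn

class TwoStageRouter(nn.Module):
    def __init__(self, query_dim: int, num_audio_experts: int, num_adapters: int):
        super().__init__()
        # Stage 1: pick ONE pre-trained audio encoder expert per (audio, question).
        self.audio_gate = nn.Linear(query_dim, num_audio_experts)
        # Stage 2: pick ONE LoRA adapter on the shared frozen GPT-2 backbone.
        self.adapter_gate = nn.Linear(query_dim, num_adapters)

    def forward(self, query_emb: torch.Tensor):
        # Hard top-1 routing: each example is sent to a single expert/adapter.
        expert_idx = self.audio_gate(query_emb).argmax(dim=-1)
        adapter_idx = self.adapter_gate(query_emb).argmax(dim=-1)
        return expert_idx, adapter_idx

router = TwoStageRouter(query_dim=768, num_audio_experts=4, num_adapters=3)
q = torch.randn(2, 768)                # hypothetical question embeddings
expert_idx, adapter_idx = router(q)    # one expert and one adapter per example
```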
> **Selected audio prefix** = aligned audio embeddings concatenated into the LLM input (soft prefix).
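
The soft prefix can be pictured as below; the alignment projection, frame counts, and dimensions are assumptions for illustration:

```python
# Sketch of the "selected audio prefix": aligned audio embeddings are
# prepended to the question token embeddings before the frozen LLM.
# Shapes and the alignment projection are illustrative assumptions.
import torch
import torch.nn as nn

llm_dim = 768                            # GPT-2 hidden size
align = nn.Linear(512, llm_dim)          # hypothetical audio->LLM projection

audio_emb = torch.randn(2, 16, 512)      # (batch, audio frames, encoder dim)
token_emb = torch.randn(2, 32, llm_dim)  # (batch, question tokens, LLM dim)

audio_prefix = align(audio_emb)          # align audio features to LLM space
inputs_embeds = torch.cat([audio_prefix, token_emb], dim=1)  # soft prefix + text
# inputs_embeds would then be fed to the LLM via `inputs_embeds=` rather than
# token ids.
```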
## Intended use
Research and decision-support experiments on RA-QA benchmarks. **Not** a medical device.
## Usage
This checkpoint is meant to be used with the accompanying codebase (audio encoder factory + routing + alignment):
- **GitHub:** [https://github.com/gab62-cam/RAMoEA-QA/tree/main](https://github.com/gab62-cam/RAMoEA-QA/tree/main)
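
As a starting point only, the frozen GPT-2 backbone named in this card can be loaded with `transformers` as below; attaching this checkpoint's LoRA adapters and audio experts is handled by the codebase above, and the adapter path in the comment is a hypothetical placeholder:

```python
# Hedged sketch: this loads only the frozen GPT-2 backbone, not the
# checkpoint's routing, alignment, or adapters, which the accompanying
# codebase attaches.
from transformers import AutoTokenizer, AutoModelForCausalLM

backbone = "gpt2"  # frozen LLM backbone named in this model card
tokenizer = AutoTokenizer.from_pretrained(backbone)
model = AutoModelForCausalLM.from_pretrained(backbone)
for p in model.parameters():
    p.requires_grad = False  # the backbone stays frozen

# With peft, attaching a selected LoRA adapter would look roughly like:
# from peft import PeftModel
# model = PeftModel.from_pretrained(model, "path/to/selected_adapter")  # hypothetical path
```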