---
language: en
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
tags:
  - multimodal
  - audio
  - healthcare
  - respiratory
  - question-answering
  - mixture-of-experts
  - lora
---

# RAMoEA-QA (Checkpoint)

RAMoEA-QA is a hierarchical generative model for Respiratory Audio Question Answering (RA-QA). It supports multiple question formats (open-ended, single-verify, multiple-choice) and both discrete labels (e.g., diagnosis/verification) and continuous targets (regression) within a single system.

## Architecture (two-stage conditional specialization)

- **Audio Mixture-of-Experts (Audio-MoE):** routes each (audio, question) example to one pre-trained audio encoder expert.
- **Language Mixture-of-Adapters (MoA):** selects one LoRA adapter on a shared frozen LLM backbone (GPT-2) to match the query intent and answer format.
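The two-stage selection above can be illustrated with a minimal hard-routing sketch. Everything here is hypothetical: the expert/adapter counts, feature dimension, and the top-1 gating rule are stand-ins, since the card does not specify the actual gating network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the real checkpoint's dimensions are not given in this card.
NUM_AUDIO_EXPERTS = 4   # pre-trained audio encoder experts (Audio-MoE)
NUM_ADAPTERS = 3        # LoRA adapters on the frozen GPT-2 backbone (MoA)
FEATURE_DIM = 16        # joint (audio, question) representation size

def route(features: np.ndarray, gate_weights: np.ndarray) -> int:
    """Hard (top-1) routing: pick the expert with the highest gate logit."""
    logits = gate_weights @ features
    return int(np.argmax(logits))

# Toy example: one joint (audio, question) feature vector and two gating matrices.
x = rng.normal(size=FEATURE_DIM)
audio_gate = rng.normal(size=(NUM_AUDIO_EXPERTS, FEATURE_DIM))
adapter_gate = rng.normal(size=(NUM_ADAPTERS, FEATURE_DIM))

expert_idx = route(x, audio_gate)      # stage 1: which audio encoder runs
adapter_idx = route(x, adapter_gate)   # stage 2: which LoRA adapter decodes
```

In this sketch both stages share one feature vector for simplicity; in a real two-stage system the second gate would typically condition on the first stage's output as well.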

## Intended use

Intended for research and decision-support experiments on respiratory-audio question answering. It is not a medical device and must not be used for clinical diagnosis.

## Usage

This checkpoint is meant to be used with the accompanying codebase (audio encoder factory + routing + alignment):
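Absent the accompanying codebase, a rough sketch of the text-generation side is shown below. The repository id, prompt layout, and `trust_remote_code` loading path are all assumptions, not documented interfaces; audio encoding and routing happen in the external codebase and are not reproduced here.

```python
def build_prompt(question, choices=None):
    """Format an RA-QA query; multiple-choice options are listed A., B., ...
    (hypothetical prompt layout -- the card does not specify one)."""
    if choices:
        opts = "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(choices))
        return f"Question: {question}\n{opts}\nAnswer:"
    return f"Question: {question}\nAnswer:"

def answer(question, choices=None, max_new_tokens=32):
    # Imported lazily so the prompt helper works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    checkpoint = "gab62-cam/RAMoEA-QA"  # hypothetical repo id
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

    inputs = tokenizer(build_prompt(question, choices), return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

For real use, follow the accompanying codebase's entry points (audio encoder factory, routing, alignment) rather than calling the LLM backbone directly.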