---
license: apache-2.0
pipeline_tag: any-to-any
library_name: transformers
---

# MedPLIB: Towards a Multimodal Large Language Model with Pixel-Level Insight for Biomedicine

This repository contains the official implementation of the paper [Towards a Multimodal Large Language Model with Pixel-Level Insight for Biomedicine](https://arxiv.org/abs/2412.09278).

[GitHub Code](https://github.com/ShawnHuang497/MedPLIB)

## Abstract

In recent years, Multimodal Large Language Models (MLLMs) have achieved notable advancements, demonstrating the feasibility of developing an intelligent biomedical assistant. However, current biomedical MLLMs predominantly focus on image-level understanding and restrict interactions to textual commands, thus limiting their capability boundaries and the flexibility of usage. In this paper, we introduce a novel end-to-end multimodal large language model for the biomedical domain, named MedPLIB, which possesses pixel-level understanding. Excitingly, it supports visual question answering (VQA), arbitrary pixel-level prompts (points, bounding boxes, and free-form shapes), and pixel-level grounding. We propose a novel Mixture-of-Experts (MoE) multi-stage training strategy, which divides MoE into separate training phases for a visual-language expert model and a pixel-grounding expert model, followed by fine-tuning using MoE. This strategy effectively coordinates multitask learning while maintaining the computational cost at inference time equal to that of a single expert model. To advance the research of biomedical MLLMs, we introduce the Medical Complex Vision Question Answering Dataset (MeCoVQA), which comprises an array of 8 modalities for complex medical imaging question answering and image region understanding. Experimental results demonstrate that MedPLIB achieves state-of-the-art outcomes across multiple medical visual language tasks. Notably, in zero-shot evaluations for the pixel grounding task, MedPLIB leads the best small and large models by margins of 19.7 and 15.6 respectively on the mDice metric.
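The MoE design is what keeps inference cost close to that of a single expert: if the router hard-selects one expert per token (top-1 routing), only one expert's parameters are exercised at inference time. The snippet below is a minimal, generic top-1 MoE sketch to illustrate that idea; it is not MedPLIB's actual architecture or code, and the layer sizes are arbitrary.

```python
# Generic top-1 Mixture-of-Experts sketch (illustrative only, not MedPLIB's code).
# With hard top-1 routing, each token runs through exactly one expert, so the
# per-token compute stays close to that of a single expert model.
import torch
import torch.nn as nn

class TopOneMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, dim)
        scores = self.router(x).softmax(dim=-1)   # (num_tokens, num_experts)
        weight, idx = scores.max(dim=-1)          # hard top-1 routing decision
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e                       # tokens routed to expert e
            if mask.any():
                out[mask] = weight[mask, None] * expert(x[mask])
        return out

moe = TopOneMoE(dim=64)
tokens = torch.randn(8, 64)
print(moe(tokens).shape)  # torch.Size([8, 64])
```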

## Highlights

MedPLIB shows excellent performance in pixel-level understanding in the biomedical field.

- ✨ MedPLIB is a biomedical MLLM with a broad range of abilities and support for multiple imaging modalities. Not only can it perform image-level visual-language tasks such as VQA, but it also supports question answering at the pixel level.

- ✨ We construct the MeCoVQA dataset, which comprises an array of 8 modalities with a total of 310k pairs for complex medical imaging question answering and image region understanding (an illustrative record sketch follows below).
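To make the kind of data in MeCoVQA concrete, here is a purely hypothetical sketch of what a single record and a loading helper might look like. Every field name below is an assumption made for illustration, not the dataset's real schema; consult the GitHub repository for the actual format.

```python
import json

# Hypothetical MeCoVQA-style record; all field names are illustrative
# assumptions, not the dataset's real schema.
example_record = {
    "image": "images/ct_000123.png",       # one of the 8 imaging modalities
    "modality": "CT",
    "question": "Is there an abnormality in the highlighted region?",
    "answer": "Yes, a hypodense lesion is visible.",
    "region_mask": "masks/ct_000123.png",  # pixel-level region for region VQA / grounding
}

def load_pairs(path: str) -> list[dict]:
    """Load question-answer records from a JSON file (illustrative helper)."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)
```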

## Installation

For detailed instructions, please refer to the GitHub repository.

1. Clone this repository and navigate to the MedPLIB folder:

   ```bash
   git clone https://github.com/ShawnHuang497/MedPLIB.git
   cd MedPLIB
   ```

2. Install the package:

   ```bash
   conda create -n medplib python=3.10 -y
   conda activate medplib
   pip install --upgrade pip
   pip install -r requirements.txt
   ```

3. Install additional packages for training:

   ```bash
   pip install ninja==1.11.1.1
   pip install flash-attn==2.5.2 --no-build-isolation
   ```
    
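After installation, a quick, unofficial sanity check such as the one below can confirm that PyTorch sees a GPU and that flash-attn (only needed for training) imports cleanly:

```python
# Unofficial environment sanity check; run inside the medplib conda env.
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

try:
    import flash_attn  # required only for the training setup above
    print("flash-attn:", flash_attn.__version__)
except ImportError:
    print("flash-attn not installed (only needed for training)")
```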

## Sample Usage: Gradio Web UI

We recommend trying our web demo, which includes all the features currently supported by MedPLIB. To run the demo, you first need to download or train MedPLIB so that the checkpoints are available locally. Then run the following commands one by one.

```bash
# launch the server controller
python -m model.serve.controller --host 0.0.0.0 --port 64000
# launch the web server
python -m model.serve.gradio_web_server --controller http://localhost:64000 --model-list-mode reload --add_region_feature --port 64001
# launch the model worker
CUDA_VISIBLE_DEVICES=0 python -m model.serve.model_worker --host localhost --controller http://localhost:64000 --port 64002 --worker http://localhost:64002 --model-path /path/to/the/medplib_checkpoints --add_region_feature --device_map cuda --vision_pretrained /path/to/the/sam-med2d_b.pth
```
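Because all three processes must stay up at once, a small launcher can be convenient. The script below is an unofficial convenience wrapper around the exact commands above; the two checkpoint paths are placeholders you must edit, and it should be run from the MedPLIB repository root. Once everything is running, the Gradio UI should be reachable at http://localhost:64001.

```python
# Unofficial convenience launcher for the MedPLIB Gradio demo.
# It simply runs the three README commands as background processes.
import os
import subprocess
import time

MODEL_PATH = "/path/to/the/medplib_checkpoints"  # placeholder, edit me
SAM_PATH = "/path/to/the/sam-med2d_b.pth"        # placeholder, edit me

procs = []
procs.append(subprocess.Popen(
    ["python", "-m", "model.serve.controller",
     "--host", "0.0.0.0", "--port", "64000"]))
time.sleep(5)  # give the controller time to start

procs.append(subprocess.Popen(
    ["python", "-m", "model.serve.gradio_web_server",
     "--controller", "http://localhost:64000",
     "--model-list-mode", "reload", "--add_region_feature",
     "--port", "64001"]))

worker_env = {**os.environ, "CUDA_VISIBLE_DEVICES": "0"}
procs.append(subprocess.Popen(
    ["python", "-m", "model.serve.model_worker",
     "--host", "localhost",
     "--controller", "http://localhost:64000",
     "--port", "64002", "--worker", "http://localhost:64002",
     "--model-path", MODEL_PATH, "--add_region_feature",
     "--device_map", "cuda", "--vision_pretrained", SAM_PATH],
    env=worker_env))

try:
    for p in procs:
        p.wait()
except KeyboardInterrupt:
    for p in procs:
        p.terminate()
```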
The web UI supports three interaction modes: pixel grounding, region VQA, and image-level VQA.

## Acknowledgement

We thank the following projects for providing inspiration and parts of the code: LISA, MoE-LLaVA, LLaVA, SAM-Med2D, SAM, and SEEM.

## Model Use

### Intended Use

The data, code, and model checkpoints are intended to be used solely for (i) future research on visual-language processing and (ii) reproducibility of the experimental results reported in the reference paper. They are not intended to be used in clinical care or for any clinical decision-making purposes.

### Primary Intended Use

The primary intended use is to support AI researchers reproducing and building on top of this work. MedPLIB and its associated models should be helpful for exploring various biomedical pixel grounding and visual question answering (VQA) research questions.

### Out-of-Scope Use

Any deployed use of the model, commercial or otherwise, is out of scope. Although we evaluated the models using a broad set of publicly available research benchmarks, the models and evaluations are intended for research use only and are not intended for deployed use cases.

## Citation

If you find our paper and code useful in your research, please consider starring the repository and citing the paper:

```bibtex
@article{huang2024towards,
  title={Towards a Multimodal Large Language Model with Pixel-Level Insight for Biomedicine},
  author={Huang, Xiaoshuang and Shen, Lingdong and Liu, Jia and Shang, Fangxin and Li, Hongxiang and Huang, Haifeng and Yang, Yehui},
  journal={arXiv preprint arXiv:2412.09278},
  year={2024}
}
```