---
license: cc-by-nc-nd-4.0
datasets:
- himanshunitrr/LaViDa-PathGen-Instruct-and-VQA
tags:
- medical
---

# LaViDa-Pathgen

The world's first diffusion-model-based vision-language model for pathology. It is built on LaViDa, trained on the PathGen-1.6M dataset, and fine-tuned on the PathGen-Instruct dataset.

[GitHub](https://github.com/Himanshunitrr/LaViDa-PathGen)
# Inference

Download the checkpoint from https://huggingface.co/himanshunitrr/LaViDa-Pathgen.

You can run inference with [predict.py](https://github.com/Himanshunitrr/LaViDa-PathGen/blob/main/LaViDa/predict.py).
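As a minimal sketch of fetching the weights programmatically (assuming `huggingface_hub` is installed; the repo id is the one from this card, and `predict.py`'s own arguments are whatever its argument parser defines):

```python
# Download the LaViDa-Pathgen checkpoint from the Hugging Face Hub.
# snapshot_download returns the local directory holding the weights;
# point predict.py at that directory per the repo's instructions.
from huggingface_hub import snapshot_download

ckpt_dir = snapshot_download(repo_id="himanshunitrr/LaViDa-Pathgen")
print(ckpt_dir)
```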

# Evaluation

## PathMMU

To evaluate the model on PathMMU, use [main.py](https://github.com/Himanshunitrr/LaViDa-PathGen/blob/main/PathMMU-main/eval/main.py).

Use the conda environment you created earlier for LLaVA when evaluating LLaVA-based models, and the conda environment you created for LaViDa when evaluating LaViDa-based models.
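For example (the environment names `llava` and `lavida` are assumptions, so substitute whatever you named yours, and check `main.py`'s argument parser for its exact flags):

```shell
# Evaluate a LaViDa-based model (environment name is an assumption).
conda activate lavida
python PathMMU-main/eval/main.py

# Evaluate a LLaVA-based model (environment name is an assumption).
conda activate llava
python PathMMU-main/eval/main.py
```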

Also, for some reason LLaVA-based models require an old version of LLaVA; for more information, check [this issue](https://github.com/PathMMU-Benchmark/PathMMU/issues/7).

* In the PathGen-LLaVA paper the reported accuracy is quite low (~60.1), but I got different results.

# Thanks

A huge shoutout to @jacklishufan et al. for [LaViDa](https://github.com/jacklishufan/LaViDa/tree/main) and for answering all my stupid questions, to @superjamessyx et al. for [PathGen](https://github.com/PathFoundation/PathGen-1.6M) and [PathMMU](https://github.com/PathMMU-Benchmark/PathMMU), and to my boss Anant for all the support and guidance.