---
license: mit
tags:
- keras
- medical-imaging
- deep-learning
- .h5-model
framework: keras
task: image-translation
---

# pyMEAL: Multi-Encoder Augmentation-Aware Learning

pyMEAL is a multi-encoder framework for augmentation-aware learning that performs accurate CT-to-T1-weighted MRI translation under diverse augmentations. It uses four dedicated encoders together with three fusion strategies to capture augmentation-specific features: concatenation (CC), a fusion layer (FL), and a builder controller block (BD). The BD variant (MEAL-BD) outperforms conventional augmentation methods, achieving SSIM > 0.83 and PSNR > 25 dB in CT-to-T1w translation.

## Dependencies

- tensorflow
- matplotlib
- SimpleITK
- scipy
- antspyx
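
Assuming a standard Python environment (the project does not pin specific versions, so this is a minimal setup sketch), the dependencies can be installed with pip:

```shell
pip install tensorflow matplotlib SimpleITK scipy antspyx
```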

---

## Available Models

| Model ID | File Name                                  | Description                                  |
|----------|--------------------------------------------|----------------------------------------------|
| BD       | `builder1_mode1l1abW512_1_11211z1p1rt_.h5` | Builder-based architecture model             |
| CC       | `best_moderRl_RHID2_1mo.h5`                | Encoder-concatenation-based configuration    |
| FL       | `bestac22_mode3l_512m2_m21.h5`             | Feature-level fusion-based model             |
| NA       | `direct7_11ag23f11.h5`                     | Direct training baseline model               |
| TA       | `best_modelaf2ndab7_221ag12g11.h5`         | Traditional augmentation configuration model |

---

### Model Architecture Overview

![NA and TA](augment.png)

*Figure 1. Model architecture for the no-augmentation (NA) and traditional augmentation (TA) models.*

![BD](model_arch2.png)

*Figure 2. Model architecture for the multi-stream variants: builder controller block (BD), fusion layer (FL), and encoder concatenation (CC).*

---

## Download Model Files

You can download any `.h5` file directly:

- [Download builder1_mode1l1abW512_1_11211z1p1rt_.h5](https://huggingface.co/AI-vBRAIN/pyMEAL/resolve/main/builder1_mode1l1abW512_1_11211z1p1rt_.h5)
- [Download best_moderRl_RHID2_1mo.h5](https://huggingface.co/AI-vBRAIN/pyMEAL/resolve/main/best_moderRl_RHID2_1mo.h5)
- [Download bestac22_mode3l_512m2_m21.h5](https://huggingface.co/AI-vBRAIN/pyMEAL/resolve/main/bestac22_mode3l_512m2_m21.h5)
- [Download direct7_11ag23f11.h5](https://huggingface.co/AI-vBRAIN/pyMEAL/resolve/main/direct7_11ag23f11.h5)
- [Download best_modelaf2ndab7_221ag12g11.h5](https://huggingface.co/AI-vBRAIN/pyMEAL/resolve/main/best_modelaf2ndab7_221ag12g11.h5)

---

## How to Use

### Load a Model (Basic)

```python
import tensorflow as tf

# Load the model (compile=False: inference only, no training config needed)
model = tf.keras.models.load_model("model.h5", compile=False)

# Run inference
output = model.predict(input_data)
```

Here, `input_data` is a preprocessed CT image, and the output is the corresponding T1-weighted (T1w) image.
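
For illustration, `input_data` can be prepared from a raw CT slice along these lines. Note that the 512×512 size, min-max normalization, and channel layout used here are assumptions for this sketch, not pyMEAL's documented pipeline:

```python
import numpy as np

def prepare_ct_slice(ct_slice):
    """Scale a 2-D CT slice to [0, 1] and add batch and channel axes.

    Hypothetical preprocessing helper: the exact input size and
    normalization that pyMEAL expects may differ.
    """
    ct = ct_slice.astype(np.float32)
    ct = (ct - ct.min()) / (ct.max() - ct.min() + 1e-8)  # min-max normalize
    return ct[np.newaxis, ..., np.newaxis]  # shape (1, H, W, 1)

# Dummy 512x512 slice with Hounsfield-like intensity values
dummy = np.random.randint(-1000, 2000, size=(512, 512))
input_data = prepare_ct_slice(dummy)
print(input_data.shape)  # (1, 512, 512, 1)
```

The batch and channel axes match the `(batch, height, width, channels)` layout that Keras models conventionally expect.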

For detailed instructions on how to use each module of the **pyMEAL** software, please refer to the [tutorial section on our GitHub repository](https://github.com/ai-vbrain/pyMEAL).

### How to Cite These Models?

Please cite the following:

```bibtex
@article{ilyas2025pymeal,
  title={pyMEAL: A Multi-Encoder Augmentation-Aware Learning for Robust and Generalizable Medical Image Translation},
  author={Ilyas, Abdul-mojeed Olabisi and Maradesa, Adeleke and Banzi, Jamal and Huang, Jianpan and Mak, Henry KF and Chan, Kannie WY},
  journal={arXiv preprint arXiv:2505.24421},
  year={2025}
}
```

### How to Get Support?

For help, contact:

- Dr. Ilyas (<amoiIyas@hkcoche.org>)
- Dr. Maradesa (<amaradesa@hkcoche.org>)
|
|
|