---
license: mit
tags:
  - keras
  - medical-imaging
  - deep-learning
  - .h5-model
framework: keras
task: image-translation
---

# pyMEAL: Multi-Encoder-Augmentation-Aware-Learning

pyMEAL is a multi-encoder framework for augmentation-aware learning that performs accurate CT-to-T1-weighted MRI translation under diverse augmentations. It uses four dedicated encoders and three fusion strategies: concatenation (CC), a fusion layer (FL), and a builder controller block (BD), to capture augmentation-specific features. MEAL-BD outperforms conventional augmentation methods, achieving SSIM > 0.83 and PSNR > 25 dB on CT-to-T1w translation.
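
As a toy illustration of the concatenation (CC) idea, not the pyMEAL implementation itself, channel-wise fusion of several encoder outputs can be sketched with NumPy (shapes are illustrative assumptions):

```python
import numpy as np

# Toy sketch of the CC (concatenation) fusion idea: four encoder streams
# each emit a feature map, which are fused along the channel axis.
# Shapes are illustrative only, not taken from the pyMEAL code.
encoder_outputs = [np.random.rand(1, 64, 64, 32) for _ in range(4)]

fused = np.concatenate(encoder_outputs, axis=-1)
print(fused.shape)  # (1, 64, 64, 128)
```

The FL and BD strategies instead learn how to combine the streams; see the paper for details.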

## Dependencies

- tensorflow
- matplotlib
- SimpleITK
- scipy
- antspyx
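
To check which of these are available in your environment, a small stdlib-only helper can be used (the import names below, e.g. `ants` for antspyx, are assumptions; adjust if your install differs):

```python
import importlib.util

# Import names for each dependency (antspyx is imported as "ants";
# treat these names as assumptions and adjust if your install differs).
REQUIRED = ["tensorflow", "matplotlib", "SimpleITK", "scipy", "ants"]

def missing_packages(packages):
    """Return the packages that cannot be imported in this environment."""
    return [p for p in packages if importlib.util.find_spec(p) is None]

print(missing_packages(REQUIRED))
```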


---

## Available Models

| Model ID | File Name                                      | Description                                 |
|----------|------------------------------------------------|---------------------------------------------|
| BD       | `builder1_mode1l1abW512_1_11211z1p1rt_.h5`     | Builder-based architecture model            |
| CC       | `best_moderRl_RHID2_1mo.h5`                    | Encoder-concatenation-based configuration   |
| FL       | `bestac22_mode3l_512m2_m21.h5`                 | Feature-level fusion-based model            |
| NA       | `direct7_11ag23f11.h5`                         | Direct training baseline model              |
| TA       | `best_modelaf2ndab7_221ag12g11.h5`             | Traditional augmentation configuration model|
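
For scripting, the table above can be mirrored as a small lookup dictionary (a convenience mapping, not part of the pyMEAL package itself):

```python
# Convenience mapping from model ID to checkpoint file name, mirroring
# the "Available Models" table (not part of the pyMEAL API).
PYMEAL_MODELS = {
    "BD": "builder1_mode1l1abW512_1_11211z1p1rt_.h5",
    "CC": "best_moderRl_RHID2_1mo.h5",
    "FL": "bestac22_mode3l_512m2_m21.h5",
    "NA": "direct7_11ag23f11.h5",
    "TA": "best_modelaf2ndab7_221ag12g11.h5",
}

print(PYMEAL_MODELS["BD"])
```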

---

### Model Architecture Overview

![Model Diagram](https://huggingface.co/AI-vBRAIN/pyMEAL/resolve/main/Fig1_TA_NA.png)

*Figure 1. Model architectures for training with no augmentation (NA) and traditional augmentation (TA).*

![Model2 Diagram](https://huggingface.co/AI-vBRAIN/pyMEAL/resolve/main/Fig2_BD_CC_FL.png)

*Figure 2. Multi-stream model architectures using the builder controller block (BD), fusion layer (FL), and encoder concatenation (CC) strategies.*


## Download Model Files

You can download any `.h5` file directly:

- [Download builder1_mode1l1abW512_1_11211z1p1rt_.h5](https://huggingface.co/AI-vBRAIN/pyMEAL/resolve/main/builder1_mode1l1abW512_1_11211z1p1rt_.h5)
- [Download best_moderRl_RHID2_1mo.h5](https://huggingface.co/AI-vBRAIN/pyMEAL/resolve/main/best_moderRl_RHID2_1mo.h5)
- [Download bestac22_mode3l_512m2_m21.h5](https://huggingface.co/AI-vBRAIN/pyMEAL/resolve/main/bestac22_mode3l_512m2_m21.h5)
- [Download direct7_11ag23f11.h5](https://huggingface.co/AI-vBRAIN/pyMEAL/resolve/main/direct7_11ag23f11.h5)
- [Download best_modelaf2ndab7_221ag12g11.h5](https://huggingface.co/AI-vBRAIN/pyMEAL/resolve/main/best_modelaf2ndab7_221ag12g11.h5)
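
All files follow the same `resolve/main` URL pattern, so download links can also be built programmatically. The helper below is a generic sketch using the standard library (the actual download call is commented out to avoid unwanted network access):

```python
from urllib.request import urlretrieve

BASE_URL = "https://huggingface.co/AI-vBRAIN/pyMEAL/resolve/main/"

def model_url(filename):
    """Build the direct-download URL for a pyMEAL .h5 checkpoint."""
    return BASE_URL + filename

url = model_url("direct7_11ag23f11.h5")
print(url)
# urlretrieve(url, "direct7_11ag23f11.h5")  # uncomment to download the file
```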


---
## How to Use

### Load a Model (Basic)

```python
import tensorflow as tf

# Load the model
model = tf.keras.models.load_model("model.h5", compile=False)

# Run inference
output = model.predict(input_data)
```

Here, `input_data` refers to a CT image, and the corresponding T1-weighted (T1w) image is produced as the output.
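
CT intensities are typically preprocessed before inference. One common step, shown here as an assumption rather than the documented pyMEAL pipeline, is clipping Hounsfield units to a window and rescaling to [0, 1], then adding batch and channel dimensions:

```python
import numpy as np

def normalize_ct(volume, lo=-1000.0, hi=1000.0):
    """Clip CT intensities to an assumed Hounsfield-unit window and
    rescale to [0, 1]. The window and the 512x512 input size below are
    illustrative; match the preprocessing in the pyMEAL tutorials."""
    v = np.clip(np.asarray(volume, dtype="float32"), lo, hi)
    return (v - lo) / (hi - lo)

slice_2d = normalize_ct(np.random.uniform(-2000, 2000, size=(512, 512)))
input_data = slice_2d[np.newaxis, ..., np.newaxis]  # add batch and channel dims
print(input_data.shape)  # (1, 512, 512, 1)
```

Check `model.input_shape` after loading to confirm the resolution and channel layout your chosen checkpoint expects.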

For detailed instructions on how to use each module of the **pyMEAL** software, please refer to the [tutorial section on our GitHub repository](https://github.com/ai-vbrain/pyMEAL).

### How to Cite These Models?
Please cite the following:

```bibtex
@article{ilyas2025pymeal,
  title={pyMEAL: A Multi-Encoder Augmentation-Aware Learning for Robust and Generalizable Medical Image Translation},
  author={Ilyas, Abdul-mojeed Olabisi and Maradesa, Adeleke and Banzi, Jamal and Huang, Jianpan and Mak, Henry KF and Chan, Kannie WY},
  journal={arXiv preprint arXiv:2505.24421},
  year={2025}
}
```

### How to Get Support?

For help, contact:

- Dr. Ilyas (<amoiIyas@hkcoche.org>)  
- Dr. Maradesa (<amaradesa@hkcoche.org>)