Merge pull request #5 from andreped/readme-refactoring
README.md CHANGED
@@ -1,12 +1,57 @@
<div align="center">
<img src="https://user-images.githubusercontent.com/29090665/197514781-11c92d1e-a692-4c78-8f0f-6cb863c6f572.png" alt="drawing" width="600">
</div>

<div align="center">
<h1 align="center">DDMR: Deep Deformation Map Registration</h1>
<h3 align="center">Cross-modal transfer learning and adaptive multi-task learning for improved abdominal CT registration</h3>

**DDMR** was developed by SINTEF Health Research. A paper has been submitted to X, and the preprint is openly available on [arXiv](some-url-here.com).

</div>

## 💻 Getting started

1. Set up a virtual environment:
```
virtualenv -ppython3 venv --clear
source venv/bin/activate
```

2. Install the requirements (an optional sanity check is sketched below):
```
pip install -r requirements.txt
```
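
As a quick sanity check (generic, not specific to this repository), you can confirm that the active interpreter comes from the freshly created virtual environment before moving on:
```
# Run with the environment activated; sys.prefix should point at the ./venv
# directory created above, so pip-installed packages resolve from there.
import sys

print(sys.prefix)
```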

## 🏋️‍♂️ Training

Use the "MultiTrain" scripts to launch training, providing the necessary parameters. The scripts in the COMET folder accept a .ini configuration file (see COMET/train_config_files for example configurations).

For instance:
```
python TrainingScripts/Train_3d.py
```
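
The exact contents of these configuration files are defined by the examples under COMET/train_config_files. As a rough, hypothetical sketch (the file name, the [TRAIN] section, and its keys below are illustrative assumptions, not the repository's actual schema), such a .ini file could be read with Python's standard-library configparser:
```
# Illustrative only: load a hypothetical training configuration.
import configparser

config = configparser.ConfigParser()
config.read("COMET/train_config_files/example.ini")  # hypothetical file name

# Pull typed values out of an assumed [TRAIN] section, with defaults if absent.
epochs = config.getint("TRAIN", "epochs", fallback=100)
learning_rate = config.getfloat("TRAIN", "learning_rate", fallback=1e-4)
print(f"epochs={epochs}, learning_rate={learning_rate}")
```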

## 🔍 Evaluate

Use the "Evaluate_network" scripts to test the trained models. In the Brain folder, use "Evaluate_network__test_fixed.py" instead.

For instance:
```
python EvaluationScripts/evaluation.py
```
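
How registration quality is scored depends on the evaluation scripts themselves, but overlap between a fixed-image segmentation and the warped moving-image segmentation (e.g. the Dice coefficient) is a metric commonly reported for deformable registration. A minimal, generic NumPy sketch of that metric (an illustration, not this repository's implementation):
```
# Generic Dice overlap between two binary masks; illustration only.
import numpy as np

def dice(fixed_mask, warped_mask):
    fixed = np.asarray(fixed_mask, dtype=bool)
    warped = np.asarray(warped_mask, dtype=bool)
    denominator = fixed.sum() + warped.sum()
    if denominator == 0:
        return 1.0  # both masks empty: define overlap as perfect
    return 2.0 * np.logical_and(fixed, warped).sum() / denominator

# Tiny dummy volumes standing in for fixed and warped segmentations.
a = np.zeros((8, 8, 8), dtype=bool); a[2:6, 2:6, 2:6] = True
b = np.zeros((8, 8, 8), dtype=bool); b[3:7, 3:7, 3:7] = True
print(f"Dice: {dice(a, b):.3f}")
```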

## 🏆 Acknowledgements

## ✨ How to cite

Please consider citing our paper if you find the work useful:
<pre>
@misc{frutos2022ddmr,
  author = {Pérez de Frutos, Javier and Pedersen, André and Pelanis, Egidijus and Bouget, David and Survarachakan, Shanmugapriya and Langø, Thomas and Lindseth, Frank and Elle, Ole-Jakob},
  publisher = {arXiv},
  title = {Cross-modal transfer learning and adaptive multi-task learning for improved abdominal CT registration},
  year = {2022},
  doi = {10.48550/ARXIV.2011.06033},
  url = {https://arxiv.org/abs/2011.06033}
}
</pre>