[arXiv](https://arxiv.org/abs/2410.16290)
[Project Page](https://armeet.ca/nomri)

# A Unified Model for Compressed Sensing MRI Across Undersampling Patterns

> [**A Unified Model for Compressed Sensing MRI Across Undersampling Patterns**](https://arxiv.org/abs/2410.16290) \
> Armeet Singh Jatyani, Jiayun Wang, Aditi Chandrashekar, Zihui Wu, Miguel Liu-Schiaffini, Bahareh Tolooshams, Anima Anandkumar \
> *Paper at [CVPR 2025](https://cvpr.thecvf.com/Conferences/2025/AcceptedPapers)*



> _**(a) Unified Model:** NO works across various undersampling patterns, unlike CNNs (e.g., [E2E-VarNet](#)), which need a separate model for each pattern._ \
> _**(b) Consistent Performance:** NO consistently outperforms CNNs, especially at 2× acceleration with a single unrolled cascade._ \
> _**(c) Resolution-Agnostic:** NO maintains a fixed kernel size regardless of image resolution, reducing the risk of aliasing._ \
> _**(d) Zero-Shot Super-Resolution:** NO outperforms CNNs in reconstructing high-resolution MRIs without retraining._



> _**(a) Zero-Shot Extended FOV:** Under 4× Gaussian undersampling, NO achieves higher PSNR with fewer artifacts than E2E-VN, despite both models being trained only on a 160 × 160 FOV._ \
> _**(b) Zero-Shot Super-Resolution in Image Space:** For 2× radial undersampling with 640 × 640 input via bilinear upsampling, NO preserves quality while E2E-VN introduces artifacts._
## Requirements

We have tested training and inference on the following hardware/software versions; there is no reason it shouldn't work on slightly older driver/CUDA versions.

- RTX 4090 and A100 GPUs with CUDA 12.4 and NVML/driver version 550
- Ubuntu 22.04.3 LTS and SUSE Linux Enterprise Server 15
- All Python packages are listed in `pyproject.toml` (see Setup)

## Setup

We use `uv` for environment setup; it is 10-100x faster than vanilla pip and conda. If you don't have `uv`, install it from [here](https://docs.astral.sh/uv/getting-started/installation/) (no sudo required). On Linux you can install it with `curl -LsSf https://astral.sh/uv/install.sh | sh`. If you would rather use a virtual environment managed by vanilla Python or conda, all packages and their versions are listed in `pyproject.toml` under "dependencies."
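If you go the vanilla route, a minimal sketch looks like this (assuming `python3` is on your PATH; `pip install .` resolves the dependencies declared in `pyproject.toml`):

```shell
# Create and activate a plain virtual environment, then install the
# project along with the dependencies listed in pyproject.toml.
python3 -m venv .venv
source .venv/bin/activate
pip install .
```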

In the root directory, run:

```bash
uv sync
```

Then you can activate the environment with:
```bash
source .venv/bin/activate
```

Note that activation is optional: you can run scripts in this venv without activating it, using `uv run python script.py` or the abbreviated `uv run script.py`.

`uv` will create the virtual environment for you and install all packages.

Then to download the pretrained weights, run:

```bash
uv run scripts/download_weights.py
```
This downloads the pretrained weights into the `weights/` directory.

Finally, to run the scripts, make them executable:
```bash
chmod u+x scripts/*
```

Then you can run any script. For example:
```bash
./scripts/knee_multipatt.sh
```

By default, Weights & Biases (WandB) logging is disabled, so scripts print results to stdout. If you want to visualize results in WandB, add your WandB API key at the top of the script. We log image predictions as well as PSNR, NMSE, and SSIM metrics for each epoch.
```bash
export WANDB_API_KEY=***************
```
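Alternatively, you can scope the key to a single run without exporting it; this assumes the training scripts read `WANDB_API_KEY` from the environment, as the `export` above implies:

```shell
# Set WANDB_API_KEY for this one invocation only; it does not
# persist in your shell afterwards.
WANDB_API_KEY=*************** ./scripts/knee_multipatt.sh
```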
Before you can begin training/inference, you will need to download and process the dataset. See the "Datasets" section below.

## Datasets

We use the fastMRI dataset, which can be downloaded [here](https://fastmri.med.nyu.edu/). \
Dataset classes are provided in `fastmri/datasets.py`:
- `SliceDatasetLMDB`: dataset class for the significantly faster LMDB format
- `SliceDataset`: dataset class for the original fastMRI dataset

We convert the raw HDF5-formatted fastMRI samples into LMDB format, which accelerates training and validation by a significant factor. Once you have downloaded the fastMRI dataset, run `scripts/gen_lmdb_dataset.py` to convert it:

```bash
uv run scripts/gen_lmdb_dataset.py --body_part brain --partition val -o /path/to/lmdb/dataset
```

Do this for every dataset you need: (brain, knee) × (train, val). To choose a smaller subset for faster training/inference, add `--sample_rate 0.X`.

By default we use the LMDB format. If you want to use the original `SliceDataset` class, you can swap out the dataset class in `main.py`.

Finally, modify your `fastmri.yaml` with the correct dataset paths:

```yaml
log_path: /tmp/logs
checkpoint_path: /tmp/checkpoints

lmdb:
  knee_train_path: **/**/**/knee_train_lmdb
  knee_val_path: **/**/**/knee_val_lmdb
  brain_train_path: **/**/**/brain_train_lmdb
  brain_val_path: **/**/**/brain_val_lmdb
```

## Training and Validation

`main.py` is used for both training and validation. We follow the original fastMRI repo and use Lightning. We provide both a plain PyTorch model in `models/no_varnet.py` (if you want a thinner abstraction) and a Lightning-wrapped `models/lightning/no_varnet_module.py` that makes distributed training across multiple GPUs easier.

## Citation

If you found our work helpful or used any of our models (UDNO), please cite the following:
```bibtex
@inproceedings{jatyani2025nomri,
  author    = {Armeet Singh Jatyani* and Jiayun Wang* and Aditi Chandrashekar and Zihui Wu and Miguel Liu-Schiaffini and Bahareh Tolooshams and Anima Anandkumar},
  title     = {A Unified Model for Compressed Sensing MRI Across Undersampling Patterns},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR) Proceedings},
  year      = {2025}
}
```


|