nielsr (HF Staff) committed
Commit bd1efd9 · verified · 1 Parent(s): c58b8ad

Improve model card: add pipeline tag, arXiv ID, license, links, and usage instructions


This PR significantly enhances the model card for "3D MedDiffusion" by:

- Adding `pipeline_tag: image-to-3d` to correctly categorize the model for 3D image generation.
- Correcting the `arxiv` metadata to `2412.13059` based on the paper information.
- Adding the `license: mit` based on the majority consensus of peers.
- Updating the model card title to precisely match the paper title.
- Updating the project page link to the more concise version found in the GitHub repository.
- Adding an explicit link to the GitHub repository for easy access to the code.
- Expanding the model's description with key details from the paper abstract.
- Incorporating detailed `Installation`, `Pretrained Models`, and `Inference` instructions with code snippets directly from the GitHub README, to guide users on how to use the model effectively.
- Removing the "The model is coming soon..." placeholder, as usage instructions are now available.

Please review and merge this PR if everything looks good.

Files changed (1)
  1. README.md +41 -6
README.md CHANGED
@@ -6,13 +6,48 @@ tags:
 - medical
 - image-generation
 - diffusion-model
-arxiv: ...
+pipeline_tag: image-to-3d
+arxiv: 2412.13059
+license: mit
 ---
 
-# 3D MedDiffusion: A 3D Medical Diffusion Model for Controllable and High-quality Medical Image Generation
-This is the officical model repository of the paper "[**3D MedDiffusion: A 3D Medical Diffusion Model for Controllable and High-quality Medical Image Generation**](https://arxiv.org/abs/2412.13059)"
-
-**3D MedDiffusion** is a 3D medical image synthesis framework capable of generating high-quality medical images across multiple modalities and organs.
-For more information, please refer to our [**project page**](https://shanghaitech-impact.github.io/3D-MedDiffusion.github.io/) or the [**paper**](https://arxiv.org/abs/2412.13059).
-
-## The model is comming soon...
+# 3D MedDiffusion: A 3D Medical Latent Diffusion Model for Controllable and High-quality Medical Image Generation
+
+This is the official model repository of the paper "[**3D MedDiffusion: A 3D Medical Latent Diffusion Model for Controllable and High-quality Medical Image Generation**](https://arxiv.org/abs/2412.13059)".
+
+**3D MedDiffusion** is a 3D medical image synthesis framework capable of generating high-quality medical images across multiple modalities and organs. It incorporates a novel, highly efficient Patch-Volume Autoencoder for latent space compression and a new noise estimator to capture both local details and global structural information during diffusion denoising. This enables the generation of fine-detailed, high-resolution images (up to 512x512x512) and ensures strong generalizability across tasks like sparse-view CT reconstruction, fast MRI reconstruction, and data augmentation.
+
+For more information, please refer to our:
+* [**Paper (arXiv)**](https://arxiv.org/abs/2412.13059)
+* [**Project Page**](https://shanghaitech-impact.github.io/3D-MedDiffusion/)
+* [**GitHub Repository**](https://github.com/ShanghaiTech-IMPACT/3D-MedDiffusion)
+
+## Installation
+```bash
+# Clone this repo
+git clone https://github.com/ShanghaiTech-IMPACT/3D-MedDiffusion.git
+
+# Set up the environment
+conda create -n 3DMedDiffusion python=3.11.11
+conda activate 3DMedDiffusion
+pip install -r requirements.txt
+```
+
+## Pretrained Models
+The pretrained checkpoints are provided [here](https://drive.google.com/drive/folders/1h1Ina5iUkjfSAyvM5rUs4n1iqg33zB-J?usp=drive_link).
+
+Please download the checkpoints and put them in `./checkpoints`.
+
+## Inference
+Make sure your GPU has at least 40 GB of memory available to run inference at all supported resolutions.
+
+**Generation using 8x downsampling**
+```bash
+python evaluation/class_conditional_generation.py --AE-ckpt checkpoints/PatchVolume_8x_s2.ckpt --model-ckpt checkpoints/BiFlowNet_0453500.pt --output-dir input/your/save/dir
+```
+**Generation using 4x downsampling**
+```bash
+python evaluation/class_conditional_generation_4x.py --AE-ckpt checkpoints/PatchVolume_4x_s2.ckpt --model-ckpt checkpoints/BiFlowNet_4x.pt --output-dir input/your/save/dir
+```