jiayangshi nielsr HF Staff committed on
Commit
0f5000d
·
1 Parent(s): bb4234a

Improve model card metadata and fix usage snippet (#1)


- Improve model card metadata and fix usage snippet (c21c9607cb4176b0359a2e09cd3654ea49541691)


Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>

Files changed (1)
README.md +41 -25
README.md CHANGED
@@ -1,23 +1,23 @@
  ---
- license: mit
  library_name: diffusers
  tags:
- - computed-tomography
- - ct-reconstruction
- - diffusion-model
- - inverse-problems
- - dm4ct
- - sparse-view-ct
  ---

  # Pixel Diffusion UNet – Real-world Synchrotron Dataset (DM4CT)

- This repository contains the pretrained **pixel-space diffusion UNet** used in the
- **DM4CT: Benchmarking Diffusion Models for CT Reconstruction (ICLR 2026)** benchmark.

- 🔗 Paper: https://openreview.net/forum?id=YE5scJekg5
- 🔗 Arxiv: https://arxiv.org/abs/2602.18589
- 🔗 Codebase: https://github.com/DM4CT/DM4CT

  ---

@@ -31,7 +31,7 @@ It operates directly in **pixel space** (not latent space).
  - **Channels**: 1 (grayscale CT slice)
  - **Training objective**: ε-prediction (standard DDPM formulation)
  - **Noise schedule**: Linear beta schedule
- - **Training dataset**: Synchrotron Dataset of rocks
  - **Intensity normalization**: Rescaled to (-1, 1)

  This model is intended to be combined with data-consistency correction for CT reconstruction.
@@ -40,8 +40,7 @@ This model is intended to be combined with data-consistency correction for CT re

  ## 📊 Dataset: Real-world Synchrotron Dataset

- Source:
- https://zenodo.org/records/15420527

  Preprocessing steps:
  - Train/test split
@@ -54,22 +53,39 @@ The model learns an unconditional image prior over CT slices.

  ## 🧠 Training Details

- - Optimizer: AdamW
- - Learning rate: 1e-4
- - Batch size: (insert your batch size)
- - Training steps: (insert number of steps)
- - Hardware: NVIDIA A100 GPU
-
- Training script:
- https://github.com/DM4CT/DM4CT/blob/main/train_pixel.py

  ---

  ## 🚀 Usage

  ```python
  from diffusers import DDPMPipeline
  pipeline = DDPMPipeline.from_pretrained("jiayangshi/synchrotron_pixel_diffusion")
- )

- model.eval()
  ---
  library_name: diffusers
+ license: mit
+ pipeline_tag: image-to-image
  tags:
+ - computed-tomography
+ - ct-reconstruction
+ - diffusion-model
+ - inverse-problems
+ - dm4ct
+ - sparse-view-ct
  ---

  # Pixel Diffusion UNet – Real-world Synchrotron Dataset (DM4CT)

+ This repository contains the pretrained **pixel-space diffusion UNet** presented in the paper [DM4CT: Benchmarking Diffusion Models for Computed Tomography Reconstruction](https://huggingface.co/papers/2602.18589).

+ 🔗 **Project Page:** [https://dm4ct.github.io/DM4CT/](https://dm4ct.github.io/DM4CT/)
+ 🔗 **Arxiv:** [https://arxiv.org/abs/2602.18589](https://arxiv.org/abs/2602.18589)
+ 🔗 **Codebase:** [https://github.com/DM4CT/DM4CT](https://github.com/DM4CT/DM4CT)

  ---

  - **Channels**: 1 (grayscale CT slice)
  - **Training objective**: ε-prediction (standard DDPM formulation)
  - **Noise schedule**: Linear beta schedule
+ - **Training dataset**: Real-world Synchrotron Dataset of rocks
  - **Intensity normalization**: Rescaled to (-1, 1)

  This model is intended to be combined with data-consistency correction for CT reconstruction.
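The interplay between the diffusion prior and data-consistency correction can be sketched as follows. This is an illustrative toy, not the DM4CT implementation: `A` (a small random linear "projector"), `y` (toy measurements), and the zero-output `eps_model` are hypothetical stand-ins for the Radon transform, the measured sinogram, and this repository's pretrained ε-prediction UNet.

```python
import torch

# Toy sketch: interleave DDPM ancestral steps with a data-consistency
# gradient correction. A, y, and eps_model are stand-ins (see lead-in).
torch.manual_seed(0)

T = 50
betas = torch.linspace(1e-4, 0.02, T)          # linear beta schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

A = torch.randn(16, 64)                        # toy projector (16 rays, 64 pixels)
y = A @ torch.randn(64)                        # toy measurements
lr = 1.0 / torch.linalg.matrix_norm(A, ord=2) ** 2  # stable step for ||Ax - y||^2

def eps_model(x_t, t):
    """Stand-in for the pretrained UNet's noise prediction."""
    return torch.zeros_like(x_t)

x = torch.randn(64)                            # start from pure noise
for t in reversed(range(T)):
    # DDPM ancestral update in eps-prediction form.
    eps = eps_model(x, t)
    mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
    noise = torch.sqrt(betas[t]) * torch.randn(64) if t > 0 else torch.zeros(64)
    x = mean + noise
    # Data-consistency correction: one gradient step toward A x = y.
    x = x - lr * (A.T @ (A @ x - y))
```

The correction step is plain gradient descent on the measurement residual; the benchmarked methods differ mainly in how and when this correction is applied during sampling.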
 

  ## 📊 Dataset: Real-world Synchrotron Dataset

+ Source: [Zenodo](https://zenodo.org/records/15420527)

  Preprocessing steps:
  - Train/test split

  ## 🧠 Training Details

+ - **Optimizer**: AdamW
+ - **Learning rate**: 1e-4
+ - **Hardware**: NVIDIA A100 GPU
+ - **Training script**: [train_pixel.py](https://github.com/DM4CT/DM4CT/blob/main/train_pixel.py)
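Under the listed settings (AdamW, lr 1e-4, linear betas, inputs rescaled to (-1, 1)), one training step of the ε-prediction objective looks roughly like the sketch below. The one-layer conv net, batch shape, and T=1000 are hypothetical stand-ins; the real run uses the repository's UNet and train_pixel.py.

```python
import torch
import torch.nn.functional as F

# Illustrative single eps-prediction training step (not train_pixel.py).
torch.manual_seed(0)

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # linear beta schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)   # stand-in for the UNet
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

x0 = torch.rand(4, 1, 32, 32) * 2.0 - 1.0      # CT slices rescaled to (-1, 1)
t = torch.randint(0, T, (4,))                  # random timestep per sample
eps = torch.randn_like(x0)                     # target noise
ab = alpha_bars[t].view(-1, 1, 1, 1)
x_t = torch.sqrt(ab) * x0 + torch.sqrt(1.0 - ab) * eps    # forward noising

loss = F.mse_loss(model(x_t), eps)             # predict the added noise
opt.zero_grad()
loss.backward()
opt.step()
```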
 
 
 
 

  ---

  ## 🚀 Usage

+ You can use this model with the `diffusers` library as follows:
+
  ```python
  from diffusers import DDPMPipeline
+
+ # Load the pipeline
  pipeline = DDPMPipeline.from_pretrained("jiayangshi/synchrotron_pixel_diffusion")

+ # Access the UNet model
+ model = pipeline.unet
+ model.eval()
+ ```
+
+ ---
+
+ ## Citation
+
+ ```bibtex
+ @inproceedings{shi2026dmct,
+   title={{DM}4{CT}: Benchmarking Diffusion Models for Computed Tomography Reconstruction},
+   author={Shi, Jiayang and Pelt, Dani{\"e}l M and Batenburg, K Joost},
+   booktitle={The Fourteenth International Conference on Learning Representations},
+   year={2026},
+   url={https://openreview.net/forum?id=YE5scJekg5}
+ }
+ ```