---
library_name: diffusers
pipeline_tag: image-to-image
tags:
- dit4sr
- super-resolution
- diffusion-transformer
base_model: stabilityai/stable-diffusion-3.5-medium
---
# DiT4SR Replication

This repository contains the DiT4SR transformer weights exported from the local `dit4sr-replication` experiment at `checkpoint-150000`.

## What This Repo Contains

This Hugging Face repo publishes only the `transformer/` checkpoint used by `SD3Transformer2DModel`. It does not include the full Stable Diffusion 3.5 base model, tokenizers, schedulers, or the rest of the DiT4SR inference stack.

## Files

- `transformer/` contains the publishable model weights and config.
- `source_checkpoint.json` records the local source path and checkpoint name used for the upload.
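Provenance metadata like this is typically a small JSON object. As an illustration, the snippet below parses a hypothetical inline sample; the exact field names (`source_path`, `checkpoint`) are an assumption, not confirmed by the repository.

```python
import json

# Hypothetical sketch of what source_checkpoint.json might contain;
# the field names here are illustrative assumptions.
sample = '{"source_path": "dit4sr-replication", "checkpoint": "checkpoint-150000"}'

meta = json.loads(sample)
print(meta["checkpoint"])  # -> checkpoint-150000
```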

## Checkpoint Metadata

- Experiment: `dit4sr-replication`
- Source checkpoint: `checkpoint-150000`
- Published artifact: transformer weights only

## Loading In DiT4SR

```python
from model_dit4sr.transformer_sd3 import SD3Transformer2DModel

model = SD3Transformer2DModel.from_pretrained(
    "NisargUpadhyay/ImageSuperResolution-replication",
    subfolder="transformer",
)
```

You still need the rest of the DiT4SR codebase and the base SD3 assets described in the project README.

## Related Resources

- Project repo: `NisargUpadhyayIITJ/Deep-Learning-Course-Project`
- Training/evaluation dataset repo: `NisargUpadhyay/ImageSuperResolution`
- Matching training subset in the dataset repo: `Replication/`