<img src="https://cdn-uploads.huggingface.co/production/uploads/6318256d212fce5a3cde0fe3/5eKOaiTUK4uDeq07MdJIY.png" width="650px"/>
<img src="https://cdn-uploads.huggingface.co/production/uploads/6318256d212fce5a3cde0fe3/MI08kcYgav2ouXlHvvNtu.png" width="650px"/>
# Usage

Download the architecture file and the model weights:

```python
from huggingface_hub import hf_hub_download

REPO_ID = "ayushshah/imagecolorization"

# Fetch the model definition (model.py) into the current directory
hf_hub_download(
    repo_id=REPO_ID,
    filename="model.py",
    local_dir=".",
    local_dir_use_symlinks=False
)

# Fetch the trained weights; hf_hub_download returns the local file path
weights_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="model.safetensors"
)
```

Make sure the input image(s) are of size 224x224 and convert them to the LAB color space; you can use [`kornia`](https://kornia.readthedocs.io/en/stable/color.html#kornia.color.rgb_to_lab).
Isolate the L channel and normalize it to the range [0, 1] (the L channel is originally in the range [0, 100]).

```python
import torch

from model import UNet
from safetensors.torch import load_file

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

model = UNet().to(DEVICE)
state_dict = load_file(weights_path)
model.load_state_dict(state_dict)
model.eval()

with torch.no_grad():
    ab_pred = model(L_normalized.to(DEVICE))
```

The outputs are in the range [-1, 1]. You can map the predicted ab channels back to their original range with a linear scaling, then concatenate the original (unnormalized) L channel with them to get the LAB image.

```python
# Map ab from [-1, 1] back to [-128, 127]
ab = (ab_pred + 1) * 255.0 / 2 - 128.0
ab = torch.clamp(ab, -128, 127)

# Recombine with the original L channel (still in [0, 100])
lab = torch.cat((L, ab), dim=1)
```

# References
- [Let there be Color!: Joint End-to-end Learning of Global and Local Image Priors for Automatic Image Colorization with Simultaneous Classification](https://iizuka.cs.tsukuba.ac.jp/projects/colorization/data/colorization_sig2016.pdf)
- [Colorful Image Colorization](https://arxiv.org/pdf/1603.08511)