</div>
We introduce **Charm**, a novel tokenization approach that preserves **C**omposition, **H**igh-resolution, **A**spect **R**atio, and **M**ulti-scale information simultaneously. By preserving critical information, *Charm* works like a charm for image aesthetic and quality assessment 🌟.

### Quick Inference

```python
model = backbone(training_dataset='tad66k', device='cpu')
prediction = model.predict(tokens, pos_embed, mask_token)
```

**Note:** For the training code, check our [GitHub Page](https://github.com/FBehrad/Charm/).