bash scripts/zeroshot_eval.sh 0 \
ViT-B-16 RoBERTa-wwm-ext-base-chinese \
./pretrained_weights/QA-CLIP-base.pt
```
<br><br>

# Huggingface Model and Online Demo
We have open-sourced our model on HuggingFace for easier access and use. We have also prepared a simple online demo for zero-shot classification, so everyone can experience it firsthand. We encourage you to give it a try!

[:star:QA-CLIP-ViT-B-16:star:](https://huggingface.co/TencentARC/QA-CLIP-ViT-B-16)

[:star:QA-CLIP-ViT-L-14:star:](https://huggingface.co/TencentARC/QA-CLIP-ViT-L-14)

Here are some examples for demonstration:

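Under the hood, zero-shot classification with a CLIP-style model like QA-CLIP reduces to comparing the image embedding against the text embeddings of the candidate labels. A minimal NumPy sketch of just that scoring step, using made-up low-dimensional embeddings in place of real model outputs (the actual encoders produce e.g. 512-d vectors):

```python
import numpy as np

def zero_shot_scores(image_emb, text_embs):
    """Softmax over temperature-scaled cosine similarities between one
    image embedding and a stack of candidate-label text embeddings."""
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = 100.0 * (text_embs @ image_emb)  # 100.0 plays the role of CLIP's logit scale
    e = np.exp(logits - logits.max())         # numerically stable softmax
    return e / e.sum()

# Toy 4-d embeddings standing in for encoder outputs.
img = np.array([1.0, 0.0, 0.0, 0.0])
texts = np.array([
    [0.9, 0.1, 0.0, 0.0],  # label closest to the image
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])
probs = zero_shot_scores(img, texts)  # highest probability at index 0
```

The predicted label is simply the `argmax` of `probs`; the demo above does the same thing with real image and text encoders.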
<br><br>
# Acknowledgments
The project code is based on the implementation of <b>[Chinese-CLIP](https://github.com/OFA-Sys/Chinese-CLIP)</b>, and we are very grateful for their outstanding open-source contributions.