Update README.md
README.md CHANGED
@@ -9,42 +9,36 @@ license: apache-2.0
This is the base version of Chinese CLIP, with ViT-B/16 as the image encoder and RoBERTa-wwm-base as the text encoder. Chinese CLIP is a simple implementation of CLIP on a large-scale dataset of around 200 million Chinese image-text pairs. For more details, please refer to our technical report https://arxiv.org/abs/2211.01335 and our official GitHub repo https://github.com/OFA-Sys/Chinese-CLIP
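
As a quick, illustrative sanity check (assuming the standard `transformers` `ChineseCLIPConfig` attribute names, which are not part of the original card), the encoder pairing can be read directly from the checkpoint's configuration:

```python
from transformers import ChineseCLIPConfig

# Load only the configuration (no weights needed) and inspect the two towers.
config = ChineseCLIPConfig.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
print(config.vision_config.patch_size)  # 16 -> the "/16" in ViT-B/16
print(config.text_config.hidden_size)   # 768 -> base-size RoBERTa-wwm text encoder
print(config.projection_dim)            # width of the shared image-text embedding space
```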
## Use with the official API
-We provide a simple code snippet to show how to use the API.
-
-```bash
-# to install the latest stable release
-pip install cn_clip
-
-# or install from source code
-cd Chinese-CLIP
-pip install -e .
-```
-After installation, use Chinese CLIP as shown below:
+We provide a simple code snippet to show how to use the API of Chinese-CLIP to compute the image & text embeddings and similarities.
+
```python
-import torch
from PIL import Image
-
-# (usage example as given in the Chinese-CLIP repo README)
-import cn_clip.clip as clip
-from cn_clip.clip import load_from_name, available_models
-print("Available models:", available_models())
-# Available models: ['ViT-B-16', 'ViT-L-14', 'ViT-L-14-336', 'ViT-H-14', 'RN50']
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-model, preprocess = load_from_name("ViT-B-16", device=device, download_root='./')
-model.eval()
-image = preprocess(Image.open("pokemon.jpeg")).unsqueeze(0).to(device)
-text = clip.tokenize(["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]).to(device)
-
-with torch.no_grad():
-    image_features = model.encode_image(image)
-    text_features = model.encode_text(text)
-    # Normalize the features. Please use the normalized features for downstream tasks.
-    image_features /= image_features.norm(dim=-1, keepdim=True)
-    text_features /= text_features.norm(dim=-1, keepdim=True)
-
-    logits_per_image, logits_per_text = model.get_similarity(image, text)
-    probs = logits_per_image.softmax(dim=-1).cpu().numpy()
-
-print("Label probs:", probs)
+import requests
+from transformers import ChineseCLIPProcessor, ChineseCLIPModel
+
+model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
+processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
+
+url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
+image = Image.open(requests.get(url, stream=True).raw)
+# Squirtle, Bulbasaur, Charmander, Pikachu in English
+texts = ["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]
+
+# compute image features
+inputs = processor(images=image, return_tensors="pt")
+image_features = model.get_image_features(**inputs)
+image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True)  # normalize
+
+# compute text features
+inputs = processor(text=texts, padding=True, return_tensors="pt")
+text_features = model.get_text_features(**inputs)
+text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)  # normalize
+
+# compute image-text similarity scores
+inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
+outputs = model(**inputs)
+logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
+probs = logits_per_image.softmax(dim=1)  # probs: [[1.2686e-03, 5.4499e-02, 6.7968e-04, 9.4355e-01]]
```
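
Because both feature tensors above are L2-normalized, their dot product gives cosine similarities, which match `logits_per_image` up to the model's learned temperature. A minimal, illustrative continuation (not from the original card) that reuses the variables from the snippet above:

```python
# Cosine similarities between the normalized image and text features, shape [1, 4].
similarity = image_features @ text_features.T

# Index of the best-matching caption; the sample image should map to 皮卡丘 (Pikachu).
best = similarity.argmax(dim=-1).item()
print(texts[best])
```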

However, if you are not satisfied with only using the API, feel free to check our GitHub repo https://github.com/OFA-Sys/Chinese-CLIP for more details about training and inference.