Update README.md
README.md
CHANGED
@@ -1,47 +1,81 @@

Removed (the previous auto-generated Keras model card):

---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: clip-vit-base-patch32-ko
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# clip-vit-base-patch32-ko

- TensorFlow 2.9.2
- Tokenizers 0.13.1

Added (the new model card):
---
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
  candidate_labels: 고양이, 강아지, 토끼
  example_title: cat and remote
language: ko
license: mit
---

# clip-vit-base-patch32-ko

A Korean CLIP model trained with the method from [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813).

Training code: <https://github.com/Bing-su/KoCLIP_training_code>

Data used: all of the Korean-English parallel data available on AIHUB
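
For context, the paper's recipe is multilingual knowledge distillation: a frozen English CLIP text encoder acts as the teacher, and the student text encoder is trained so that its embeddings of both the English and the Korean side of a parallel pair match the teacher's embedding of the English sentence. The snippet below is only a minimal sketch of that objective, not the actual training code (see the repository above); the model names, the single example pair, and the plain MSE loss are illustrative assumptions.

```python
# Sketch of the multilingual knowledge-distillation objective (MSE between
# student and teacher text embeddings). Illustrative only, not the training code.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoProcessor

teacher = AutoModel.from_pretrained("openai/clip-vit-base-patch32")         # frozen English teacher
teacher_processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
student = AutoModel.from_pretrained("Bingsu/clip-vit-base-patch32-ko")      # student being trained
student_processor = AutoProcessor.from_pretrained("Bingsu/clip-vit-base-patch32-ko")

# One (English, Korean) pair from a parallel corpus.
pairs = [("Two cats are lying on a pink sofa.", "고양이 두 마리가 분홍색 소파에 누워 있다.")]
en = [p[0] for p in pairs]
ko = [p[1] for p in pairs]

with torch.no_grad():
    t_inputs = teacher_processor(text=en, return_tensors="pt", padding=True)
    t_emb = teacher.get_text_features(**t_inputs)        # teacher embeds the English sentences

s_inputs = student_processor(text=en + ko, return_tensors="pt", padding=True)
s_emb = student.get_text_features(**s_inputs)            # student embeds English and Korean

# The student should map both languages onto the teacher's English embedding.
loss = F.mse_loss(s_emb, t_emb.repeat(2, 1))
```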
## How to Use

#### 1. With `AutoModel` and `AutoProcessor`

```python
import requests
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

repo = "Bingsu/clip-vit-base-patch32-ko"
model = AutoModel.from_pretrained(repo)
processor = AutoProcessor.from_pretrained(repo)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# Korean prompts: "고양이 두 마리" = "two cats", "개 두 마리" = "two dogs"
inputs = processor(text=["고양이 두 마리", "개 두 마리"], images=image, return_tensors="pt", padding=True)
with torch.inference_mode():
    outputs = model(**inputs)
    logits_per_image = outputs.logits_per_image
    probs = logits_per_image.softmax(dim=1)
```

```python
>>> probs
tensor([[0.9926, 0.0074]])
```
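
Because the checkpoint loads as a regular CLIP model, image and text embeddings can also be taken separately with `get_image_features` and `get_text_features`. A short sketch reusing `model`, `processor`, and `image` from the example above; the cosine-similarity step is just an illustration:

```python
with torch.inference_mode():
    image_emb = model.get_image_features(**processor(images=image, return_tensors="pt"))
    text_emb = model.get_text_features(**processor(text=["고양이 두 마리"], return_tensors="pt", padding=True))

# Normalize and compare, e.g. for image-text retrieval.
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
similarity = image_emb @ text_emb.T
```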

#### 2. With the `zero-shot-image-classification` pipeline

```python
from transformers import pipeline

repo = "Bingsu/clip-vit-base-patch32-ko"
pipe = pipeline("zero-shot-image-classification", model=repo)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
# Candidate labels: "one cat", "two cats", "cat friends lying on a pink sofa"
result = pipe(images=url, candidate_labels=["고양이 한 마리", "고양이 두 마리", "분홍색 소파에 드러누운 고양이 친구들"], hypothesis_template="{}")
```

```python
>>> result
[{'score': 0.9456236958503723, 'label': '분홍색 소파에 드러누운 고양이 친구들'},
 {'score': 0.05315302312374115, 'label': '고양이 두 마리'},
 {'score': 0.0012233294546604156, 'label': '고양이 한 마리'}]
```
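
Passing `hypothesis_template="{}"` feeds each candidate label to the text encoder as-is; the pipeline's default template is an English sentence ("This is a photo of {}."), which would not combine well with Korean labels.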
## Tokenizer

The tokenizer was trained from the original CLIP tokenizer with `.train_new_from_iterator`, on a mix of Korean and English data at a 7:3 ratio.
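
A minimal sketch of that procedure is shown below; the corpus iterator and the vocabulary size are placeholders, not the values actually used:

```python
# Hypothetical sketch: deriving a new tokenizer from the original CLIP tokenizer
# with train_new_from_iterator, on a roughly 7:3 Korean/English text mix.
from transformers import AutoTokenizer

old_tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")

def corpus_iterator():
    # Placeholder: yield batches of raw text, ~70% Korean and ~30% English.
    yield ["고양이 두 마리가 소파 위에 앉아 있다.", "Two cats are sitting on the sofa."]

new_tokenizer = old_tokenizer.train_new_from_iterator(corpus_iterator(), vocab_size=49408)
new_tokenizer.save_pretrained("clip-vit-base-patch32-ko-tokenizer")
```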
https://github.com/huggingface/transformers/blob/bc21aaca789f1a366c05e8b5e111632944886393/src/transformers/models/clip/modeling_clip.py#L661-L666
```python
# text_embeds.shape = [batch_size, sequence_length, transformer.width]
# take features from the eot embedding (eot_token is the highest number in each sequence)
# casting to torch.int for onnx compatibility: argmax doesn't support int64 inputs with opset 14
pooled_output = last_hidden_state[
    torch.arange(last_hidden_state.shape[0]), input_ids.to(torch.int).argmax(dim=-1)
]
```

Because the CLIP model takes `pooled_output` from the token with the largest id, the eos token has to be the very last token, i.e. the one with the highest id in the vocabulary.
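
A quick sanity check one can run for a given tokenizer (the expectation being that the two printed numbers match):

```python
# The eos token id should be the largest id in the vocabulary, so that the
# argmax over input_ids in the snippet above always lands on the eos position.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Bingsu/clip-vit-base-patch32-ko")
vocab = tokenizer.get_vocab()
print(tokenizer.eos_token_id, max(vocab.values()))
```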