<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CLIP[[clip]]
## Overview[[overview]]
The CLIP model was proposed in [Learning Transferable Visual Models From Natural Language Supervision](https://huggingface.co/papers/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of image-text pairs. Similarly to the zero-shot capabilities of GPT-2 and 3, it can be instructed in natural language to predict the most relevant text snippet for a given image, without being directly optimized for the task.
The abstract from the paper is the following:
*State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones), enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset-specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at this https URL.*
This model was contributed by [valhalla](https://huggingface.co/valhalla).
The original code can be found [here](https://github.com/openai/CLIP).
## Usage tips and example[[usage-tips-and-example]]
CLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. CLIP uses a ViT-like Transformer to extract visual features and a causal language model to extract text features. Both the text and visual features are then projected into a latent space of identical dimension. The dot product between the projected image and text features is used as the similarity score.
To feed images to the Transformer encoder, each image is split into a sequence of fixed-size, non-overlapping patches, which are then linearly embedded. A [CLS] token is added to serve as the representation of the whole image. The authors also add absolute position embeddings and feed the resulting sequence of vectors into a standard Transformer encoder. The [`CLIPImageProcessor`] can be used to resize (or rescale) and normalize images for the model.
The [`CLIPTokenizer`] is used to encode the text. The [`CLIPProcessor`] wraps [`CLIPImageProcessor`] and [`CLIPTokenizer`] into a single instance that both encodes the text and prepares the images.
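As a rough illustration of what the processor wraps (a minimal sketch, not part of the original example; it reuses the same checkpoint and COCO image as the snippet below), the tokenizer and image processor can also be used on their own:
```python
>>> from PIL import Image
>>> import requests
>>> from transformers import CLIPTokenizer, CLIPImageProcessor

>>> tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
>>> image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # text side: tokenize the prompts into input_ids and attention_mask
>>> text_inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")

>>> # image side: resize, rescale and normalize the image into pixel_values
>>> image_inputs = image_processor(images=image, return_tensors="pt")
```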
๋‹ค์Œ ์˜ˆ์‹œ๋Š” [`CLIPProcessor`]์™€ [`CLIPModel`]์„ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€-ํ…์ŠคํŠธ ์œ ์‚ฌ๋„ ์ ์ˆ˜๋ฅผ ์–ป๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค.
```python
>>> from PIL import Image
>>> import requests
>>> from transformers import CLIPProcessor, CLIPModel
>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities
```
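As described above, the similarity score is the dot product of the projected image and text embeddings (scaled by a learned temperature). The minimal sketch below (an illustration, not part of the original example) computes the same quantity by hand with [`CLIPModel.get_image_features`] and [`CLIPModel.get_text_features`], reusing `model` and `inputs` from the snippet above:
```python
>>> import torch

>>> # reuse `model` and `inputs` from the example above
>>> with torch.no_grad():
...     image_embeds = model.get_image_features(pixel_values=inputs["pixel_values"])
...     text_embeds = model.get_text_features(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])

>>> # normalize so that the dot product becomes a cosine similarity
>>> image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
>>> text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)

>>> # shape (num_images, num_texts); scaling by model.logit_scale.exp() recovers logits_per_image
>>> similarity = image_embeds @ text_embeds.T
```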
### Combining CLIP and Flash Attention 2[[combining-clip-and-flash-attention-2]]
First, make sure to install the latest version of Flash Attention 2.
```bash
pip install -U flash-attn --no-build-isolation
```
Make sure you have hardware that is compatible with Flash Attention 2. Read more about it in the official documentation of the flash-attn repository. Also make sure to load your model in half precision (e.g. `torch.float16`).
<Tip warning={true}>
์ž‘์€ ๋ฐฐ์น˜ ํฌ๊ธฐ๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ, ํ”Œ๋ž˜์‹œ ์–ดํ…์…˜์„ ์‚ฌ์šฉํ•˜๋ฉด ๋ชจ๋ธ์ด ๋А๋ ค์ง€๋Š” ๊ฒƒ์„ ๋А๋‚„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.์•„๋ž˜์˜ [ํ”Œ๋ž˜์‹œ ์–ดํ…์…˜๊ณผ SDPA๋ฅผ ์‚ฌ์šฉํ•œ ์˜ˆ์ƒ ์†๋„ ํ–ฅ์ƒ](#Expected-speedups-with-Flash-Attention-and-SDPA) ์„น์…˜์„ ์ฐธ์กฐํ•˜์—ฌ ์ ์ ˆํ•œ ์–ดํ…์…˜ ๊ตฌํ˜„์„ ์„ ํƒํ•˜์„ธ์š”.
</Tip>
To load and run a model using Flash Attention 2, refer to the snippet below:
```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import CLIPProcessor, CLIPModel
>>> device = "cuda"
>>> dtype = torch.float16
>>> model = CLIPModel.from_pretrained(
... "openai/clip-vit-base-patch32",
... attn_implementation="flash_attention_2",
... device_map=device,
... dtype=dtype,
... )
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
>>> inputs.to(device)
>>> with torch.no_grad():
... with torch.autocast(device):
... outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities
>>> print(probs)
tensor([[0.9946, 0.0052]], device='cuda:0', dtype=torch.float16)
```
### Using Scaled Dot Product Attention (SDPA)[[using-scaled-dot-product-attention-sdpa]]
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the [official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention) page for more information.
SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set `attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.
```python
import torch
from transformers import CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32", dtype=torch.float16, attn_implementation="sdpa")
```
For the best speedups, we recommend loading the model in half precision (e.g. `torch.float16` or `torch.bfloat16`).
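As a minimal end-to-end sketch with SDPA in half precision (assuming a CUDA device; the checkpoint, image, and prompts are reused from the examples above):
```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import CLIPProcessor, CLIPModel

>>> model = CLIPModel.from_pretrained(
...     "openai/clip-vit-base-patch32",
...     attn_implementation="sdpa",
...     device_map="cuda",
...     dtype=torch.float16,
... )
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True).to("cuda")

>>> with torch.no_grad(), torch.autocast("cuda"):
...     outputs = model(**inputs)
>>> probs = outputs.logits_per_image.softmax(dim=1)  # label probabilities
```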
### Expected speedups with Flash Attention and SDPA[[expected-speedups-with-flash-attention-and-sdpa]]
On a local benchmark (NVIDIA A10G, PyTorch 2.3.1+cu121) with `float16`, we saw the following speedups during inference with the `"openai/clip-vit-large-patch14"` checkpoint.
[Code](https://gist.github.com/qubvel/ac691a54e54f9fae8144275f866a7ff8):
#### CLIPTextModel[[cliptextmodel]]
| Num text labels | Eager (s/iter) | FA2 (s/iter) | FA2 speedup | SDPA (s/iter) | SDPA speedup |
|------------------:|-----------------:|---------------:|--------------:|----------------:|---------------:|
| 4 | 0.009 | 0.012 | 0.737 | 0.007 | 1.269 |
| 16 | 0.009 | 0.014 | 0.659 | 0.008 | 1.187 |
| 32 | 0.018 | 0.021 | 0.862 | 0.016 | 1.142 |
| 64 | 0.034 | 0.034 | 1.001 | 0.03 | 1.163 |
| 128 | 0.063 | 0.058 | 1.09 | 0.054 | 1.174 |
![clip_text_model_viz_3](https://github.com/user-attachments/assets/e9826b43-4e66-4f4c-952b-af4d90bd38eb)
#### CLIPVisionModel[[clipvisionmodel]]
| Image batch size | Eager (s/iter) | FA2 (s/iter) | FA2 speedup | SDPA (s/iter) | SDPA speedup |
|-------------------:|-----------------:|---------------:|--------------:|----------------:|---------------:|
| 1 | 0.016 | 0.013 | 1.247 | 0.012 | 1.318 |
| 4 | 0.025 | 0.021 | 1.198 | 0.021 | 1.202 |
| 16 | 0.093 | 0.075 | 1.234 | 0.075 | 1.24 |
| 32 | 0.181 | 0.147 | 1.237 | 0.146 | 1.241 |
![clip_image_model_viz_3](https://github.com/user-attachments/assets/50a36206-e3b9-4adc-ac8e-926b8b071d63)
#### CLIPModel[[clipmodel]]
| Image batch size | Num text labels | Eager (s/iter) | FA2 (s/iter) | FA2 speedup | SDPA (s/iter) | SDPA speedup |
|-------------------:|------------------:|-----------------:|---------------:|--------------:|----------------:|---------------:|
| 1 | 4 | 0.025 | 0.026 | 0.954 | 0.02 | 1.217 |
| 1 | 16 | 0.026 | 0.028 | 0.918 | 0.02 | 1.287 |
| 1 | 64 | 0.042 | 0.046 | 0.906 | 0.036 | 1.167 |
| 4 | 4 | 0.028 | 0.033 | 0.849 | 0.024 | 1.189 |
| 4 | 16 | 0.034 | 0.035 | 0.955 | 0.029 | 1.169 |
| 4 | 64 | 0.059 | 0.055 | 1.072 | 0.05 | 1.179 |
| 16 | 4 | 0.096 | 0.088 | 1.091 | 0.078 | 1.234 |
| 16 | 16 | 0.102 | 0.09 | 1.129 | 0.083 | 1.224 |
| 16 | 64 | 0.127 | 0.11 | 1.157 | 0.105 | 1.218 |
| 32 | 4 | 0.185 | 0.159 | 1.157 | 0.149 | 1.238 |
| 32 | 16 | 0.19 | 0.162 | 1.177 | 0.154 | 1.233 |
| 32 | 64 | 0.216 | 0.181 | 1.19 | 0.176 | 1.228 |
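For reference, a rough sketch of how such a per-iteration timing could be measured (an illustrative approximation, not the exact script linked above; the batch size, prompts, warmup, and iteration count are assumptions):
```python
import torch
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

def time_clip_forward(attn_implementation, n_iters=20):
    """Roughly time one CLIPModel forward pass for a given attention implementation."""
    model = CLIPModel.from_pretrained(
        "openai/clip-vit-large-patch14",
        attn_implementation=attn_implementation,
        device_map="cuda",
        dtype=torch.float16,
    )
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

    url = "http://images.cocodataset.org/val2017/000000039769.jpg"
    image = Image.open(requests.get(url, stream=True).raw)
    texts = ["a photo of a cat", "a photo of a dog", "a photo of a car", "a photo of a tree"]
    inputs = processor(text=texts, images=[image] * 4, return_tensors="pt", padding=True).to("cuda")

    with torch.no_grad(), torch.autocast("cuda", dtype=torch.float16):
        for _ in range(3):  # warmup iterations
            model(**inputs)
        torch.cuda.synchronize()
        start, end = torch.cuda.Event(enable_timing=True), torch.cuda.Event(enable_timing=True)
        start.record()
        for _ in range(n_iters):
            model(**inputs)
        end.record()
        torch.cuda.synchronize()
    return start.elapsed_time(end) / 1000 / n_iters  # seconds per iteration

for impl in ["eager", "sdpa", "flash_attention_2"]:
    print(impl, time_clip_forward(impl))
```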
## Resources[[resources]]
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIP.
- [Fine tuning CLIP with Remote Sensing (Satellite) images and captions](https://huggingface.co/blog/fine-tune-clip-rsicd): a blog post about how to fine-tune CLIP with the [RSICD dataset](https://github.com/201528014227051/RSICD_optimal) and a comparison of performance with and without data augmentation.
- This [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text) shows how to train a CLIP-like vision-text dual encoder model using pre-trained vision and text encoders on the [COCO dataset](https://cocodataset.org/#home).
<PipelineTag pipeline="image-to-text"/>
- A [notebook](https://colab.research.google.com/drive/1tuoAC5F4sC7qid56Z0ap-stR3rwdk0ZV?usp=sharing) on how to use a pretrained CLIP model for image captioning inference with beam search.
**Image retrieval**
- A [notebook](https://colab.research.google.com/drive/1bLVwVKpAndpEDHqjzxVPr_9nGrSbuOQd?usp=sharing) on image retrieval using a pretrained CLIP model and computing the MRR (Mean Reciprocal Rank) score. 🌎
- A [notebook](https://colab.research.google.com/github/deep-diver/image_search_with_natural_language/blob/main/notebooks/Image_Search_CLIP.ipynb) on image retrieval and showing the similarity score. 🌎
- A [notebook](https://colab.research.google.com/drive/1xO-wC_m_GNzgjIBQ4a4znvQkvDoZJvH4?usp=sharing) on how to map images and texts to the same vector space using Multilingual CLIP. 🌎
- A [notebook](https://colab.research.google.com/github/vivien000/clip-demo/blob/master/clip.ipynb#scrollTo=uzdFhRGqiWkR) on how to run CLIP on semantic image search using the [Unsplash](https://unsplash.com) and [TMDB](https://www.themoviedb.org/) datasets. 🌎
**Explainability**
- A [notebook](https://colab.research.google.com/github/hila-chefer/Transformer-MM-Explainability/blob/main/CLIP_explainability.ipynb) on how to visualize the similarity between the input tokens and the image segments. 🌎
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
## CLIPConfig[[transformers.CLIPConfig]]
[[autodoc]] CLIPConfig
## CLIPTextConfig[[transformers.CLIPTextConfig]]
[[autodoc]] CLIPTextConfig
## CLIPVisionConfig[[transformers.CLIPVisionConfig]]
[[autodoc]] CLIPVisionConfig
## CLIPTokenizer[[transformers.CLIPTokenizer]]
[[autodoc]] CLIPTokenizer
- get_special_tokens_mask
- save_vocabulary
## CLIPTokenizerFast[[transformers.CLIPTokenizerFast]]
[[autodoc]] CLIPTokenizerFast
## CLIPImageProcessor[[transformers.CLIPImageProcessor]]
[[autodoc]] CLIPImageProcessor
- preprocess
## CLIPProcessor[[transformers.CLIPProcessor]]
[[autodoc]] CLIPProcessor
## CLIPModel[[transformers.CLIPModel]]
[[autodoc]] CLIPModel
- forward
- get_text_features
- get_image_features
## CLIPTextModel[[transformers.CLIPTextModel]]
[[autodoc]] CLIPTextModel
- forward
## CLIPTextModelWithProjection[[transformers.CLIPTextModelWithProjection]]
[[autodoc]] CLIPTextModelWithProjection
- forward
## CLIPVisionModelWithProjection[[transformers.CLIPVisionModelWithProjection]]
[[autodoc]] CLIPVisionModelWithProjection
- forward
## CLIPVisionModel[[transformers.CLIPVisionModel]]
[[autodoc]] CLIPVisionModel
- forward
## CLIPForImageClassification[[transformers.CLIPForImageClassification]]
[[autodoc]] CLIPForImageClassification
- forward