# ruImageCaptioning
<a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg"></a>
Inference Notebook: <a href="https://colab.research.google.com/drive/1tsVMWUE6_AKXiHyinCSOSRPGhbjVEyRM?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" height=20></a>
Russian version of CLIP prefix captioning, trained with ruGPT-small + CLIP (OpenAI). It can be used for VQA, image captioning, and similar tasks. Inference takes under 1 s per image, and the model can be efficiently quantized or exported to ONNX. Training took 3 days on 2×1080 Ti GPUs.
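At its core, the CLIP prefix approach maps a CLIP image embedding to a short sequence of "prefix" embeddings that are fed to the GPT decoder in place of text token embeddings. A minimal sketch of such a mapping network (the dimensions and prefix length below are illustrative assumptions, not necessarily the exact values used in this repo):

```python
import torch
import torch.nn as nn

class PrefixMapper(nn.Module):
    """Maps a CLIP image embedding to a sequence of GPT prefix embeddings.

    clip_dim, gpt_dim and prefix_len are assumed values for illustration.
    """
    def __init__(self, clip_dim=512, gpt_dim=768, prefix_len=10):
        super().__init__()
        self.prefix_len = prefix_len
        self.gpt_dim = gpt_dim
        self.mlp = nn.Sequential(
            nn.Linear(clip_dim, (gpt_dim * prefix_len) // 2),
            nn.Tanh(),
            nn.Linear((gpt_dim * prefix_len) // 2, gpt_dim * prefix_len),
        )

    def forward(self, clip_emb):
        # (batch, clip_dim) -> (batch, prefix_len, gpt_dim)
        return self.mlp(clip_emb).view(-1, self.prefix_len, self.gpt_dim)

mapper = PrefixMapper()
prefix = mapper(torch.randn(1, 512))  # would be prepended to GPT inputs_embeds
```

During training only this mapper (and optionally the GPT) is updated, while CLIP stays frozen.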
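As a sketch of the quantization claim above: PyTorch's dynamic quantization converts the linear layers of a GPT-style network to int8 weights in a single call. The model here is a hypothetical stand-in, not the actual ruGPT-small checkpoint:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the caption model's feed-forward/output layers;
# the real model would be a ruGPT-small loaded via transformers.
model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 50257))

# Post-training dynamic quantization: weights stored as int8,
# activations quantized on the fly. Only nn.Linear layers are converted.
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

out = qmodel(torch.randn(1, 768))
```

This runs on CPU and typically shrinks the checkpoint roughly 4× with little quality loss; exact speed/quality trade-offs for this model were not measured here.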
Trained and validated on ruCOCO:

| Metric    | Score |
|-----------|-------|
| BLEU      | 37.3  |
| chrF      | 32.4  |
| ROUGE-1-F | 33.0  |
| ROUGE-2-F | 14.1  |
| ROUGE-L-F | 30.3  |
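For reference, ROUGE-L-F scores caption overlap via the longest common subsequence of candidate and reference tokens. A self-contained sketch (using a plain F1 combination of LCS precision and recall; the exact weighting behind the score above is an assumption):

```python
def lcs_len(a, b):
    # Dynamic-programming longest common subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f(candidate, reference):
    """ROUGE-L as plain F1 of LCS-based precision and recall."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)
```

For example, `rouge_l_f("the cat", "the cat sat on the mat")` gives 0.5: precision 1.0, recall 1/3.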
<table>
<tr>
<td><img src="images/photo_2021-11-13_13-08-38.jpg" ></td>
<td><img src="images/Снимок экрана 2022-05-20 в 15.54.33.png" ></td>
</tr>
<tr>
<td>Captioning example</td>
<td>Zero-shot example</td>
</tr>
</table>
This work is based on [CLIP_prefix_caption](https://github.com/rmokady/CLIP_prefix_caption) (the English version):
```
@article{mokady2021clipcap,
  title={ClipCap: CLIP Prefix for Image Captioning},
  author={Mokady, Ron and Hertz, Amir and Bermano, Amit H},
  journal={arXiv preprint arXiv:2111.09734},
  year={2021}
}
```
```
@article{AlexWortega,
  title={ruImage captioning},
  author={Aleksandr Nikolic, Asta gpu server},
}
```
## Acknowledgments
This repository is heavily based on the [CLIP](https://github.com/openai/CLIP) and [Hugging Face Transformers](https://github.com/huggingface/transformers) repositories.
For training we used the [COCO dataset](https://cocodataset.org/#home) and [Conceptual Captions](https://ai.google.com/research/ConceptualCaptions/), translated into Russian by Alex Wortega: [ruCOCO](https://github.com/AlexWortega/ru_COCO).