---
language:
- en
pipeline_tag: image-text-to-text
---
|
|
|
|
|
|
|
|
# Model checkpoints for Fine-grained Image Captioning with CLIP Reward (Findings of NAACL 2022)
|
|
|
|
|
* Authors: [Jaemin Cho](https://j-min.io), [David Seunghyun Yoon](https://david-yoon.github.io/), [Ajinkya Kale](https://www.linkedin.com/in/kaleajinkya/), [Franck Dernoncourt](https://research.adobe.com/person/franck-dernoncourt), [Trung Bui](https://sites.google.com/site/trungbuistanford/), [Mohit Bansal](https://www.cs.unc.edu/~mbansal/) |
|
|
* Paper: https://arxiv.org/abs/2205.13115 |
|
|
* Code: https://github.com/j-min/CLIP-Caption-Reward |
|
|
* Inference Colab Demo: [Open in Colab](https://colab.research.google.com/github/j-min/CLIP-Caption-Reward/blob/main/Inference_example.ipynb)
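
The checkpoint files can also be fetched programmatically. Below is a minimal sketch using `huggingface_hub`; note that the `repo_id` is a placeholder assumption, so substitute this model's actual repository id before running:

```python
# Minimal sketch: download this repo's checkpoint files to a local directory,
# then point the CLIP-Caption-Reward inference code at that directory.
# NOTE: the repo_id below is a placeholder; replace it with this model's id.
from huggingface_hub import snapshot_download

checkpoint_dir = snapshot_download(repo_id="j-min/CLIP-Caption-Reward")
print(f"Checkpoint files downloaded to: {checkpoint_dir}")
```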
|
|
|
|
|
|
|
|
|
|
|
# Reference |
|
|
Please cite our paper if you use our models in your work:
|
|
|
|
|
|
|
|
```bibtex
@inproceedings{Cho2022CLIPReward,
  title     = {Fine-grained Image Captioning with CLIP Reward},
  author    = {Jaemin Cho and Seunghyun Yoon and Ajinkya Kale and Franck Dernoncourt and Trung Bui and Mohit Bansal},
  booktitle = {Findings of NAACL},
  year      = {2022}
}
```