mderakhshani and nielsr (HF Staff) committed
Commit 95daae6 · verified · 1 Parent(s): 1c45297

Add paper link + abstract (#1)

- Add paper link + abstract (9dd196061c352debf90947f13d57acad601bb267)


Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +11 -6
README.md CHANGED
@@ -1,14 +1,14 @@
 ---
+language:
+- en
 license: cc-by-nc-4.0
+size_categories:
+- 1K<n<10K
 task_categories:
 - image-to-text
 - text-retrieval
 - text-to-image
-language:
-- en
 pretty_name: LONG-DCI
-size_categories:
-- 1K<n<10K
 ---
 
 ## News
@@ -16,16 +16,21 @@ We are excited to introduce **Long-DCI**, a new benchmark for long-caption image
 
 ---
 
+### Abstract
+We address the challenge of representing long captions in vision-language models, such as CLIP. By design, these models are limited by fixed, absolute positional encodings, restricting inputs to a maximum of 77 tokens and hindering performance on tasks requiring longer descriptions. Although recent work has attempted to overcome this limit, the proposed approaches struggle to model token relationships over longer distances and simply extend to a new, but still fixed, token length. Instead, we propose a generalizable method, named TULIP, able to upgrade the token length to any length for CLIP-like models. We do so by improving the architecture with relative position encodings, followed by a training procedure that (i) distills the original CLIP text encoder into an encoder with relative position encodings and (ii) enhances the model for aligning longer captions with images. By effectively encoding captions longer than the default 77 tokens, our model outperforms baselines on cross-modal tasks such as retrieval and text-to-image generation. The code repository is available at https://github.com/ivonajdenkoska/tulip.
+
+---
+
 ### Long-DCI Dataset Card
 
 #### **Dataset Details**
 - **Dataset Type**: Long-DCI includes 7,805 image-caption pairs.
 - **Download Datasets**: To download the DCI dataset, please follow the instructions provided in the [DCI GitHub repository](https://github.com/facebookresearch/DCI?tab=readme-ov-file#dataset-download). Once downloaded, you can use our CSV file for evaluation purposes.
 - **More Information**:
-  - [Paper](https://arxiv.org/pdf/2410.10034)
+  - [Paper](https://huggingface.co/papers/2410.10034)
   - [Code](https://github.com/ivonajdenkoska/tulip)
 - **License**: Attribution-NonCommercial 4.0 International, in compliance with MetaAI’s policy.
 
 #### **Intended Use**
 - **Primary Purpose**: Long-DCI is designed for research on cross-modal retrieval.
 - **Target Audience**: This dataset is tailored for researchers and enthusiasts in computer vision, natural language processing, machine learning, and artificial intelligence.
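
The abstract's central idea, replacing CLIP's fixed absolute positional encodings with relative ones so the text encoder can handle sequences longer than 77 tokens, can be sketched roughly as follows. This is an illustrative toy, not the actual TULIP implementation: the function names and the clipped-offset bias table are assumptions chosen for brevity, and a real model would use multi-head attention with learned projections.

```python
import numpy as np

def relative_position_bias(seq_len, max_distance, table):
    """Look up a learned bias for each (query, key) offset, clipping
    offsets beyond max_distance so any sequence length is supported."""
    pos = np.arange(seq_len)
    rel = pos[None, :] - pos[:, None]             # offsets in [-(L-1), L-1]
    rel = np.clip(rel, -max_distance, max_distance) + max_distance
    return table[rel]                             # (seq_len, seq_len)

def attention_with_rel_bias(q, k, v, table, max_distance=8):
    """Scaled dot-product attention plus a relative position bias.
    Because the bias depends only on clipped pairwise offsets, the same
    learned table applies to sequences of any length, unlike an absolute
    positional embedding table with a hard maximum length."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores = scores + relative_position_bias(q.shape[0], max_distance, table)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
table = rng.normal(size=17) * 0.02                # 2 * max_distance + 1 entries
q = rng.normal(size=(100, 32))                    # 100 tokens: past CLIP's 77
out = attention_with_rel_bias(q, q, q, table)
print(out.shape)                                  # (100, 32)
```

Note that the bias table has a fixed size regardless of sequence length, which is what lets a model trained this way be evaluated on longer captions such as those in Long-DCI.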