nielsr (HF Staff) committed
Commit 8de6526 · verified · 1 Parent(s): c11763d

Enhance dataset card: Add project page, abstract, and `library_name` metadata


This Pull Request enhances the dataset card for TIP-I2V by incorporating key information for better discoverability and user experience:

- **Added `library_name: datasets` to the metadata**: This specifies the primary Hugging Face library used to interact with this dataset, improving its integration and searchability on the Hub.
- **Included a prominent link to the project page (`https://tip-i2v.github.io/`)**: This provides direct access to additional resources and information related to the project. The existing arXiv paper link is retained.
- **Added the full paper abstract as a new section**: This offers a comprehensive overview of the dataset's purpose, methodology, and contributions directly within the dataset card, making it easier for users to understand its value.

These updates aim to provide more complete and accessible documentation for the TIP-I2V dataset.
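Since the card now declares `library_name: datasets`, the dataset can be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming a `train` split and using streaming so the full 1.70 million records are not downloaded up front (the split name and the `UUID` field referenced in the comment come from the card's metadata; verify them against the dataset page):

```python
def preview_tip_i2v(n: int = 3):
    """Stream the first n records of TIP-I2V without a full download.

    Assumes a "train" split; adjust if the repo exposes different splits.
    """
    from datasets import load_dataset  # Hugging Face `datasets` library

    ds = load_dataset("WenhaoWang/TIP-I2V", split="train", streaming=True)
    # zip() stops after n records, so only a small prefix is fetched.
    return [record for _, record in zip(range(n), iter(ds))]

# Example usage (requires network access):
# rows = preview_tip_i2v()
# print(rows[0]["UUID"])  # the card's feature list includes a UUID field
```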

Files changed (1): README.md (+7 −2)
README.md CHANGED

@@ -9,6 +9,8 @@ task_categories:
 - text-to-video
 - text-to-image
 - image-to-image
+pretty_name: TIP-I2V
+library_name: datasets
 dataset_info:
   features:
   - name: UUID
@@ -52,14 +54,18 @@ tags:
 - text-to-video
 - visual-generation
 - video-generation
-pretty_name: TIP-I2V
 ---
 
 # Summary
 This is the dataset proposed in our paper [**TIP-I2V: A Million-Scale Real Text and Image Prompt Dataset for Image-to-Video Generation**](https://arxiv.org/abs/2411.04709).
 
+[Project page](https://tip-i2v.github.io/) | [Paper](https://arxiv.org/abs/2411.04709)
+
 TIP-I2V is the first dataset comprising over 1.70 million unique user-provided text and image prompts. Besides the prompts, TIP-I2V also includes videos generated by five state-of-the-art image-to-video models (Pika, Stable Video Diffusion, Open-Sora, I2VGen-XL, and CogVideoX-5B). The TIP-I2V contributes to the development of better and safer image-to-video models.
 
+# Abstract
+Video generation models are revolutionizing content creation, with image-to-video models drawing increasing attention due to their enhanced controllability, visual consistency, and practical applications. However, despite their popularity, these models rely on user-provided text and image prompts, and there is currently no dedicated dataset for studying these prompts. In this paper, we introduce TIP-I2V, the first large-scale dataset of over 1.70 million unique user-provided Text and Image Prompts specifically for Image-to-Video generation. Additionally, we provide the corresponding generated videos from five state-of-the-art image-to-video models. We begin by outlining the time-consuming and costly process of curating this large-scale dataset. Next, we compare TIP-I2V to two popular prompt datasets, VidProM (text-to-video) and DiffusionDB (text-to-image), highlighting differences in both basic and semantic information. This dataset enables advancements in image-to-video research. For instance, to develop better models, researchers can use the prompts in TIP-I2V to analyze user preferences and evaluate the multi-dimensional performance of their trained models; and to enhance model safety, they may focus on addressing the misinformation issue caused by image-to-video models. The new research inspired by TIP-I2V and the differences with existing datasets emphasize the importance of a specialized image-to-video prompt dataset. The project is available at this https URL.
+
 <p align="center">
   <img src="https://huggingface.co/datasets/WenhaoWang/TIP-I2V/resolve/main/assets/teasor.png" width="1000">
 </p>
@@ -197,7 +203,6 @@ hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="eval_videos_tar/cog_vide
 Click the [WizMap (TIP-I2V VS VidProM)](https://poloclub.github.io/wizmap/?dataURL=https%3A%2F%2Fhuggingface.co%2Fdatasets%2FWenhaoWang%2FTIP-I2V%2Fresolve%2Fmain%2Ftip-i2v-visualize%2Fdata_tip-i2v_vidprom.ndjson&gridURL=https%3A%2F%2Fhuggingface.co%2Fdatasets%2FWenhaoWang%2FTIP-I2V%2Fresolve%2Fmain%2Ftip-i2v-visualize%2Fgrid_tip-i2v_vidprom.json) and [WizMap (TIP-I2V VS DiffusionDB)](https://poloclub.github.io/wizmap/?dataURL=https%3A%2F%2Fhuggingface.co%2Fdatasets%2FWenhaoWang%2FTIP-I2V%2Fresolve%2Fmain%2Ftip-i2v-visualize%2Fdata_tip-i2v_diffusiondb.ndjson&gridURL=https%3A%2F%2Fhuggingface.co%2Fdatasets%2FWenhaoWang%2FTIP-I2V%2Fresolve%2Fmain%2Ftip-i2v-visualize%2Fgrid_tip-i2v_diffusiondb.json)
 (wait for 5 seconds) for an interactive visualization of our 1.70 million prompts.
 
-
 # License
 
 The prompts and videos in our TIP-I2V are licensed under the [CC BY-NC 4.0 license](https://creativecommons.org/licenses/by-nc/4.0/deed.en).
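The context line of the last hunk shows the card already downloading evaluation-video archives with `hf_hub_download`. A hedged sketch of that pattern, for completeness (the exact tar filename is not reproduced here — the `eval_videos_tar/cog_vide…` path in the hunk header is truncated, so the argument below is left for the reader to fill in from the dataset page):

```python
def fetch_eval_tar(filename: str) -> str:
    """Download one file from the TIP-I2V dataset repo; return its local cache path.

    `filename` must be a real path inside the repo, e.g. one of the archives
    under eval_videos_tar/ listed on the dataset page.
    """
    from huggingface_hub import hf_hub_download  # Hugging Face Hub client

    return hf_hub_download(
        repo_id="WenhaoWang/TIP-I2V",
        filename=filename,
        repo_type="dataset",  # required: this is a dataset repo, not a model
    )

# Example usage (requires network access and a valid in-repo filename):
# path = fetch_eval_tar("eval_videos_tar/<archive-name>.tar")
```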