data_files:
- split: train
  path: data/train-*
license: mit
---

# BLIFT: Behavior-LLaVA Instruction Fine-Tuning Dataset

Paper: [**Teaching Human Behavior Improves Content Understanding Abilities of VLMs**](https://openreview.net/forum?id=TrKq4Wlwcz)

Website: [https://behavior-in-the-wild.github.io/behavior-llava.html](https://behavior-in-the-wild.github.io/behavior-llava.html)

---

## Dataset Summary

**BLIFT** (Behavior-LLaVA Instruction Fine-Tuning) is a large-scale multimodal instruction-tuning dataset designed to teach **Vision-Language Models (VLMs)** human behavior. It contains over **730k images and videos** collected from Reddit and YouTube, annotated with **receiver behavior** such as **comments, likes, views, and replay graphs**.

By modeling these downstream receiver behaviors, training on BLIFT improves the **content understanding** abilities of VLMs, yielding significant gains across 46 tasks in image, video, text, and audio understanding.

<img src="./bllava-fig_2.pdf" alt="bllava-fig" width="1000"/>

---

## Dataset Structure

Each sample in BLIFT includes:

| Field          | Type        | Description                               |
|----------------|-------------|-------------------------------------------|
| `permalink`    | `string`    | URL of the Reddit post                    |
| `url`          | `string`    | Media URL                                 |
| `title`        | `string`    | Title of the post or video                |
| `comments`     | `list[str]` | Top user comments (cleaned and filtered)  |
| `num_comments` | `int`       | Number of comments on the post            |
| `subreddit`    | `string`    | Subreddit source                          |
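The schema above can be sketched as a plain Python record with a small type check. This is an illustrative example, not code from the dataset itself: the field names come from the table, while the values and the `validate_sample` helper are made up.

```python
# Minimal sketch of one BLIFT record, matching the schema above.
# Field names follow the dataset card; the values are invented examples.
sample = {
    "permalink": "https://www.reddit.com/r/pics/comments/abc123/example_post/",
    "url": "https://i.redd.it/example.jpg",
    "title": "Example post title",
    "comments": ["Top comment text", "Second comment text"],
    "num_comments": 2,
    "subreddit": "pics",
}


def validate_sample(s: dict) -> bool:
    """Check that a record carries the field types listed in the schema table."""
    return (
        isinstance(s["permalink"], str)
        and isinstance(s["url"], str)
        and isinstance(s["title"], str)
        and isinstance(s["comments"], list)
        and all(isinstance(c, str) for c in s["comments"])
        and isinstance(s["num_comments"], int)
        and isinstance(s["subreddit"], str)
    )


print(validate_sample(sample))  # True for the record above
```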

---

## Data Sources

BLIFT combines high-quality behavioral data from two sources:

### Reddit

- Subreddits: `r/pics`, `r/videos`
- Collected: 400k images, 330k videos
- Metadata: Upvotes and top comments
- Filtering: NSFW content, bots, duplicates, and a minimum comment-quality threshold

### YouTube

- 250k videos from ~6,000 verified channels identified via Wikidata
- Metadata: Likes, views, top comments, and replay graphs
- Filtering: English-language only, minimum 10k views, NSFW removal, duplicates

<img src="./filtering-final.pdf" alt="filtering" width="1000"/>
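The YouTube filtering criteria above can be sketched as a single predicate. This is a hedged illustration of the described pipeline, not the authors' code: the `video_id`/`views`/`title` field names and the `is_english`/`is_nsfw` helpers are assumptions made for the example.

```python
# Hedged sketch of the YouTube filtering criteria described above.
# Field names and helper predicates are assumptions for illustration only.
MIN_VIEWS = 10_000  # "minimum 10k views" from the dataset card


def keep_video(meta: dict, is_english, is_nsfw, seen_ids: set) -> bool:
    """Apply the duplicate, popularity, language, and NSFW filters in order."""
    if meta["video_id"] in seen_ids:   # drop duplicates
        return False
    if meta["views"] < MIN_VIEWS:      # drop low-view videos
        return False
    if not is_english(meta["title"]):  # keep English-language content only
        return False
    if is_nsfw(meta):                  # drop NSFW content
        return False
    seen_ids.add(meta["video_id"])
    return True


# Tiny demo with stub predicates that accept everything non-NSFW.
seen: set = set()
meta = {"video_id": "abc", "views": 50_000, "title": "A cooking tutorial"}
print(keep_video(meta, lambda t: True, lambda m: False, seen))  # True
```

A second call with the same `video_id` would return `False`, since the id is now in `seen`.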

---

## Benchmarks & Results

**Behavior-LLaVA**, a LLaMA-Vid model fine-tuned on BLIFT, outperforms the base LLaMA-Vid and other supervised baselines on:

- 46 tasks
- 26 benchmark datasets
- Image, video, audio, and text modalities

<img src="./radar_chart (1).pdf" alt="results" width="1000"/>

---

## 🔗 Citation

If you use BLIFT, please cite:

```bibtex
@article{singh2024teaching,
  title={Teaching Human Behavior Improves Content Understanding Abilities Of LLMs},
  author={Singh, Somesh and SI, Harini and Singla, Yaman K and Baths, Veeky and Shah, Rajiv Ratn and Chen, Changyou and Krishnamurthy, Balaji},
  journal={arXiv preprint arXiv:2405.00942},
  year={2024}
}
```

---

## Contact

Contact behavior-in-the-wild@googlegroups.com for questions and suggestions.