Update README.md

README.md CHANGED
@@ -39,7 +39,7 @@ Website: [https://behavior-in-the-wild.github.io/behavior-llava.html](https://be
 
 By modeling these downstream receiver behaviors, training on BLIFT improves **content understanding** of VLMs, showing significant improvements across 46 tasks in image, video, text, and audio understanding.
 
-<img src="./bllava-fig_2.
+<img src="./bllava-fig_2.png" alt="bllava-fig" width="1000"/>
 
 ---
 
@@ -75,7 +75,7 @@ BLIFT combines high-quality behavioral data from two sources:
 - Metadata: Likes, views, top comments, replay graphs
 - Filtering: English language, minimum 10k views, NSFW, duplicates
 
-<img src="./filtering-final.
+<img src="./filtering-final.png" alt="filtering" width="1000"/>
 
 ---
 
@@ -89,7 +89,7 @@ Using BLIFT to train **Behavior-LLaVA** (a fine-tuned LLaMA-Vid), the model outp
 - 26 benchmark datasets
 - Across image, video, audio, and text modalities
 
-<img src="./radar_chart (1).
+<img src="./radar_chart (1).png" alt="results" width="1000"/>
 
 
 ---
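The filtering criteria listed in the second hunk (English only, minimum 10k views, no NSFW, no duplicates) could be sketched as a simple predicate over per-video metadata. This is a hypothetical illustration, not the repository's actual pipeline; the function name and metadata fields (`language`, `views`, `nsfw`, `id`) are assumptions.

```python
def keep_video(meta, seen_ids):
    """Apply the BLIFT-style filters: English only, >=10k views,
    no NSFW content, and no duplicate video IDs.
    Field names here are hypothetical, not the authors' schema."""
    if meta["language"] != "en":
        return False
    if meta["views"] < 10_000:
        return False
    if meta["nsfw"]:
        return False
    if meta["id"] in seen_ids:  # drop duplicates
        return False
    seen_ids.add(meta["id"])
    return True

videos = [
    {"id": "a", "language": "en", "views": 50_000, "nsfw": False},
    {"id": "b", "language": "fr", "views": 90_000, "nsfw": False},  # non-English
    {"id": "a", "language": "en", "views": 50_000, "nsfw": False},  # duplicate
]
seen = set()
kept = [v for v in videos if keep_video(v, seen)]
# kept retains only the first video
```

Passing the `seen_ids` set in explicitly keeps the deduplication state visible to the caller, so the same predicate can be reused across batches without hidden globals.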