Add link to paper, task category #2
opened by nielsr (HF Staff)
README.md CHANGED

```diff
@@ -1,5 +1,7 @@
 ---
 license: mit
+task_categories:
+- image-text-to-text
 ---
 
 # Visual Haystacks Dataset Card
@@ -31,9 +33,11 @@ license: mit
 
 4. Please check out our [project page](https://visual-haystacks.github.io) for more information. You can also send questions or comments about the model to [our github repo](https://github.com/visual-haystacks/vhs_benchmark/issues).
 
+This dataset was presented in the paper [Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark](https://huggingface.co/papers/2407.13766).
+
 5. This is the updated VHs dataset, enhanced for greater diversity and balance. The original dataset can be found at [tsunghanwu/visual_haystacks_v0](https://huggingface.co/datasets/tsunghanwu/visual_haystacks_v0).
 
 ## Intended use
 Primary intended uses: The primary use of VHs is research on large multimodal models and chatbots.
 
 Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
```