Add image-text-to-text as task category

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md (+10 −10)
README.md CHANGED
````diff
@@ -1,24 +1,24 @@
 ---
-license: apache-2.0
-task_categories:
-- visual-question-answering
-- question-answering
 language:
 - en
 - zh
+license: apache-2.0
 size_categories:
 - 1K<n<10K
-
+task_categories:
+- visual-question-answering
+- question-answering
+- image-text-to-text
 configs:
 - config_name: DynVQA_en
-  data_files:
+  data_files:
   - split: test
-    path: "test/DynVQA_en/DynVQA_en.202412.jsonl"
+    path: test/DynVQA_en/DynVQA_en.202412.jsonl
   default: true
 - config_name: DynVQA_zh
-  data_files:
+  data_files:
   - split: test
-    path: "test/DynVQA_zh/DynVQA_zh.202412.jsonl"
+    path: test/DynVQA_zh/DynVQA_zh.202412.jsonl
 ---
 
 # 📚 Dyn-VQA Dataset
@@ -53,4 +53,4 @@ The json item of Dyn-VQA dataset is organized in the following format:
 url={https://arxiv.org/abs/2411.02937},
 }
 ```
-When citing our work, please kindly consider citing the original papers. The relevant citation information is listed here.
+When citing our work, please kindly consider citing the original papers. The relevant citation information is listed here.
````
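The `configs` block in the card's YAML frontmatter is what maps each named configuration (`DynVQA_en`, `DynVQA_zh`) to its backing JSONL file and split. A minimal sketch of how that mapping can be read, parsing the post-merge frontmatter with PyYAML (the frontmatter string is copied from the diff above; the parsing code itself is illustrative, not part of the PR):

```python
import yaml

# Dataset-card frontmatter as it reads after this PR is merged.
card = """
language:
- en
- zh
license: apache-2.0
size_categories:
- 1K<n<10K
task_categories:
- visual-question-answering
- question-answering
- image-text-to-text
configs:
- config_name: DynVQA_en
  data_files:
  - split: test
    path: test/DynVQA_en/DynVQA_en.202412.jsonl
  default: true
- config_name: DynVQA_zh
  data_files:
  - split: test
    path: test/DynVQA_zh/DynVQA_zh.202412.jsonl
"""

meta = yaml.safe_load(card)

# Map each config name to the path of its (single) test split.
paths = {c["config_name"]: c["data_files"][0]["path"] for c in meta["configs"]}
print(paths["DynVQA_en"])  # test/DynVQA_en/DynVQA_en.202412.jsonl

# The new task category added by this PR is now part of the card metadata.
print("image-text-to-text" in meta["task_categories"])  # True
```

On the Hub, this same frontmatter is what `datasets.load_dataset` consults when a config name such as `DynVQA_en` is requested, with `default: true` selecting the English config when none is given.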