Upload 9 files
- .gitattributes +2 -0
- VQA_CL/README.md +119 -0
- VQA_CL/benchmark_info.json +22 -0
- VQA_CL/dataset_info.json +56 -0
- VQA_CL/scienceqa/scienceqa_test.json +0 -0
- VQA_CL/scienceqa/scienceqa_train.json +0 -0
- VQA_CL/vizwiz/vizwiz_test.json +0 -0
- VQA_CL/vizwiz/vizwiz_train.json +0 -0
- VQA_CL/vqav2/vqav2_test.json +3 -0
- VQA_CL/vqav2/vqav2_train.json +3 -0
.gitattributes
CHANGED
@@ -57,3 +57,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+VQA_CL/vqav2/vqav2_test.json filter=lfs diff=lfs merge=lfs -text
+VQA_CL/vqav2/vqav2_train.json filter=lfs diff=lfs merge=lfs -text
VQA_CL/README.md
ADDED
@@ -0,0 +1,119 @@
# VQA_CL Benchmark

## Introduction

The VQA_CL (Visual Question Answering in Continual Learning) benchmark evaluates a model's ability to answer visual questions in a continual learning setting: the model learns from a sequence of VQA datasets, corresponding to different visual domains or task types, without catastrophically forgetting previously learned knowledge. The benchmark assesses adaptability, knowledge retention, and forward transfer in evolving VQA scenarios.

## Datasets

Please download the images for each dataset using the links below. The JSON annotation files for the train/test splits are provided within this benchmark's directory structure, under the respective dataset subfolders.
| Image Source | Download Path |
| :----: | :----: |
| COCO | [train2014](http://images.cocodataset.org/zips/train2014.zip), [test2015](http://images.cocodataset.org/zips/test2015.zip), [val2014](http://images.cocodataset.org/zips/val2014.zip) |
| ScienceQA | [images](https://drive.google.com/drive/folders/1w8imCXWYn2LxajmGeGH_g5DaL2rabHev) |
| VizWiz | [train](https://vizwiz.cs.colorado.edu/VizWiz_final/images/train.zip), [val](https://vizwiz.cs.colorado.edu/VizWiz_final/images/val.zip), [test](https://vizwiz.cs.colorado.edu/VizWiz_final/images/test.zip) |

## Data Organization

The benchmark expects a specific directory structure for the images and JSON annotation files.

### JSON File Format

The JSON files (e.g., `scienceqa_train.json`, `vizwiz_test.json`, `vqav2_train.json`) are structured as a list of objects, each representing one VQA instance. They are located in their respective dataset subdirectories (e.g., `scienceqa/scienceqa_train.json`, `vqav2/vqav2_train.json`).

Each object typically contains:

* `"image"`: A string giving the relative path to the image file. **This path is relative to the directory containing the JSON file itself.** For instance, if `scienceqa_train.json` is in `benchmark/VQA_CL/scienceqa/`, an `image` path like `"./scienceqa/images/train/1/image.png"` resolves to `benchmark/VQA_CL/scienceqa/scienceqa/images/train/1/image.png`.
* `"conversations"`: A list of dialogue turns. Each turn is an object with:
  * `"from"`: A string indicating the source of the text, either `"human"` (questions) or `"gpt"` (answers).
  * `"value"`: A string containing the question or answer text. Questions may include an `<image>` token as a placeholder for the visual context.

**Example entry (from `vqav2_train.json`, located in `benchmark/VQA_CL/vqav2/`, so the image path below is relative to that directory):**
|
| 33 |
+
```json
|
| 34 |
+
[
|
| 35 |
+
{
|
| 36 |
+
"image": "./COCO2014/train2014/COCO_train2014_000000458752.jpg", // Path relative to benchmark/VQA_CL/vqav2/
|
| 37 |
+
"conversations": [
|
| 38 |
+
{
|
| 39 |
+
"from": "human",
|
| 40 |
+
"value": "<image>\nWhat is this photo taken looking through?\nAnswer the question using a single word or phrase."
|
| 41 |
+
},
|
| 42 |
+
{
|
| 43 |
+
"from": "gpt",
|
| 44 |
+
"value": "net"
|
| 45 |
+
}
|
| 46 |
+
// ... more conversation turns ...
|
| 47 |
+
]
|
| 48 |
+
}
|
| 49 |
+
// ... more VQA instances ...
|
| 50 |
+
]
|
| 51 |
+
```
|
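Because every `image` path is relative to its own JSON file, a loader should resolve paths against the JSON's directory. A minimal sketch (the function name is ours, not part of the benchmark):

```python
import json
from pathlib import Path

def load_vqa_split(json_path):
    """Load a VQA_CL annotation file and resolve each "image" path
    against the directory that contains the JSON file."""
    json_path = Path(json_path)
    with open(json_path, encoding="utf-8") as f:
        records = json.load(f)
    for rec in records:
        # e.g. "./COCO2014/train2014/..." joined onto .../vqav2/
        rec["image"] = str((json_path.parent / rec["image"]).resolve())
    return records
```

The same helper works for every dataset in the benchmark, since all three use the same path convention.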

### Directory Structure

It is recommended to organize the downloaded image data as follows. JSON files should be placed in their respective dataset subdirectories within `benchmark/VQA_CL/`.

```
|
| 58 |
+
benchmark/
|
| 59 |
+
└── VQA_CL/
|
| 60 |
+
├── README.md
|
| 61 |
+
├── scienceqa/ # Subdirectory for ScienceQA dataset
|
| 62 |
+
│ ├── scienceqa_train.json
|
| 63 |
+
│ ├── scienceqa_test.json
|
| 64 |
+
│ └── scienceqa/ # Image directory for ScienceQA, as referenced by paths in its JSONs
|
| 65 |
+
│ └── images/
|
| 66 |
+
│ ├── train/
|
| 67 |
+
│ │ ├── 1/
|
| 68 |
+
│ │ │ └── image.png
|
| 69 |
+
│ │ └── ... (other ScienceQA train images)
|
| 70 |
+
│ └── test/
|
| 71 |
+
│ ├── 5/
|
| 72 |
+
│ │ └── image.png
|
| 73 |
+
│ └── ... (other ScienceQA test images)
|
| 74 |
+
├── vqav2/ # Subdirectory for VQAv2 dataset
|
| 75 |
+
│ ├── vqav2_train.json
|
| 76 |
+
│ ├── vqav2_test.json
|
| 77 |
+
│ └── COCO2014/ # Image directory for COCO, as referenced by paths in VQAv2 JSONs
|
| 78 |
+
│ ├── train2014/
|
| 79 |
+
│ │ ├── COCO_train2014_000000458752.jpg
|
| 80 |
+
│ │ └── ... (other COCO train images)
|
| 81 |
+
│ └── val2014/
|
| 82 |
+
│ ├── COCO_val2014_000000262148.jpg
|
| 83 |
+
│ └── ... (other COCO val images)
|
| 84 |
+
└── vizwiz/ # Subdirectory for VizWiz dataset
|
| 85 |
+
├── vizwiz_train.json
|
| 86 |
+
├── vizwiz_test.json
|
| 87 |
+
└── VizWiz/ # Image directory for VizWiz, as referenced by paths in its JSONs
|
| 88 |
+
├── train/
|
| 89 |
+
│ ��── VizWiz_train_00000000.jpg
|
| 90 |
+
│ └── ... (other VizWiz train images)
|
| 91 |
+
├── val/
|
| 92 |
+
│ ├── VizWiz_val_00000000.jpg
|
| 93 |
+
│ └── ... (other VizWiz val images)
|
| 94 |
+
└── test/
|
| 95 |
+
└── ... (VizWiz test images, if applicable)
|
| 96 |
+
|
| 97 |
+
```
|

**Explanation:**

* The main JSON annotation files (e.g., `scienceqa_train.json`, `vqav2_train.json`) are located in dataset-specific subdirectories such as `benchmark/VQA_CL/scienceqa/` and `benchmark/VQA_CL/vqav2/`.
* The `image` paths within the JSON files (e.g., `"./scienceqa/images/train/1/image.png"` in `scienceqa_train.json`) are relative to the location of the JSON file itself.
* **COCO (for VQAv2):**
  * VQAv2 JSON files (e.g., `vqav2_train.json`) are in `benchmark/VQA_CL/vqav2/`.
  * Image paths like `"./COCO2014/train2014/..."` in these JSONs mean COCO images must be extracted into `benchmark/VQA_CL/vqav2/COCO2014/`: `train2014.zip` contents go into `benchmark/VQA_CL/vqav2/COCO2014/train2014/`, and `val2014.zip` contents into `benchmark/VQA_CL/vqav2/COCO2014/val2014/`.
* **ScienceQA:**
  * ScienceQA JSON files (e.g., `scienceqa_train.json`) are in `benchmark/VQA_CL/scienceqa/`.
  * Image paths like `"./scienceqa/images/train/..."` mean the downloaded `images` directory (containing the `train/` and `test/` subfolders) should be placed at `benchmark/VQA_CL/scienceqa/scienceqa/images/`.
* **VizWiz:**
  * VizWiz JSON files (e.g., `vizwiz_train.json`) are in `benchmark/VQA_CL/vizwiz/`.
  * Image paths like `"./VizWiz/train/..."` mean VizWiz images must be extracted into `benchmark/VQA_CL/vizwiz/VizWiz/`: `train.zip` into `benchmark/VQA_CL/vizwiz/VizWiz/train/`, `val.zip` into `benchmark/VQA_CL/vizwiz/VizWiz/val/`, and `test.zip` into `benchmark/VQA_CL/vizwiz/VizWiz/test/`.

Ensure the image paths in your local setup match this structure for the benchmark to function correctly.
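After extracting the archives, a quick sanity check can confirm that every referenced image resolves on disk. A minimal sketch (a hypothetical helper, not shipped with the benchmark):

```python
import json
from pathlib import Path

def missing_images(json_path):
    """Return the "image" paths in an annotation file that do not
    resolve to a file on disk (paths are relative to the JSON's directory)."""
    json_path = Path(json_path)
    records = json.loads(json_path.read_text(encoding="utf-8"))
    return [r["image"] for r in records
            if not (json_path.parent / r["image"]).is_file()]
```

Running this over each `*_train.json` / `*_test.json` before training catches extraction mistakes early.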
VQA_CL/benchmark_info.json
ADDED
@@ -0,0 +1,22 @@
{
    "VQA_CL": {
        "description": "VQA_CL benchmark",
        "orders": {
            "order1": [
                "vqav2",
                "vizwiz",
                "scienceqa"
            ],
            "order2": [
                "scienceqa",
                "vqav2",
                "vizwiz"
            ],
            "order3": [
                "vizwiz",
                "vqav2",
                "scienceqa"
            ]
        }
    }
}
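The `orders` entries define the task sequences for continual learning. A sketch of how a training driver might consume one order (the train/evaluate steps are placeholders, not benchmark APIs):

```python
import json

# Mirrors the benchmark_info.json content above.
benchmark_info = json.loads("""
{
  "VQA_CL": {
    "description": "VQA_CL benchmark",
    "orders": {
      "order1": ["vqav2", "vizwiz", "scienceqa"],
      "order2": ["scienceqa", "vqav2", "vizwiz"],
      "order3": ["vizwiz", "vqav2", "scienceqa"]
    }
  }
}
""")

order = benchmark_info["VQA_CL"]["orders"]["order1"]
for step, task in enumerate(order, start=1):
    # Train on the current task, then evaluate on every task seen so far
    # to measure forgetting.
    seen = order[:step]
    print(f"step {step}: train on {task}; evaluate on {seen}")
```

Evaluating on all previously seen tasks after each step is what distinguishes the continual-learning protocol from independent fine-tuning runs.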
VQA_CL/dataset_info.json
ADDED
@@ -0,0 +1,56 @@
{
    "vqav2": {
        "file_name": "vqav2/vqav2_train.json",
        "formatting": "sharegpt",
        "columns": {
            "images": "image",
            "messages": "conversations"
        },
        "split": "train"
    },
    "vqav2_test": {
        "file_name": "vqav2/vqav2_test.json",
        "formatting": "sharegpt",
        "columns": {
            "images": "image",
            "messages": "conversations"
        },
        "split": "test"
    },
    "vizwiz": {
        "file_name": "vizwiz/vizwiz_train.json",
        "formatting": "sharegpt",
        "columns": {
            "images": "image",
            "messages": "conversations"
        },
        "split": "train"
    },
    "vizwiz_test": {
        "file_name": "vizwiz/vizwiz_test.json",
        "formatting": "sharegpt",
        "columns": {
            "images": "image",
            "messages": "conversations"
        },
        "split": "test"
    },
    "scienceqa": {
        "file_name": "scienceqa/scienceqa_train.json",
        "formatting": "sharegpt",
        "columns": {
            "images": "image",
            "messages": "conversations"
        },
        "split": "train"
    },
    "scienceqa_test": {
        "file_name": "scienceqa/scienceqa_test.json",
        "formatting": "sharegpt",
        "columns": {
            "images": "image",
            "messages": "conversations"
        },
        "split": "test"
    }
}
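Each task name from an order maps to a train entry, and `<task>_test` to its test entry. A sketch of resolving the annotation files for one order (the naming convention is taken from the entries above; the dict is trimmed to the fields used here):

```python
# Entries mirror dataset_info.json above (only file_name retained).
dataset_info = {
    "vqav2":          {"file_name": "vqav2/vqav2_train.json"},
    "vqav2_test":     {"file_name": "vqav2/vqav2_test.json"},
    "vizwiz":         {"file_name": "vizwiz/vizwiz_train.json"},
    "vizwiz_test":    {"file_name": "vizwiz/vizwiz_test.json"},
    "scienceqa":      {"file_name": "scienceqa/scienceqa_train.json"},
    "scienceqa_test": {"file_name": "scienceqa/scienceqa_test.json"},
}

order = ["vqav2", "vizwiz", "scienceqa"]  # "order1" from benchmark_info.json

# task -> (train annotations, test annotations)
files = {task: (dataset_info[task]["file_name"],
                dataset_info[task + "_test"]["file_name"])
         for task in order}
```

These relative `file_name` paths are resolved against the `VQA_CL/` benchmark root.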
VQA_CL/scienceqa/scienceqa_test.json
ADDED
The diff for this file is too large to render.

VQA_CL/scienceqa/scienceqa_train.json
ADDED
The diff for this file is too large to render.

VQA_CL/vizwiz/vizwiz_test.json
ADDED
The diff for this file is too large to render.

VQA_CL/vizwiz/vizwiz_train.json
ADDED
The diff for this file is too large to render.
VQA_CL/vqav2/vqav2_test.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fdaa343ca547af94a92c53372af5ba36e642e99576475c35041bc7c95cf7255c
size 71639823
VQA_CL/vqav2/vqav2_train.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dfb7d25b4c339d518d200dd017a226d950145d1a4e8c3ba6ecfacc77fbfa0b64
size 118272622