abhiram4572 committed ce76c14 (verified; parent 7840fe6): Update README.md
  download_size: 1573738795
  dataset_size: 1880924216.0
---
# MMCricBench 🏏
**Multimodal Cricket Scorecard Benchmark for VQA (Evaluation-Only)**

MMCricBench evaluates **Large Vision-Language Models (LVLMs)** on **numerical reasoning**, **cross-lingual understanding**, and **multi-image reasoning** over semi-structured cricket scorecard images. It includes English and Hindi scorecards; all questions and answers are in English.

**Dataset:** https://huggingface.co/datasets/DIALab/MMCricBench  
**Status:** Evaluation-only (**no train/val splits**)

---

## Overview
- **Images:** 1,463 synthetic scorecards (PNG)
  - 822 single-image scorecards
  - 641 multi-image scorecards
- **QA pairs:** 1,500 (English)
- **Reasoning categories:**
  - **C1** – Direct retrieval & simple inference
  - **C2** – Basic arithmetic & conditional logic
  - **C3** – Multi-step quantitative reasoning (often across images)

---

## Files / Splits
We provide two evaluation splits:
- `test_single` — single-image questions
- `test_multi` — multi-image questions

> If you keep a single JSONL (e.g., `test_all.jsonl`), use a **list** for `images` in every row. Single-image rows should have a one-element list. On the Hub, we expose two test splits.

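The single-JSONL convention above can be enforced when reading the file; a minimal sketch, assuming a local `test_all.jsonl` laid out as described (the `load_rows` helper is illustrative, not part of the dataset tooling):

```python
import json

def load_rows(path):
    """Read a MMCricBench-style JSONL file, normalizing `images` to a list."""
    rows = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            row = json.loads(line)
            # Tolerate rows where `images` was written as a bare string.
            if isinstance(row["images"], str):
                row["images"] = [row["images"]]
            rows.append(row)
    return rows
```

With this in place, downstream code can always iterate `row["images"]` uniformly, whether a row came from the single-image or multi-image portion.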
---

## Data Schema
Each row is a JSON object:

| Field      | Type                        | Description                           |
|------------|-----------------------------|---------------------------------------|
| `id`       | `string`                    | Unique identifier                     |
| `images`   | `list[string]`              | Paths to one or more scorecard images |
| `question` | `string`                    | Question text (English)               |
| `answer`   | `string`                    | Ground-truth answer (canonicalized)   |
| `category` | `string` (`C1`/`C2`/`C3`)   | Reasoning category                    |
| `subset`   | `string` (`single`/`multi`) | Optional convenience field            |

**Example (single-image):**
```json
{"id":"english-single-9","images":["English-apr/single_image/1198246_2innings_with_color1.png"],"question":"Which bowler has conceded the most extras?","answer":"Wahab Riaz","category":"C2","subset":"single"}
```
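Rows can be sanity-checked against this schema before evaluation; a small sketch (the `check_row` helper and its messages are illustrative, not part of the dataset tooling):

```python
# Required fields and their Python types, per the schema table above.
REQUIRED = {"id": str, "images": list, "question": str, "answer": str, "category": str}
CATEGORIES = {"C1", "C2", "C3"}

def check_row(row):
    """Return a list of schema problems for one row (empty list = OK)."""
    problems = []
    for field, typ in REQUIRED.items():
        if field not in row:
            problems.append(f"missing field: {field}")
        elif not isinstance(row[field], typ):
            problems.append(f"{field}: expected {typ.__name__}, got {type(row[field]).__name__}")
    if row.get("category") not in CATEGORIES:
        problems.append("category must be one of C1/C2/C3")
    if isinstance(row.get("images"), list) and not row["images"]:
        problems.append("images must be non-empty")
    return problems
```

The example row above passes this check; a row with a stray category such as `"C4"` would be flagged.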

## Loading & Preview

### Load from the Hub (two-split layout)
```python
from datasets import load_dataset
from IPython.display import display

# Loads: DatasetDict({'test_single': ..., 'test_multi': ...})
ds = load_dataset("DIALab/MMCricBench")
print(ds)

# Peek a single-image example
ex = ds["test_single"][0]
print(ex["id"])
print(ex["question"], "->", ex["answer"])

# Preview images (each example stores a list of PIL images)
for img in ex["images"]:
    display(img)
```

## Baseline Results (from the paper)

Accuracy (%) on MMCricBench by split and language.

| Model         | #Params | Single-EN (Avg) | Single-HI (Avg) | Multi-EN (Avg) | Multi-HI (Avg) |
|---------------|:-------:|:---------------:|:---------------:|:--------------:|:--------------:|
| SmolVLM       | 500M    | 19.2            | 19.0            | 11.8           | 11.6           |
| Qwen2.5VL     | 3B      | 40.2            | 33.3            | 31.2           | 22.0           |
| LLaVA-NeXT    | 7B      | 28.3            | 26.6            | 16.2           | 14.8           |
| mPLUG-DocOwl2 | 8B      | 20.7            | 19.9            | 15.2           | 14.4           |
| Qwen2.5VL     | 7B      | 49.1            | 42.6            | 37.0           | 32.2           |
| InternVL-2    | 8B      | 29.4            | 23.4            | 18.6           | 18.2           |
| Llama-3.2-V   | 11B     | 27.3            | 24.8            | 26.2           | 20.4           |
| **GPT-4o**    | —       | **57.3**        | **45.1**        | **50.6**       | **43.6**       |

*Numbers are exact-match accuracy (higher is better). For C1/C2/C3 breakdowns, see Table 3 (single-image) and Table 5 (multi-image) in the paper.*
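Exact-match scoring of this kind takes only a few lines; a minimal sketch, assuming case- and whitespace-insensitive matching (the normalization rule and function names are assumptions, not the paper's official scorer):

```python
from collections import defaultdict

def normalize(text):
    """Light canonicalization before exact match (assumed: lowercase, collapse whitespace)."""
    return " ".join(str(text).strip().lower().split())

def exact_match_by_category(examples, predictions):
    """Exact-match accuracy (%) per reasoning category, plus an overall score."""
    correct, total = defaultdict(int), defaultdict(int)
    for ex, pred in zip(examples, predictions):
        cat = ex["category"]
        total[cat] += 1
        if normalize(pred) == normalize(ex["answer"]):
            correct[cat] += 1
    scores = {c: 100.0 * correct[c] / total[c] for c in total}
    scores["overall"] = 100.0 * sum(correct.values()) / sum(total.values())
    return scores
```

Pairing this with the `test_single` and `test_multi` splits loaded above yields per-split, per-category numbers comparable in shape to the table.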