| question (string) | answer (string) | image (string) | width (int64) | height (int64) | bboxs (list) | dataset (string) | split (string) | prompt (string) | target (list) | metadata (dict) |
|---|---|---|---|---|---|---|---|---|---|---|
| what does it say at the top of each poster? | dunk | /content/drive/MyDrive/TextVQA_Dataset/train_images/142d9314847fca4a.jpg | 512 | 341 | [[42, 57, 181, 98]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what does it say at the top of each poster? Show reasoning in <think> tags. Return bounding box in <answe... | [[42, 57, 181, 98]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/142d9314847fca4a.jpg", "original_size": [512, 341]} |
| what type of league is this team? | softball | /content/drive/MyDrive/TextVQA_Dataset/train_images/f93fc430aa92d6ae.jpg | 408 | 512 | [[163, 202, 218, 214]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what type of league is this team? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1... | [[163, 202, 218, 214]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/f93fc430aa92d6ae.jpg", "original_size": [408, 512]} |
| what will the cashier deduct? | 20% | /content/drive/MyDrive/TextVQA_Dataset/train_images/6737828d503e5959.jpg | 512 | 382 | [[144, 59, 249, 99]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what will the cashier deduct? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2... | [[144, 59, 249, 99]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/6737828d503e5959.jpg", "original_size": [512, 382]} |
| what team uniform is the player wearing? | omaha | /content/drive/MyDrive/TextVQA_Dataset/train_images/cda2b782a8781a83.jpg | 512 | 340 | [[161, 115, 210, 144]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what team uniform is the player wearing? Show reasoning in <think> tags. Return bounding box in <answer> ... | [[161, 115, 210, 144]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/cda2b782a8781a83.jpg", "original_size": [512, 340]} |
| what is the email listed? | calenti@terra.es | /content/drive/MyDrive/TextVQA_Dataset/train_images/0ba0b4ff683cfaec.jpg | 384 | 512 | [[100, 451, 297, 473]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the email listed? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2... | [[100, 451, 297, 473]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/0ba0b4ff683cfaec.jpg", "original_size": [384, 512]} |
| what word is written in the middle of the second cup from the right? | phoblacht | /content/drive/MyDrive/TextVQA_Dataset/train_images/1fea660600b21118.jpg | 512 | 341 | [[237, 114, 344, 145]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what word is written in the middle of the second cup from the right? Show reasoning in <think> tags. Retu... | [[237, 114, 344, 145]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/1fea660600b21118.jpg", "original_size": [512, 341]} |
| what letter is in the bottom right corner? | m | /content/drive/MyDrive/TextVQA_Dataset/train_images/6a5d7ab3a703878b.jpg | 419 | 512 | [[360, 456, 390, 474]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what letter is in the bottom right corner? Show reasoning in <think> tags. Return bounding box in <answer... | [[360, 456, 390, 474]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/6a5d7ab3a703878b.jpg", "original_size": [419, 512]} |
| what is the name on the sign? | google | /content/drive/MyDrive/TextVQA_Dataset/train_images/632f00f6f6bbe25b.jpg | 512 | 341 | [[203, 146, 273, 199]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the name on the sign? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2... | [[203, 146, 273, 199]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/632f00f6f6bbe25b.jpg", "original_size": [512, 341]} |
| what date is shown on this bottle? | 2010 | /content/drive/MyDrive/TextVQA_Dataset/train_images/083ea2d88ee71e24.jpg | 220 | 512 | [[99, 377, 116, 386]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what date is shown on this bottle? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y... | [[99, 377, 116, 386]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/083ea2d88ee71e24.jpg", "original_size": [220, 512]} |
| what year was the wine bottled? | 2007 | /content/drive/MyDrive/TextVQA_Dataset/train_images/050c0066b1ed14bd.jpg | 165 | 512 | [[62, 347, 101, 358]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what year was the wine bottled? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, ... | [[62, 347, 101, 358]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/050c0066b1ed14bd.jpg", "original_size": [165, 512]} |
| what company made this product? | infinity | /content/drive/MyDrive/TextVQA_Dataset/train_images/c1f4f8f71fb20f1c.jpg | 512 | 384 | [[108, 27, 187, 65]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what company made this product? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, ... | [[108, 27, 187, 65]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/c1f4f8f71fb20f1c.jpg", "original_size": [512, 384]} |
| what year is on this poster? | 2009 | /content/drive/MyDrive/TextVQA_Dataset/train_images/00e2e61cc0b8b5f5.jpg | 364 | 512 | [[236, 24, 353, 61]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what year is on this poster? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2,... | [[236, 24, 353, 61]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/00e2e61cc0b8b5f5.jpg", "original_size": [364, 512]} |
| what is the bib number on the first runner in blue? | 235 | /content/drive/MyDrive/TextVQA_Dataset/train_images/c69ac316e8979c69.jpg | 512 | 341 | [[336, 179, 366, 194]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the bib number on the first runner in blue? Show reasoning in <think> tags. Return bounding box i... | [[336, 179, 366, 194]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/c69ac316e8979c69.jpg", "original_size": [512, 341]} |
| what year is written on this whiteboard? | 2014 | /content/drive/MyDrive/TextVQA_Dataset/train_images/1da38d2b8f4905d1.jpg | 512 | 287 | [[232, 208, 283, 243]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what year is written on this whiteboard? Show reasoning in <think> tags. Return bounding box in <answer> ... | [[232, 208, 283, 243]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/1da38d2b8f4905d1.jpg", "original_size": [512, 287]} |
| what number is on the white jersey? | 23 | /content/drive/MyDrive/TextVQA_Dataset/train_images/64e09af4781bdc21.jpg | 417 | 512 | [[184, 236, 228, 270]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what number is on the white jersey? Show reasoning in <think> tags. Return bounding box in <answer> [x1, ... | [[184, 236, 228, 270]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/64e09af4781bdc21.jpg", "original_size": [417, 512]} |
| what brand of soda is in the can? | pepsi | /content/drive/MyDrive/TextVQA_Dataset/train_images/cd601b76506905fe.jpg | 512 | 512 | [[175, 298, 366, 372]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what brand of soda is in the can? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1... | [[175, 298, 366, 372]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/cd601b76506905fe.jpg", "original_size": [512, 512]} |
| what is the name of the simulator? | venturer | /content/drive/MyDrive/TextVQA_Dataset/train_images/900786edd95092d8.jpg | 512 | 384 | [[229, 143, 306, 188]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the name of the simulator? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y... | [[229, 143, 306, 188]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/900786edd95092d8.jpg", "original_size": [512, 384]} |
| what brand is the monitor? | dell | /content/drive/MyDrive/TextVQA_Dataset/train_images/6c4cfd697b44b706.jpg | 512 | 384 | [[335, 230, 354, 238]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what brand is the monitor? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y... | [[335, 230, 354, 238]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/6c4cfd697b44b706.jpg", "original_size": [512, 384]} |
| what type of battery does the phone have? | li-ion | /content/drive/MyDrive/TextVQA_Dataset/train_images/ac0f70575202aa3d.jpg | 382 | 512 | [[108, 356, 135, 374]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what type of battery does the phone have? Show reasoning in <think> tags. Return bounding box in <answer>... | [[108, 356, 135, 374]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/ac0f70575202aa3d.jpg", "original_size": [382, 512]} |
| who is sitting on the far right? | federico moran | /content/drive/MyDrive/TextVQA_Dataset/train_images/ce7b52f575e48ebe.jpg | 512 | 340 | [[243, 285, 261, 298]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>who is sitting on the far right? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1,... | [[243, 285, 261, 298]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/ce7b52f575e48ebe.jpg", "original_size": [512, 340]} |
| how should you handle this container? | with care | /content/drive/MyDrive/TextVQA_Dataset/train_images/ce5b24ef4022262b.jpg | 512 | 341 | [[164, 276, 178, 288]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>how should you handle this container? Show reasoning in <think> tags. Return bounding box in <answer> [x1... | [[164, 276, 178, 288]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/ce5b24ef4022262b.jpg", "original_size": [512, 341]} |
| what is the number of the far left player? | 10 | /content/drive/MyDrive/TextVQA_Dataset/train_images/45328afeded4b5f3.jpg | 512 | 341 | [[32, 188, 53, 209]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the number of the far left player? Show reasoning in <think> tags. Return bounding box in <answer... | [[32, 188, 53, 209]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/45328afeded4b5f3.jpg", "original_size": [512, 341]} |
| who is the next legend, according to the sign? | you | /content/drive/MyDrive/TextVQA_Dataset/train_images/8ef243ffd0d8401c.jpg | 512 | 384 | [[216, 159, 236, 170]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>who is the next legend, according to the sign? Show reasoning in <think> tags. Return bounding box in <an... | [[216, 159, 236, 170]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/8ef243ffd0d8401c.jpg", "original_size": [512, 384]} |
| who makes that person's shirt? | jako | /content/drive/MyDrive/TextVQA_Dataset/train_images/90503c576ce5f251.jpg | 341 | 512 | [[121, 477, 202, 501]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>who makes that person's shirt? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x... | [[121, 477, 202, 501]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/90503c576ce5f251.jpg", "original_size": [341, 512]} |
| which key is above the sd hc logo? | ctrl | /content/drive/MyDrive/TextVQA_Dataset/train_images/ae08a7dedc3289e1.jpg | 384 | 512 | [[302, 299, 324, 310]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>which key is above the sd hc logo? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y... | [[302, 299, 324, 310]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/ae08a7dedc3289e1.jpg", "original_size": [384, 512]} |
| what is the drink in the cup? | songkran | /content/drive/MyDrive/TextVQA_Dataset/train_images/52ad9c2c94904260.jpg | 384 | 512 | [[76, 178, 292, 245]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the drink in the cup? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2... | [[76, 178, 292, 245]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/52ad9c2c94904260.jpg", "original_size": [384, 512]} |
| what is the title of the book? | for better or worse | /content/drive/MyDrive/TextVQA_Dataset/train_images/29d79e36fbccdc97.jpg | 384 | 512 | [[73, 152, 103, 162]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the title of the book? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x... | [[73, 152, 103, 162]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/29d79e36fbccdc97.jpg", "original_size": [384, 512]} |
| what is the company on this plane? | beechcraft | /content/drive/MyDrive/TextVQA_Dataset/train_images/00b4190b562b62eb.jpg | 512 | 341 | [[261, 43, 404, 91]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the company on this plane? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y... | [[261, 43, 404, 91]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/00b4190b562b62eb.jpg", "original_size": [512, 341]} |
| what kind of jam is in the jars? | onion | /content/drive/MyDrive/TextVQA_Dataset/train_images/4962490d4203ee83.jpg | 512 | 358 | [[157, 100, 203, 112]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what kind of jam is in the jars? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1,... | [[157, 100, 203, 112]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/4962490d4203ee83.jpg", "original_size": [512, 358]} |
| who do you donate to? | the salvation army | /content/drive/MyDrive/TextVQA_Dataset/train_images/9b4fbfc458b2bf22.jpg | 384 | 512 | [[153, 274, 171, 289]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>who do you donate to? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </... | [[153, 274, 171, 289]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/9b4fbfc458b2bf22.jpg", "original_size": [384, 512]} |
| what isn't the object in the window, particularly? | moon | /content/drive/MyDrive/TextVQA_Dataset/train_images/7ef1505e480043d4.jpg | 512 | 384 | [[270, 96, 406, 156]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what isn't the object in the window, particularly? Show reasoning in <think> tags. Return bounding box in... | [[270, 96, 406, 156]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/7ef1505e480043d4.jpg", "original_size": [512, 384]} |
| what's the author's first name? | peter | /content/drive/MyDrive/TextVQA_Dataset/train_images/f80d7cf9b0152949.jpg | 337 | 512 | [[22, 11, 231, 105]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what's the author's first name? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, ... | [[22, 11, 231, 105]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/f80d7cf9b0152949.jpg", "original_size": [337, 512]} |
| what team does this man play for? | blue jays | /content/drive/MyDrive/TextVQA_Dataset/train_images/798f065dba647326.jpg | 512 | 489 | [[212, 152, 263, 175]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what team does this man play for? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1... | [[212, 152, 263, 175]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/798f065dba647326.jpg", "original_size": [512, 489]} |
| what is written in red on the bottom right of the image? | metal | /content/drive/MyDrive/TextVQA_Dataset/train_images/0233e1cb3ae2d70b.jpg | 512 | 384 | [[434, 349, 466, 365]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is written in red on the bottom right of the image? Show reasoning in <think> tags. Return bounding ... | [[434, 349, 466, 365]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/0233e1cb3ae2d70b.jpg", "original_size": [512, 384]} |
| what type of plane is this? | cessna | /content/drive/MyDrive/TextVQA_Dataset/train_images/ca4ebcce4b7f114a.jpg | 512 | 384 | [[462, 139, 512, 159]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what type of plane is this? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, ... | [[462, 139, 512, 159]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/ca4ebcce4b7f114a.jpg", "original_size": [512, 384]} |
| what kind of confinement? | solitary | /content/drive/MyDrive/TextVQA_Dataset/train_images/7b512653ce099228.jpg | 512 | 512 | [[145, 89, 291, 123]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what kind of confinement? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2... | [[145, 89, 291, 123]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/7b512653ce099228.jpg", "original_size": [512, 512]} |
| what is the name of this novel? | el maestro cartografo | /content/drive/MyDrive/TextVQA_Dataset/train_images/09baac4ccc0253f0.jpg | 444 | 512 | [[130, 93, 376, 143]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the name of this novel? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, ... | [[130, 93, 376, 143]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/09baac4ccc0253f0.jpg", "original_size": [444, 512]} |
| what is the brand on display? | lenovo | /content/drive/MyDrive/TextVQA_Dataset/train_images/e73c9d0f19a333a0.jpg | 512 | 512 | [[453, 347, 493, 363]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the brand on display? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2... | [[453, 347, 493, 363]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/e73c9d0f19a333a0.jpg", "original_size": [512, 512]} |
| who is the mail to? | pfc francis miller | /content/drive/MyDrive/TextVQA_Dataset/train_images/07be3f0ec9377062.jpg | 512 | 414 | [[143, 277, 167, 297]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>who is the mail to? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </an... | [[143, 277, 167, 297]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/07be3f0ec9377062.jpg", "original_size": [512, 414]} |
| what is the topic of this book? | geography | /content/drive/MyDrive/TextVQA_Dataset/train_images/95b8379fa07700c5.jpg | 512 | 382 | [[100, 316, 324, 356]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the topic of this book? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, ... | [[100, 316, 324, 356]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/95b8379fa07700c5.jpg", "original_size": [512, 382]} |
| what is the title on the right page? | reflector | /content/drive/MyDrive/TextVQA_Dataset/train_images/d1e54588359cbcba.jpg | 512 | 358 | [[288, 28, 388, 51]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the title on the right page? Show reasoning in <think> tags. Return bounding box in <answer> [x1,... | [[288, 28, 388, 51]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/d1e54588359cbcba.jpg", "original_size": [512, 358]} |
| what is the brewing company? | fernie | /content/drive/MyDrive/TextVQA_Dataset/train_images/9fef4ac5e5e248e3.jpg | 341 | 512 | [[223, 383, 274, 410]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the brewing company? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2,... | [[223, 383, 274, 410]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/9fef4ac5e5e248e3.jpg", "original_size": [341, 512]} |
| what production is marilyn horne a part of? | carmen | /content/drive/MyDrive/TextVQA_Dataset/train_images/5c7d5382d5f3a362.jpg | 512 | 384 | [[332, 118, 399, 144]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what production is marilyn horne a part of? Show reasoning in <think> tags. Return bounding box in <answe... | [[332, 118, 399, 144]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/5c7d5382d5f3a362.jpg", "original_size": [512, 384]} |
| what is the biggest number shown on this ruler? | 5 | /content/drive/MyDrive/TextVQA_Dataset/train_images/9651e51a195b3192.jpg | 512 | 384 | [[438, 288, 456, 306]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the biggest number shown on this ruler? Show reasoning in <think> tags. Return bounding box in <a... | [[438, 288, 456, 306]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/9651e51a195b3192.jpg", "original_size": [512, 384]} |
| what is written on this plane? | c-fara | /content/drive/MyDrive/TextVQA_Dataset/train_images/5dc915b4fd4f1984.jpg | 512 | 341 | [[144, 158, 179, 172]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is written on this plane? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x... | [[144, 158, 179, 172]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/5dc915b4fd4f1984.jpg", "original_size": [512, 341]} |
| what kind of rum is in the bottle? | jamaica | /content/drive/MyDrive/TextVQA_Dataset/train_images/b9e5335aaced0671.jpg | 512 | 512 | [[318, 216, 365, 233]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what kind of rum is in the bottle? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y... | [[318, 216, 365, 233]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/b9e5335aaced0671.jpg", "original_size": [512, 512]} |
| what website is on the bottom? | www.solids.nl | /content/drive/MyDrive/TextVQA_Dataset/train_images/220b1fd1942bc0f3.jpg | 512 | 384 | [[270, 318, 341, 329]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what website is on the bottom? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x... | [[270, 318, 341, 329]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/220b1fd1942bc0f3.jpg", "original_size": [512, 384]} |
| what flavor is this alcohol? | vanilla | /content/drive/MyDrive/TextVQA_Dataset/train_images/f24db5f37ce84a7f.jpg | 512 | 341 | [[177, 166, 309, 203]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what flavor is this alcohol? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2,... | [[177, 166, 309, 203]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/f24db5f37ce84a7f.jpg", "original_size": [512, 341]} |
| how many total activities have been logged? | 1000 | /content/drive/MyDrive/TextVQA_Dataset/train_images/7ffb7f285fad405c.jpg | 512 | 288 | [[242, 139, 278, 153]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>how many total activities have been logged? Show reasoning in <think> tags. Return bounding box in <answe... | [[242, 139, 278, 153]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/7ffb7f285fad405c.jpg", "original_size": [512, 288]} |
| what team does the man in blue play for? | reays | /content/drive/MyDrive/TextVQA_Dataset/train_images/73483713b0c6adde.jpg | 512 | 341 | [[207, 193, 271, 228]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what team does the man in blue play for? Show reasoning in <think> tags. Return bounding box in <answer> ... | [[207, 193, 271, 228]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/73483713b0c6adde.jpg", "original_size": [512, 341]} |
| what's the name of the book which contains the no-diet weight loss solution? | eat this, not that! | /content/drive/MyDrive/TextVQA_Dataset/train_images/7fc2fee913ff3327.jpg | 512 | 384 | [[246, 107, 287, 145]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what's the name of the book which contains the no-diet weight loss solution? Show reasoning in <think> ta... | [[246, 107, 287, 145]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/7fc2fee913ff3327.jpg", "original_size": [512, 384]} |
| where was this cup purchased? | malibu diner | /content/drive/MyDrive/TextVQA_Dataset/train_images/a35b263b3a1397c4.jpg | 382 | 512 | [[135, 326, 239, 375]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>where was this cup purchased? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2... | [[135, 326, 239, 375]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/a35b263b3a1397c4.jpg", "original_size": [382, 512]} |
| what is edition of the cyberlaw? | third | /content/drive/MyDrive/TextVQA_Dataset/train_images/26c9c4c165bddcc2.jpg | 512 | 384 | [[159, 298, 194, 316]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is edition of the cyberlaw? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1,... | [[159, 298, 194, 316]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/26c9c4c165bddcc2.jpg", "original_size": [512, 384]} |
| how old is this whiskey? | 12 years | /content/drive/MyDrive/TextVQA_Dataset/train_images/e4da301128077a74.jpg | 287 | 512 | [[102, 422, 130, 437]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>how old is this whiskey? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2]... | [[102, 422, 130, 437]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/e4da301128077a74.jpg", "original_size": [287, 512]} |
| what kind of explorers does galaxy quest welcome? | space | /content/drive/MyDrive/TextVQA_Dataset/train_images/bb777b4707808666.jpg | 512 | 341 | [[221, 213, 274, 234]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what kind of explorers does galaxy quest welcome? Show reasoning in <think> tags. Return bounding box in ... | [[221, 213, 274, 234]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/bb777b4707808666.jpg", "original_size": [512, 341]} |
| what is the color of the eye-liner gel? | frostlite | /content/drive/MyDrive/TextVQA_Dataset/train_images/8d86c487fae5d2bb.jpg | 512 | 384 | [[239, 342, 307, 362]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the color of the eye-liner gel? Show reasoning in <think> tags. Return bounding box in <answer> [... | [[239, 342, 307, 362]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/8d86c487fae5d2bb.jpg", "original_size": [512, 384]} |
| what number is the pitcher? | 31 | /content/drive/MyDrive/TextVQA_Dataset/train_images/cd49d74a23b907ee.jpg | 512 | 494 | [[234, 155, 277, 196]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what number is the pitcher? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, ... | [[234, 155, 277, 196]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/cd49d74a23b907ee.jpg", "original_size": [512, 494]} |
| what is the name on the top of the posters? | retrato das desigualdades | /content/drive/MyDrive/TextVQA_Dataset/train_images/ea17dcd16c992104.jpg | 512 | 339 | [[278, 56, 308, 67]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the name on the top of the posters? Show reasoning in <think> tags. Return bounding box in <answe... | [[278, 56, 308, 67]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/ea17dcd16c992104.jpg", "original_size": [512, 339]} |
| what is the temperature according to the sign? | 84 | /content/drive/MyDrive/TextVQA_Dataset/train_images/4aebcf0dddbcbd37.jpg | 512 | 266 | [[219, 70, 233, 83]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the temperature according to the sign? Show reasoning in <think> tags. Return bounding box in <an... | [[219, 70, 233, 83]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/4aebcf0dddbcbd37.jpg", "original_size": [512, 266]} |
| what is picture? | mars | /content/drive/MyDrive/TextVQA_Dataset/train_images/007a4e97abc192b9.jpg | 287 | 512 | [[86, 415, 123, 445]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is picture? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answe... | [[86, 415, 123, 445]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/007a4e97abc192b9.jpg", "original_size": [287, 512]} |
| what street sign is shown here? | stop | /content/drive/MyDrive/TextVQA_Dataset/train_images/1ce8ad84fb615820.jpg | 512 | 384 | [[188, 63, 343, 114]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what street sign is shown here? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, ... | [[188, 63, 343, 114]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/1ce8ad84fb615820.jpg", "original_size": [512, 384]} |
| how many players is this game for? | two | /content/drive/MyDrive/TextVQA_Dataset/train_images/43465cb3ed00901a.jpg | 384 | 512 | [[97, 199, 135, 217]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>how many players is this game for? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y... | [[97, 199, 135, 217]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/43465cb3ed00901a.jpg", "original_size": [384, 512]} |
| what is the word over to the left of the list by itself? | glad | /content/drive/MyDrive/TextVQA_Dataset/train_images/bb8a4d45454cae0c.jpg | 512 | 512 | [[158, 82, 246, 141]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the word over to the left of the list by itself? Show reasoning in <think> tags. Return bounding ... | [[158, 82, 246, 141]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/bb8a4d45454cae0c.jpg", "original_size": [512, 512]} |
| what is the last word in red? | catorce | /content/drive/MyDrive/TextVQA_Dataset/train_images/39db5c82d237a39d.jpg | 512 | 382 | [[339, 76, 371, 86]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the last word in red? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2... | [[339, 76, 371, 86]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/39db5c82d237a39d.jpg", "original_size": [512, 382]} |
| what brand is this soda? | coca cola | /content/drive/MyDrive/TextVQA_Dataset/train_images/c03e1be1b89031d7.jpg | 384 | 512 | [[216, 324, 250, 347]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what brand is this soda? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2]... | [[216, 324, 250, 347]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/c03e1be1b89031d7.jpg", "original_size": [384, 512]} |
| what does it say on the orange ring around the stadium? | origin | /content/drive/MyDrive/TextVQA_Dataset/train_images/34afc4d984a1a681.jpg | 512 | 288 | [[201, 157, 243, 172]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what does it say on the orange ring around the stadium? Show reasoning in <think> tags. Return bounding b... | [[201, 157, 243, 172]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/34afc4d984a1a681.jpg", "original_size": [512, 288]} |
| what city is on the bottom right of the map? | sydney | /content/drive/MyDrive/TextVQA_Dataset/train_images/7ca5b8dbd02e0621.jpg | 512 | 384 | [[290, 127, 337, 145]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what city is on the bottom right of the map? Show reasoning in <think> tags. Return bounding box in <answ... | [[290, 127, 337, 145]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/7ca5b8dbd02e0621.jpg", "original_size": [512, 384]} |
| what year is displayed? | 2035 | /content/drive/MyDrive/TextVQA_Dataset/train_images/cf6a3aef41cfef1d.jpg | 512 | 512 | [[154, 447, 354, 491]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what year is displayed? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] ... | [[154, 447, 354, 491]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/cf6a3aef41cfef1d.jpg", "original_size": [512, 512]} |
| where is the barclay center? | answering does not require reading text in the image | /content/drive/MyDrive/TextVQA_Dataset/train_images/64291d1eff2aaa10.jpg | 512 | 288 | [[331, 65, 344, 80]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>where is the barclay center? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2,... | [[331, 65, 344, 80]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/64291d1eff2aaa10.jpg", "original_size": [512, 288]} |
| what sports league do the cheerleaders represent? | nba | /content/drive/MyDrive/TextVQA_Dataset/train_images/2487b60c7d98d917.jpg | 512 | 341 | [[251, 53, 347, 99]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what sports league do the cheerleaders represent? Show reasoning in <think> tags. Return bounding box in ... | [[251, 53, 347, 99]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/2487b60c7d98d917.jpg", "original_size": [512, 341]} |
| what is the name of the roofing company? | gardner roofing | /content/drive/MyDrive/TextVQA_Dataset/train_images/4b16005ded9e1394.jpg | 512 | 418 | [[470, 166, 505, 199]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the name of the roofing company? Show reasoning in <think> tags. Return bounding box in <answer> ... | [[470, 166, 505, 199]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/4b16005ded9e1394.jpg", "original_size": [512, 418]} |
| in what year was the alcohol made? | 2009 | /content/drive/MyDrive/TextVQA_Dataset/train_images/104bf2461f9962a8.jpg | 512 | 341 | [[181, 248, 245, 281]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>in what year was the alcohol made? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y... | [[181, 248, 245, 281]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/104bf2461f9962a8.jpg", "original_size": [512, 341]} |
| what brand of wine? | capovolto | /content/drive/MyDrive/TextVQA_Dataset/train_images/4ee0499173c328ac.jpg | 512 | 340 | [[329, 204, 364, 217]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what brand of wine? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </an... | [[329, 204, 364, 217]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/4ee0499173c328ac.jpg", "original_size": [512, 340]} |
| what is the number of the girl on the further right? | 5 | /content/drive/MyDrive/TextVQA_Dataset/train_images/278105799be0bbf5.jpg | 512 | 340 | [[298, 183, 315, 213]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the number of the girl on the further right? Show reasoning in <think> tags. Return bounding box ... | [[298, 183, 315, 213]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/278105799be0bbf5.jpg", "original_size": [512, 340]} |
| what is the number on the sign? | 67 | /content/drive/MyDrive/TextVQA_Dataset/train_images/67a0f882425fcbd0.jpg | 512 | 367 | [[163, 71, 228, 122]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the number on the sign? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, ... | [[163, 71, 228, 122]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/67a0f882425fcbd0.jpg", "original_size": [512, 367]} |
| what state is the plate from? | california | /content/drive/MyDrive/TextVQA_Dataset/train_images/178a7b832a9e317f.jpg | 512 | 384 | [[143, 168, 345, 218]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what state is the plate from? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2... | [[143, 168, 345, 218]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/178a7b832a9e317f.jpg", "original_size": [512, 384]} |
| what is the title of this page? | bienvenido | /content/drive/MyDrive/TextVQA_Dataset/train_images/27e0a45879e9c53b.jpg | 512 | 384 | [[250, 73, 300, 84]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the title of this page? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, ... | [[250, 73, 300, 84]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/27e0a45879e9c53b.jpg", "original_size": [512, 384]} |
| what brand is this wine? | terrazas | /content/drive/MyDrive/TextVQA_Dataset/train_images/60951e752336805a.jpg | 220 | 512 | [[74, 287, 147, 299]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what brand is this wine? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2]... | [[74, 287, 147, 299]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/60951e752336805a.jpg", "original_size": [220, 512]} |
| what size is this? | medium | /content/drive/MyDrive/TextVQA_Dataset/train_images/f7e353339bb0138b.jpg | 363 | 512 | [[81, 390, 289, 442]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what size is this? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </ans... | [[81, 390, 289, 442]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/f7e353339bb0138b.jpg", "original_size": [363, 512]} |
| what company is on the billboard? | aa | /content/drive/MyDrive/TextVQA_Dataset/train_images/ba16ef6e501f9c15.jpg | 512 | 272 | [[174, 170, 202, 196]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what company is on the billboard? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1... | [[174, 170, 202, 196]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/ba16ef6e501f9c15.jpg", "original_size": [512, 272]} |
| what type of comeback is mentioned? | carolina | /content/drive/MyDrive/TextVQA_Dataset/train_images/c6e42ed4e2a5fd48.jpg | 512 | 341 | [[127, 74, 187, 106]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what type of comeback is mentioned? Show reasoning in <think> tags. Return bounding box in <answer> [x1, ... | [[127, 74, 187, 106]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/c6e42ed4e2a5fd48.jpg", "original_size": [512, 341]} |
| how many bottles are in each pack of beer? | 6 | /content/drive/MyDrive/TextVQA_Dataset/train_images/af655a6157758938.jpg | 512 | 384 | [[146, 121, 159, 139]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>how many bottles are in each pack of beer? Show reasoning in <think> tags. Return bounding box in <answer... | [[146, 121, 159, 139]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/af655a6157758938.jpg", "original_size": [512, 384]} |
| what's the brand of the orange bottle? | brut | /content/drive/MyDrive/TextVQA_Dataset/train_images/177482ed108062de.jpg | 512 | 341 | [[135, 179, 163, 191]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what's the brand of the orange bottle? Show reasoning in <think> tags. Return bounding box in <answer> [x... | [[135, 179, 163, 191]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/177482ed108062de.jpg", "original_size": [512, 341]} |
| what is the kitchen island section called? | bakery | /content/drive/MyDrive/TextVQA_Dataset/train_images/90d6309ed846aea1.jpg | 512 | 384 | [[268, 188, 287, 198]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the kitchen island section called? Show reasoning in <think> tags. Return bounding box in <answer... | [[268, 188, 287, 198]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/90d6309ed846aea1.jpg", "original_size": [512, 384]} |
| to what location does the blue sign point? | airport | /content/drive/MyDrive/TextVQA_Dataset/train_images/8941cb8244ed8b59.jpg | 512 | 384 | [[65, 148, 275, 234]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>to what location does the blue sign point? Show reasoning in <think> tags. Return bounding box in <answer... | [[65, 148, 275, 234]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/8941cb8244ed8b59.jpg", "original_size": [512, 384]} |
| what time is shown on the lcd? | 6:00 | /content/drive/MyDrive/TextVQA_Dataset/train_images/af39d33c86bb07f3.jpg | 512 | 512 | [[161, 154, 375, 246]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what time is shown on the lcd? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x... | [[161, 154, 375, 246]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/af39d33c86bb07f3.jpg", "original_size": [512, 512]} |
| what letter does the mug on the bottom left have on it? | l | /content/drive/MyDrive/TextVQA_Dataset/train_images/152ed37c40ccb75d.jpg | 512 | 341 | [[36, 225, 116, 321]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what letter does the mug on the bottom left have on it? Show reasoning in <think> tags. Return bounding b... | [[36, 225, 116, 321]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/152ed37c40ccb75d.jpg", "original_size": [512, 341]} |
| what was lost? | symbol | /content/drive/MyDrive/TextVQA_Dataset/train_images/1d6838aaafab994e.jpg | 384 | 512 | [[193, 282, 236, 299]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what was lost? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>... | [[193, 282, 236, 299]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/1d6838aaafab994e.jpg", "original_size": [384, 512]} |
| what does the sign say matters? | food | /content/drive/MyDrive/TextVQA_Dataset/train_images/6e5fd21b08dd9ae4.jpg | 512 | 426 | [[161, 250, 223, 271]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what does the sign say matters? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, ... | [[161, 250, 223, 271]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/6e5fd21b08dd9ae4.jpg", "original_size": [512, 426]} |
| what is the plane brand? | lufthansa | /content/drive/MyDrive/TextVQA_Dataset/train_images/9f48434cec1482a6.jpg | 512 | 329 | [[113, 149, 181, 162]] | textvqa | train | <|im_start|>system You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|> <|im_start|>user <|vision_start|><|image_pad|><|vision_end|>what is the plane brand? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2]... | [[113, 149, 181, 162]] | {"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/9f48434cec1482a6.jpg", "original_size": [512, 329]} |
what do they do? | construction | /content/drive/MyDrive/TextVQA_Dataset/train_images/f7476d536a81a069.jpg | 512 | 387 | [
[
75,
205,
133,
216
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what do they do? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answe... | [
[
75,
205,
133,
216
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/f7476d536a81a069.jpg",
"original_size": [
512,
387
]
} |
what company makes the phone in the box? | nokia | /content/drive/MyDrive/TextVQA_Dataset/train_images/df6c75012d9905c5.jpg | 512 | 341 | [
[
182,
225,
278,
293
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what company makes the phone in the box? Show reasoning in <think> tags. Return bounding box in <answer> ... | [
[
182,
225,
278,
293
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/df6c75012d9905c5.jpg",
"original_size": [
512,
341
]
} |
what brand of beer is this? | bock | /content/drive/MyDrive/TextVQA_Dataset/train_images/256be951c5460071.jpg | 384 | 512 | [
[
232,
341,
334,
393
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what brand of beer is this? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, ... | [
[
232,
341,
334,
393
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/256be951c5460071.jpg",
"original_size": [
384,
512
]
} |
what car brand is on the man standing on home plate? | volvo | /content/drive/MyDrive/TextVQA_Dataset/train_images/60b524e105eb051a.jpg | 512 | 384 | [
[
347,
191,
392,
201
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what car brand is on the man standing on home plate? Show reasoning in <think> tags. Return bounding box ... | [
[
347,
191,
392,
201
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/60b524e105eb051a.jpg",
"original_size": [
512,
384
]
} |
what operating system is on this phone? | windows | /content/drive/MyDrive/TextVQA_Dataset/train_images/e3223bf8a2246fd7.jpg | 512 | 384 | [
[
167,
197,
239,
218
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what operating system is on this phone? Show reasoning in <think> tags. Return bounding box in <answer> [... | [
[
167,
197,
239,
218
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/e3223bf8a2246fd7.jpg",
"original_size": [
512,
384
]
} |
what year were the fireworks? | 1983 | /content/drive/MyDrive/TextVQA_Dataset/train_images/3a33e3832cbd9459.jpg | 338 | 512 | [
[
181,
297,
260,
322
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what year were the fireworks? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2... | [
[
181,
297,
260,
322
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/3a33e3832cbd9459.jpg",
"original_size": [
338,
512
]
} |
what is free? | enemas | /content/drive/MyDrive/TextVQA_Dataset/train_images/00fd7896b0b67f93.jpg | 512 | 409 | [
[
260,
252,
372,
305
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is free? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.... | [
[
260,
252,
372,
305
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/00fd7896b0b67f93.jpg",
"original_size": [
512,
409
]
} |
what are the names of the beer on the bottles? | luxor, sakara | /content/drive/MyDrive/TextVQA_Dataset/train_images/3f75d69c241df1f4.jpg | 512 | 288 | [
[
377,
116,
422,
130
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what are the names of the beer on the bottles? Show reasoning in <think> tags. Return bounding box in <an... | [
[
377,
116,
422,
130
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/3f75d69c241df1f4.jpg",
"original_size": [
512,
288
]
} |
what channel is selected? | 126 | /content/drive/MyDrive/TextVQA_Dataset/train_images/c8a3ab9f5d94cba3.jpg | 512 | 384 | [
[
88,
171,
104,
183
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what channel is selected? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2... | [
[
88,
171,
104,
183
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/c8a3ab9f5d94cba3.jpg",
"original_size": [
512,
384
]
} |
what do the green letters on the shirt say? | spalding | /content/drive/MyDrive/TextVQA_Dataset/train_images/b5df62010ee3c903.jpg | 512 | 320 | [
[
200,
153,
236,
166
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what do the green letters on the shirt say? Show reasoning in <think> tags. Return bounding box in <answe... | [
[
200,
153,
236,
166
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/b5df62010ee3c903.jpg",
"original_size": [
512,
320
]
} |
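Each row above pairs a question with a target box, and the prompt asks the model to return its prediction as `<answer> [x1, y1, x2, y2] </answer>`, so evaluation reduces to parsing that span out of a response and comparing it with `bboxs`. Below is a minimal sketch of that step, assuming rows are available as plain Python dicts matching the schema shown; the helper names and the sample response string are hypothetical illustrations, not part of the dataset.

```python
import re

# Hypothetical helpers for rows of this dataset. Field names (question,
# answer, bboxs, width, height, metadata.original_size) follow the schema
# above; everything else here is an illustrative assumption.

ANSWER_RE = re.compile(r"<answer>\s*\[([^\]]+)\]\s*</answer>")

def parse_answer_box(response: str):
    """Extract the [x1, y1, x2, y2] box from a model response, or None."""
    m = ANSWER_RE.search(response)
    if m is None:
        return None
    return [float(v) for v in m.group(1).split(",")]

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example row mirroring the layout above (values from the first row in
# this section). A hypothetical model response is scored against it.
row = {
    "question": "what type of comeback is mentioned?",
    "answer": "carolina",
    "width": 512,
    "height": 341,
    "bboxs": [[127, 74, 187, 106]],
    "metadata": {"original_size": [512, 341]},
}

pred = parse_answer_box("<think>...</think> <answer> [120, 70, 190, 110] </answer>")
if pred is not None:
    print(f"IoU vs. target: {iou(pred, row['bboxs'][0]):.3f}")
```

Note that in the rows shown here, the width/height columns match `metadata.original_size`, so the sketch applies no coordinate rescaling; a dataset variant where they differ would need the boxes scaled between the two spaces first.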