| question (string, 10-108 chars) | answer (string, 1-67 chars) | image (string, 72 chars) | width (int64, 131-512) | height (int64, 131-512) | bboxs (sequence, length 1) | dataset (string, 1 value) | split (string, 1 value) | prompt (string, 409-507 chars) | target (sequence, length 1) | metadata (dict) |
|---|---|---|---|---|---|---|---|---|---|---|
what brand of typewriter is this? | brother | /content/drive/MyDrive/TextVQA_Dataset/train_images/b800db00a32fd96e.jpg | 512 | 341 | [
[
245,
65,
280,
76
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what brand of typewriter is this? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
245,
65,
280,
76
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/b800db00a32fd96e.jpg",
"original_size": [
512,
341
]
} |
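
Every row's `prompt` column follows the same fixed chat template visible above: a grounding-specialist system message, a user turn with the image placeholder tokens and the row's `question`, and a primed assistant turn. A minimal sketch of that assembly, assuming only the template text shown in the rows; `build_prompt` is a hypothetical helper, not something shipped with the dataset:

```python
# Rebuild the `prompt` column from a row's `question`, copying the system
# text and special tokens exactly as they appear in the rows above.
SYSTEM = (
    "You are a Vision Language Model specialized in visual grounding. "
    "Provide bounding box in <answer> [x1, y1, x2, y2] </answer>."
)

def build_prompt(question: str) -> str:
    user = (
        "<|vision_start|><|image_pad|><|vision_end|>"
        f"{question} Show reasoning in <think> tags. "
        "Return bounding box in <answer> [x1, y1, x2, y2] </answer>."
    )
    return (
        f"<|im_start|>system\n{SYSTEM}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\nAnalyzing image...\n<think><|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(build_prompt("what brand of typewriter is this?"))
```
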
what can you download for free? | books | /content/drive/MyDrive/TextVQA_Dataset/train_images/39641cbcdc56c156.jpg | 512 | 288 | [
[
223,
160,
496,
226
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what can you download for free? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
223,
160,
496,
226
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/39641cbcdc56c156.jpg",
"original_size": [
512,
288
]
} |
what is the date on the sign? | 2014-09-13 | /content/drive/MyDrive/TextVQA_Dataset/train_images/eb889c475a293355.jpg | 362 | 512 | [
[
196,
441,
313,
463
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the date on the sign? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
196,
441,
313,
463
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/eb889c475a293355.jpg",
"original_size": [
362,
512
]
} |
what phrase is written along the flagpole? | prosecute the police | /content/drive/MyDrive/TextVQA_Dataset/train_images/9925dd9f562c466e.jpg | 512 | 384 | [
[
94,
196,
167,
256
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what phrase is written along the flagpole? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
94,
196,
167,
256
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/9925dd9f562c466e.jpg",
"original_size": [
512,
384
]
} |
what number is on the players shirt? | 26 | /content/drive/MyDrive/TextVQA_Dataset/train_images/dc026b445ada7b61.jpg | 512 | 320 | [
[
175,
257,
226,
301
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what number is on the players shirt? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
175,
257,
226,
301
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/dc026b445ada7b61.jpg",
"original_size": [
512,
320
]
} |
this is a universal atlas of what? | the world | /content/drive/MyDrive/TextVQA_Dataset/train_images/b48751918cd7c627.jpg | 412 | 512 | [
[
123,
131,
280,
163
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>this is a universal atlas of what? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
123,
131,
280,
163
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/b48751918cd7c627.jpg",
"original_size": [
412,
512
]
} |
what is the brand of the phone? | motorola | /content/drive/MyDrive/TextVQA_Dataset/train_images/0b45d89a6c4cb5a5.jpg | 384 | 512 | [
[
160,
85,
241,
97
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the brand of the phone? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
160,
85,
241,
97
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/0b45d89a6c4cb5a5.jpg",
"original_size": [
384,
512
]
} |
which artist is on person on the right's tshirt? | david lee roth | /content/drive/MyDrive/TextVQA_Dataset/train_images/bf4fa2d92d3cefc8.jpg | 512 | 512 | [
[
343,
187,
421,
250
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>which artist is on person on the right's tshirt? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
343,
187,
421,
250
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/bf4fa2d92d3cefc8.jpg",
"original_size": [
512,
512
]
} |
what is the greatest cup measurement on the measurer? | 1 cup | /content/drive/MyDrive/TextVQA_Dataset/train_images/1428ab127db8c19a.jpg | 512 | 384 | [
[
404,
184,
424,
196
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the greatest cup measurement on the measurer? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
404,
184,
424,
196
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/1428ab127db8c19a.jpg",
"original_size": [
512,
384
]
} |
how many calories does this soda have? | 45 | /content/drive/MyDrive/TextVQA_Dataset/train_images/a47e88174e9a446e.jpg | 384 | 512 | [
[
177,
110,
198,
128
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>how many calories does this soda have? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
177,
110,
198,
128
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/a47e88174e9a446e.jpg",
"original_size": [
384,
512
]
} |
what number is in the black circle? | 3 | /content/drive/MyDrive/TextVQA_Dataset/train_images/7d847665dbe2c010.jpg | 512 | 384 | [
[
461,
51,
476,
70
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what number is in the black circle? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
461,
51,
476,
70
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/7d847665dbe2c010.jpg",
"original_size": [
512,
384
]
} |
what year was the book with the tree written? | 2015 | /content/drive/MyDrive/TextVQA_Dataset/train_images/554140b113c2f1c9.jpg | 512 | 341 | [
[
458,
218,
477,
238
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what year was the book with the tree written? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
458,
218,
477,
238
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/554140b113c2f1c9.jpg",
"original_size": [
512,
341
]
} |
what type of drink is it? | ale | /content/drive/MyDrive/TextVQA_Dataset/train_images/03d93d295ceb1b4b.jpg | 512 | 384 | [
[
354,
184,
386,
203
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what type of drink is it? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
354,
184,
386,
203
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/03d93d295ceb1b4b.jpg",
"original_size": [
512,
384
]
} |
what supermarket is this ad for? | loblaws | /content/drive/MyDrive/TextVQA_Dataset/train_images/8bfe0e5075f0b273.jpg | 384 | 512 | [
[
208,
391,
268,
424
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what supermarket is this ad for? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
208,
391,
268,
424
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/8bfe0e5075f0b273.jpg",
"original_size": [
384,
512
]
} |
what is the first word on the pink shirt? | lo | /content/drive/MyDrive/TextVQA_Dataset/train_images/17ffa1885c266056.jpg | 512 | 401 | [
[
92,
118,
115,
141
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the first word on the pink shirt? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
92,
118,
115,
141
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/17ffa1885c266056.jpg",
"original_size": [
512,
401
]
} |
what are some of the traits this space encourages, according to their banner? | devotion | /content/drive/MyDrive/TextVQA_Dataset/train_images/ba7979567646d71e.jpg | 512 | 340 | [
[
121,
11,
169,
24
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what are some of the traits this space encourages, according to their banner? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
121,
11,
169,
24
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/ba7979567646d71e.jpg",
"original_size": [
512,
340
]
} |
what type of animal is written on his hat? | bull | /content/drive/MyDrive/TextVQA_Dataset/train_images/e9d1e265e8b63afa.jpg | 512 | 341 | [
[
273,
79,
319,
102
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what type of animal is written on his hat? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
273,
79,
319,
102
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/e9d1e265e8b63afa.jpg",
"original_size": [
512,
341
]
} |
what number is above the man's shoulder? | 22 | /content/drive/MyDrive/TextVQA_Dataset/train_images/9f88071c4cdc8dfb.jpg | 512 | 384 | [
[
305,
85,
328,
105
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what number is above the man's shoulder? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
305,
85,
328,
105
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/9f88071c4cdc8dfb.jpg",
"original_size": [
512,
384
]
} |
what brand is this phone? | samsung | /content/drive/MyDrive/TextVQA_Dataset/train_images/0597ca47b64bb2c3.jpg | 512 | 384 | [
[
224,
113,
292,
125
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what brand is this phone? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
224,
113,
292,
125
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/0597ca47b64bb2c3.jpg",
"original_size": [
512,
384
]
} |
via le what? | truppe | /content/drive/MyDrive/TextVQA_Dataset/train_images/bb7a7d7eb3c81846.jpg | 384 | 512 | [
[
124,
184,
337,
236
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>via le what? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
124,
184,
337,
236
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/bb7a7d7eb3c81846.jpg",
"original_size": [
384,
512
]
} |
who is the manufacturer of this tv? | lg | /content/drive/MyDrive/TextVQA_Dataset/train_images/ab7b8d6b8e5835ec.jpg | 512 | 384 | [
[
255,
269,
282,
290
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>who is the manufacturer of this tv? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
255,
269,
282,
290
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/ab7b8d6b8e5835ec.jpg",
"original_size": [
512,
384
]
} |
where is this tag from/? | california | /content/drive/MyDrive/TextVQA_Dataset/train_images/1923b041b422e356.jpg | 512 | 384 | [
[
147,
150,
328,
191
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>where is this tag from/? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
147,
150,
328,
191
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/1923b041b422e356.jpg",
"original_size": [
512,
384
]
} |
what are 3 letters and one symbol you can see in between the net? | nge & | /content/drive/MyDrive/TextVQA_Dataset/train_images/1ce511c74417159c.jpg | 512 | 512 | [
[
288,
375,
303,
397
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what are 3 letters and one symbol you can see in between the net? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
288,
375,
303,
397
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/1ce511c74417159c.jpg",
"original_size": [
512,
512
]
} |
what brand of drink is this? | fireside chat | /content/drive/MyDrive/TextVQA_Dataset/train_images/0228f1095d117c2d.jpg | 512 | 512 | [
[
12,
305,
150,
370
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what brand of drink is this? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
12,
305,
150,
370
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/0228f1095d117c2d.jpg",
"original_size": [
512,
512
]
} |
what is boxed water? | better | /content/drive/MyDrive/TextVQA_Dataset/train_images/05df1d7b1abf2c82.jpg | 512 | 512 | [
[
269,
201,
330,
256
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is boxed water? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
269,
201,
330,
256
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/05df1d7b1abf2c82.jpg",
"original_size": [
512,
512
]
} |
what number is on the spray bottles? | 94 | /content/drive/MyDrive/TextVQA_Dataset/train_images/ee313578070530d1.jpg | 512 | 341 | [
[
190,
230,
251,
286
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what number is on the spray bottles? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
190,
230,
251,
286
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/ee313578070530d1.jpg",
"original_size": [
512,
341
]
} |
what does the computer screen say? | dont eat that | /content/drive/MyDrive/TextVQA_Dataset/train_images/0b458f4f665efb5b.jpg | 512 | 306 | [
[
237,
97,
297,
141
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what does the computer screen say? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
237,
97,
297,
141
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/0b458f4f665efb5b.jpg",
"original_size": [
512,
306
]
} |
what is the name of this program? | billyball | /content/drive/MyDrive/TextVQA_Dataset/train_images/54621e1658bfca5a.jpg | 512 | 512 | [
[
13,
217,
512,
312
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the name of this program? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
13,
217,
512,
312
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/54621e1658bfca5a.jpg",
"original_size": [
512,
512
]
} |
what kind of drink is the green one on the left? | ginger brew | /content/drive/MyDrive/TextVQA_Dataset/train_images/56b30e976f1518e3.jpg | 512 | 288 | [
[
119,
105,
187,
126
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what kind of drink is the green one on the left? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
119,
105,
187,
126
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/56b30e976f1518e3.jpg",
"original_size": [
512,
288
]
} |
what is the title or author of the red book? | cleis | /content/drive/MyDrive/TextVQA_Dataset/train_images/2ed67bad197d2ba4.jpg | 512 | 384 | [
[
388,
83,
509,
138
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the title or author of the red book? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
388,
83,
509,
138
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/2ed67bad197d2ba4.jpg",
"original_size": [
512,
384
]
} |
what brand of watch is this? | techne | /content/drive/MyDrive/TextVQA_Dataset/train_images/0c14e0a37c7b4c8e.jpg | 512 | 341 | [
[
226,
122,
273,
134
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what brand of watch is this? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
226,
122,
273,
134
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/0c14e0a37c7b4c8e.jpg",
"original_size": [
512,
341
]
} |
what brand of camera is this? | nikon | /content/drive/MyDrive/TextVQA_Dataset/train_images/23f72adac2a904cb.jpg | 512 | 341 | [
[
222,
92,
293,
114
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what brand of camera is this? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
222,
92,
293,
114
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/23f72adac2a904cb.jpg",
"original_size": [
512,
341
]
} |
what kind of medical office does the green sign advertise? | dentists | /content/drive/MyDrive/TextVQA_Dataset/train_images/10e388f4eb3205aa.jpg | 512 | 341 | [
[
402,
178,
462,
195
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what kind of medical office does the green sign advertise? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
402,
178,
462,
195
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/10e388f4eb3205aa.jpg",
"original_size": [
512,
341
]
} |
what creamery is shown on the sign on the table? | peddler's | /content/drive/MyDrive/TextVQA_Dataset/train_images/01b67da725f6ab69.jpg | 512 | 343 | [
[
121,
213,
153,
222
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what creamery is shown on the sign on the table? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
121,
213,
153,
222
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/01b67da725f6ab69.jpg",
"original_size": [
512,
343
]
} |
what is written on the coin? | good luck souvenir | /content/drive/MyDrive/TextVQA_Dataset/train_images/a50fdec7a574c8c9.jpg | 499 | 512 | [
[
72,
18,
249,
162
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is written on the coin? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
72,
18,
249,
162
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/a50fdec7a574c8c9.jpg",
"original_size": [
499,
512
]
} |
what number is behind the wine? | 19 | /content/drive/MyDrive/TextVQA_Dataset/train_images/cdd24a8084eaaf94.jpg | 258 | 512 | [
[
42,
165,
101,
198
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what number is behind the wine? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
42,
165,
101,
198
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/cdd24a8084eaaf94.jpg",
"original_size": [
258,
512
]
} |
where is this podium from? | the university of georgia | /content/drive/MyDrive/TextVQA_Dataset/train_images/c1fe0be46d8c2bbe.jpg | 512 | 341 | [
[
140,
58,
164,
73
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>where is this podium from? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
140,
58,
164,
73
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/c1fe0be46d8c2bbe.jpg",
"original_size": [
512,
341
]
} |
what is the license plate number? | duuude | /content/drive/MyDrive/TextVQA_Dataset/train_images/cf1fc2f9cf616782.jpg | 512 | 384 | [
[
141,
139,
353,
219
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the license plate number? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
141,
139,
353,
219
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/cf1fc2f9cf616782.jpg",
"original_size": [
512,
384
]
} |
what company made that machine? | casio | /content/drive/MyDrive/TextVQA_Dataset/train_images/b3d2f168f5e40b1d.jpg | 512 | 342 | [
[
188,
223,
218,
232
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what company made that machine? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
188,
223,
218,
232
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/b3d2f168f5e40b1d.jpg",
"original_size": [
512,
342
]
} |
what is the name of the bottle on the right? | iris | /content/drive/MyDrive/TextVQA_Dataset/train_images/3f15717cfaa941f5.jpg | 384 | 512 | [
[
276,
235,
378,
302
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the name of the bottle on the right? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
276,
235,
378,
302
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/3f15717cfaa941f5.jpg",
"original_size": [
384,
512
]
} |
what year was the book made? | 2009 | /content/drive/MyDrive/TextVQA_Dataset/train_images/11241193594a91e9.jpg | 384 | 512 | [
[
162,
432,
198,
449
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what year was the book made? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
162,
432,
198,
449
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/11241193594a91e9.jpg",
"original_size": [
384,
512
]
} |
what two letters appear in the photo? | lm | /content/drive/MyDrive/TextVQA_Dataset/train_images/9feb2f99aca77e18.jpg | 343 | 512 | [
[
246,
441,
332,
495
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what two letters appear in the photo? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
246,
441,
332,
495
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/9feb2f99aca77e18.jpg",
"original_size": [
343,
512
]
} |
what type of key is on the top left of the first row? | esc | /content/drive/MyDrive/TextVQA_Dataset/train_images/c4cc7ac7477acb42.jpg | 512 | 340 | [
[
59,
38,
77,
49
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what type of key is on the top left of the first row? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
59,
38,
77,
49
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/c4cc7ac7477acb42.jpg",
"original_size": [
512,
340
]
} |
what brand of soda is this? | crush | /content/drive/MyDrive/TextVQA_Dataset/train_images/e5a3fb663d02a7d9.jpg | 512 | 337 | [
[
141,
179,
362,
269
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what brand of soda is this? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
141,
179,
362,
269
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/e5a3fb663d02a7d9.jpg",
"original_size": [
512,
337
]
} |
what is the location on the blue sign? | bahnhof | /content/drive/MyDrive/TextVQA_Dataset/train_images/979126a2b43132d9.jpg | 384 | 512 | [
[
174,
39,
233,
58
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the location on the blue sign? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
174,
39,
233,
58
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/979126a2b43132d9.jpg",
"original_size": [
384,
512
]
} |
what is the title of the movie? | eraserhead | /content/drive/MyDrive/TextVQA_Dataset/train_images/87b2667fb96fdf09.jpg | 512 | 357 | [
[
193,
75,
287,
101
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the title of the movie? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
193,
75,
287,
101
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/87b2667fb96fdf09.jpg",
"original_size": [
512,
357
]
} |
what team name is in the background? | blue jays | /content/drive/MyDrive/TextVQA_Dataset/train_images/4e0b5d2c2c09b973.jpg | 512 | 340 | [
[
22,
20,
74,
42
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what team name is in the background? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
22,
20,
74,
42
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/4e0b5d2c2c09b973.jpg",
"original_size": [
512,
340
]
} |
what is the website address? | beckycortino.me | /content/drive/MyDrive/TextVQA_Dataset/train_images/b54216c86a5fcb3e.jpg | 512 | 512 | [
[
121,
435,
401,
475
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the website address? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
121,
435,
401,
475
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/b54216c86a5fcb3e.jpg",
"original_size": [
512,
512
]
} |
what does the sign say? | stop | /content/drive/MyDrive/TextVQA_Dataset/train_images/c12d27c443e12cc6.jpg | 512 | 341 | [
[
169,
90,
349,
165
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what does the sign say? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
169,
90,
349,
165
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/c12d27c443e12cc6.jpg",
"original_size": [
512,
341
]
} |
what is the newspaper section talking about? | markets | /content/drive/MyDrive/TextVQA_Dataset/train_images/3e50bef68444b80a.jpg | 365 | 512 | [
[
142,
158,
223,
169
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the newspaper section talking about? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
142,
158,
223,
169
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/3e50bef68444b80a.jpg",
"original_size": [
365,
512
]
} |
what is the value of the coin? | one shilling | /content/drive/MyDrive/TextVQA_Dataset/train_images/5ee629fe7fd4d1d0.jpg | 512 | 278 | [
[
256,
258,
309,
275
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the value of the coin? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
256,
258,
309,
275
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/5ee629fe7fd4d1d0.jpg",
"original_size": [
512,
278
]
} |
whats the city on the cans? | philadelphia | /content/drive/MyDrive/TextVQA_Dataset/train_images/67961bc8df6bda87.jpg | 512 | 354 | [
[
71,
271,
154,
295
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>whats the city on the cans? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
71,
271,
154,
295
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/67961bc8df6bda87.jpg",
"original_size": [
512,
354
]
} |
what is the url for the quotes? | lovequotes.net.in | /content/drive/MyDrive/TextVQA_Dataset/train_images/bb78de8275b56848.jpg | 384 | 512 | [
[
74,
391,
238,
416
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the url for the quotes? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
74,
391,
238,
416
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/bb78de8275b56848.jpg",
"original_size": [
384,
512
]
} |
what key is between v and n? | b | /content/drive/MyDrive/TextVQA_Dataset/train_images/875f4d655969790a.jpg | 512 | 288 | [
[
238,
170,
274,
210
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what key is between v and n? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
238,
170,
274,
210
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/875f4d655969790a.jpg",
"original_size": [
512,
288
]
} |
what is written on the boat's sails? | 107045 | /content/drive/MyDrive/TextVQA_Dataset/train_images/cd0ef4eed2e24397.jpg | 384 | 512 | [
[
100,
227,
150,
241
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is written on the boat's sails? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
100,
227,
150,
241
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/cd0ef4eed2e24397.jpg",
"original_size": [
384,
512
]
} |
what is the link to the photographer's flickr? | flickr.com/adrianoit | /content/drive/MyDrive/TextVQA_Dataset/train_images/d5ddd3a4372fa3f0.jpg | 512 | 341 | [
[
6,
311,
163,
331
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the link to the photographer's flickr? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
6,
311,
163,
331
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/d5ddd3a4372fa3f0.jpg",
"original_size": [
512,
341
]
} |
what does the big hand play? | 4 | /content/drive/MyDrive/TextVQA_Dataset/train_images/179e8840e2a1e60c.jpg | 371 | 512 | [
[
218,
374,
236,
399
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what does the big hand play? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
218,
374,
236,
399
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/179e8840e2a1e60c.jpg",
"original_size": [
371,
512
]
} |
what is the number on the bottom of the measure? | 28 | /content/drive/MyDrive/TextVQA_Dataset/train_images/7885efd38b1bf40d.jpg | 512 | 384 | [
[
346,
143,
379,
164
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the number on the bottom of the measure? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
346,
143,
379,
164
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/7885efd38b1bf40d.jpg",
"original_size": [
512,
384
]
} |
who does the girl in red run for? | gunn | /content/drive/MyDrive/TextVQA_Dataset/train_images/be31f5f1593661d9.jpg | 341 | 512 | [
[
109,
162,
141,
176
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>who does the girl in red run for? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
109,
162,
141,
176
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/be31f5f1593661d9.jpg",
"original_size": [
341,
512
]
} |
what brand of beer is in the bottle in the very back center? | leute | /content/drive/MyDrive/TextVQA_Dataset/train_images/9a5807fef69a386b.jpg | 512 | 384 | [
[
266,
182,
342,
204
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what brand of beer is in the bottle in the very back center? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
266,
182,
342,
204
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/9a5807fef69a386b.jpg",
"original_size": [
512,
384
]
} |
what company is on the sign? | metrolink | /content/drive/MyDrive/TextVQA_Dataset/train_images/a25d6fd025fc56dd.jpg | 384 | 512 | [
[
118,
479,
174,
510
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what company is on the sign? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
118,
479,
174,
510
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/a25d6fd025fc56dd.jpg",
"original_size": [
384,
512
]
} |
what number is on the side of the plane? | 27 | /content/drive/MyDrive/TextVQA_Dataset/train_images/016ee5993838b383.jpg | 512 | 288 | [
[
269,
151,
321,
202
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what number is on the side of the plane? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
269,
151,
321,
202
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/016ee5993838b383.jpg",
"original_size": [
512,
288
]
} |
what city is written on the clock? | paris | /content/drive/MyDrive/TextVQA_Dataset/train_images/03e97e53b85a1b59.jpg | 512 | 341 | [
[
196,
169,
227,
184
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what city is written on the clock? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
196,
169,
227,
184
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/03e97e53b85a1b59.jpg",
"original_size": [
512,
341
]
} |
what year did the orioles win the championship? | 1983 | /content/drive/MyDrive/TextVQA_Dataset/train_images/5e6d836026a88b23.jpg | 512 | 384 | [
[
314,
320,
343,
335
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what year did the orioles win the championship? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
314,
320,
343,
335
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/5e6d836026a88b23.jpg",
"original_size": [
512,
384
]
} |
how many inches long is this? | 3 | /content/drive/MyDrive/TextVQA_Dataset/train_images/46e20763ac423f25.jpg | 512 | 512 | [
[
233,
345,
259,
372
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>how many inches long is this? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
233,
345,
259,
372
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/46e20763ac423f25.jpg",
"original_size": [
512,
512
]
} |
what number is the player walking away? | 54 | /content/drive/MyDrive/TextVQA_Dataset/train_images/e2f5633b36d70df8.jpg | 512 | 341 | [
[
166,
140,
180,
150
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what number is the player walking away? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
166,
140,
180,
150
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/e2f5633b36d70df8.jpg",
"original_size": [
512,
341
]
} |
what does the sign say above the fan on the right? | exit | /content/drive/MyDrive/TextVQA_Dataset/train_images/12fb007ca12020fb.jpg | 512 | 341 | [
[
433,
35,
462,
57
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what does the sign say above the fan on the right? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
433,
35,
462,
57
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/12fb007ca12020fb.jpg",
"original_size": [
512,
341
]
} |
what is the top word? | southern | /content/drive/MyDrive/TextVQA_Dataset/train_images/5f4254eac1ff8cc6.jpg | 384 | 512 | [
[
171,
217,
257,
240
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the top word? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
171,
217,
257,
240
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/5f4254eac1ff8cc6.jpg",
"original_size": [
384,
512
]
} |
what number is the man in orange? | 12 | /content/drive/MyDrive/TextVQA_Dataset/train_images/d12110d645bd585a.jpg | 374 | 512 | [
[
207,
284,
229,
306
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what number is the man in orange? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
207,
284,
229,
306
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/d12110d645bd585a.jpg",
"original_size": [
374,
512
]
} |
what is the brand of the phone? | cyon | /content/drive/MyDrive/TextVQA_Dataset/train_images/4be5075ef0026694.jpg | 261 | 512 | [
[
113,
398,
152,
412
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the brand of the phone? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
113,
398,
152,
412
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/4be5075ef0026694.jpg",
"original_size": [
261,
512
]
} |
what is the big letter shown on blue background? | p | /content/drive/MyDrive/TextVQA_Dataset/train_images/795214871ab86f2c.jpg | 341 | 512 | [
[
109,
172,
234,
357
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the big letter shown on blue background? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
109,
172,
234,
357
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/795214871ab86f2c.jpg",
"original_size": [
341,
512
]
} |
what kind of night is it? | good | /content/drive/MyDrive/TextVQA_Dataset/train_images/d40bd146d54b7989.jpg | 512 | 512 | [
[
122,
103,
245,
202
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what kind of night is it? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
122,
103,
245,
202
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/d40bd146d54b7989.jpg",
"original_size": [
512,
512
]
} |
what city is on the hat? | kc | /content/drive/MyDrive/TextVQA_Dataset/train_images/d59a3864da65e7e5.jpg | 512 | 339 | [
[
240,
27,
261,
46
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what city is on the hat? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
240,
27,
261,
46
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/d59a3864da65e7e5.jpg",
"original_size": [
512,
339
]
} |
what brand of phone is shown on the left? | samsung | /content/drive/MyDrive/TextVQA_Dataset/train_images/07a4037da62f2812.jpg | 512 | 341 | [
[
162,
67,
195,
75
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what brand of phone is shown on the left? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
162,
67,
195,
75
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/07a4037da62f2812.jpg",
"original_size": [
512,
341
]
} |
what number is on the jersey? | 12 | /content/drive/MyDrive/TextVQA_Dataset/train_images/41c71aa24ed7835a.jpg | 512 | 339 | [
[
321,
149,
377,
195
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what number is on the jersey? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
321,
149,
377,
195
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/41c71aa24ed7835a.jpg",
"original_size": [
512,
339
]
} |
what time is shown on the computer monitor? | 2:38pm | /content/drive/MyDrive/TextVQA_Dataset/train_images/db7b60ed384e8764.jpg | 512 | 384 | [
[
411,
74,
448,
81
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what time is shown on the computer monitor? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
411,
74,
448,
81
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/db7b60ed384e8764.jpg",
"original_size": [
512,
384
]
} |
what brand of soymilk is on the bottom shelf? | silk | /content/drive/MyDrive/TextVQA_Dataset/train_images/a430c7548c2f6d39.jpg | 382 | 512 | [
[
193,
293,
225,
313
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what brand of soymilk is on the bottom shelf? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
193,
293,
225,
313
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/a430c7548c2f6d39.jpg",
"original_size": [
382,
512
]
} |
which user took this photo? | revolucion505 | /content/drive/MyDrive/TextVQA_Dataset/train_images/7ba3116b8fb5dee4.jpg | 512 | 512 | [
[
32,
478,
136,
494
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>which user took this photo? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
32,
478,
136,
494
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/7ba3116b8fb5dee4.jpg",
"original_size": [
512,
512
]
} |
what is the name of the waterfall on the boy's shirt? | ruby falls | /content/drive/MyDrive/TextVQA_Dataset/train_images/c487daa40104223f.jpg | 512 | 384 | [
[
139,
281,
186,
305
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the name of the waterfall on the boy's shirt? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
139,
281,
186,
305
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/c487daa40104223f.jpg",
"original_size": [
512,
384
]
} |
what month is it? | september | /content/drive/MyDrive/TextVQA_Dataset/train_images/217f3013d8094546.jpg | 512 | 512 | [
[
313,
209,
355,
220
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what month is it? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
313,
209,
355,
220
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/217f3013d8094546.jpg",
"original_size": [
512,
512
]
} |
what is the first word in big bold print? | sunkforort | /content/drive/MyDrive/TextVQA_Dataset/train_images/e04cf5ffea39dde2.jpg | 512 | 512 | [
[
5,
312,
349,
376
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the first word in big bold print? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
5,
312,
349,
376
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/e04cf5ffea39dde2.jpg",
"original_size": [
512,
512
]
} |
what brand of phone is this? | htc | /content/drive/MyDrive/TextVQA_Dataset/train_images/0c4c3faf024ccab9.jpg | 512 | 512 | [
[
252,
124,
362,
203
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what brand of phone is this? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
252,
124,
362,
203
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/0c4c3faf024ccab9.jpg",
"original_size": [
512,
512
]
} |
what kind of storage is this? | hybrid | /content/drive/MyDrive/TextVQA_Dataset/train_images/e6a1e3f0fdeb7802.jpg | 512 | 384 | [
[
157,
288,
183,
296
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what kind of storage is this? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
157,
288,
183,
296
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/e6a1e3f0fdeb7802.jpg",
"original_size": [
512,
384
]
} |
what is the name of this player? | cathcart | /content/drive/MyDrive/TextVQA_Dataset/train_images/772de80db5a231bd.jpg | 512 | 338 | [
[
213,
126,
278,
151
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the name of this player? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
213,
126,
278,
151
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/772de80db5a231bd.jpg",
"original_size": [
512,
338
]
} |
what is the brand of this watch? | casio | /content/drive/MyDrive/TextVQA_Dataset/train_images/2495f9e929f1036b.jpg | 384 | 512 | [
[
156,
332,
197,
351
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the brand of this watch? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
156,
332,
197,
351
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/2495f9e929f1036b.jpg",
"original_size": [
384,
512
]
} |
what kind of wing is described? | bombardment | /content/drive/MyDrive/TextVQA_Dataset/train_images/a5b4e62640c846df.jpg | 384 | 512 | [
[
168,
186,
244,
222
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what kind of wing is described? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
168,
186,
244,
222
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/a5b4e62640c846df.jpg",
"original_size": [
384,
512
]
} |
what does the top left key say? | esc | /content/drive/MyDrive/TextVQA_Dataset/train_images/c10e78bdb4457b0a.jpg | 512 | 512 | [
[
259,
123,
293,
135
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what does the top left key say? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
259,
123,
293,
135
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/c10e78bdb4457b0a.jpg",
"original_size": [
512,
512
]
} |
what number is wright? | 5 | /content/drive/MyDrive/TextVQA_Dataset/train_images/4309b1f0bb7bfb23.jpg | 512 | 384 | [
[
261,
168,
290,
222
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what number is wright? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
261,
168,
290,
222
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/4309b1f0bb7bfb23.jpg",
"original_size": [
512,
384
]
} |
what airline sponsors this stadium? | delta | /content/drive/MyDrive/TextVQA_Dataset/train_images/03c879ba6da8f33f.jpg | 512 | 383 | [
[
109,
101,
158,
112
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what airline sponsors this stadium? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
109,
101,
158,
112
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/03c879ba6da8f33f.jpg",
"original_size": [
512,
383
]
} |
what is his name? | magic | /content/drive/MyDrive/TextVQA_Dataset/train_images/aa6d80bb79b4ec4c.jpg | 377 | 512 | [
[
57,
346,
227,
418
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is his name? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
57,
346,
227,
418
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/aa6d80bb79b4ec4c.jpg",
"original_size": [
377,
512
]
} |
what does it say the word of god is? | alive | /content/drive/MyDrive/TextVQA_Dataset/train_images/f3be1dd0f0f0e017.jpg | 512 | 384 | [
[
411,
47,
459,
74
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what does it say the word of god is? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
411,
47,
459,
74
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/f3be1dd0f0f0e017.jpg",
"original_size": [
512,
384
]
} |
what does command-q equal? | quit | /content/drive/MyDrive/TextVQA_Dataset/train_images/84bd4fe3b5bb7020.jpg | 512 | 384 | [
[
399,
16,
477,
54
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what does command-q equal? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
399,
16,
477,
54
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/84bd4fe3b5bb7020.jpg",
"original_size": [
512,
384
]
} |
what does the blue sign say in the right corner? | dynamic | /content/drive/MyDrive/TextVQA_Dataset/train_images/905820e47ccb348e.jpg | 384 | 512 | [
[
334,
98,
360,
113
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what does the blue sign say in the right corner? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
334,
98,
360,
113
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/905820e47ccb348e.jpg",
"original_size": [
384,
512
]
} |
what is the name of the production company? | she-devil productions | /content/drive/MyDrive/TextVQA_Dataset/train_images/fbca145b0f700d6e.jpg | 512 | 342 | [
[
32,
233,
130,
275
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what is the name of the production company? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
32,
233,
130,
275
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/fbca145b0f700d6e.jpg",
"original_size": [
512,
342
]
} |
what kind of corps is this girl representing? | ladies drum | /content/drive/MyDrive/TextVQA_Dataset/train_images/99b4b12cac727d74.jpg | 512 | 400 | [
[
414,
239,
458,
266
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what kind of corps is this girl representing? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
414,
239,
458,
266
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/99b4b12cac727d74.jpg",
"original_size": [
512,
400
]
} |
which airline is theat plane for? | singapore | /content/drive/MyDrive/TextVQA_Dataset/train_images/faeca4c50deed4a4.jpg | 512 | 341 | [
[
164,
177,
220,
185
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>which airline is theat plane for? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
164,
177,
220,
185
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/faeca4c50deed4a4.jpg",
"original_size": [
512,
341
]
} |
what year is on the advert? | 2012 | /content/drive/MyDrive/TextVQA_Dataset/train_images/3736f81be7095f5e.jpg | 341 | 512 | [
[
122,
56,
149,
68
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what year is on the advert? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
122,
56,
149,
68
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/3736f81be7095f5e.jpg",
"original_size": [
341,
512
]
} |
who is the artist on the billboard? | beck | /content/drive/MyDrive/TextVQA_Dataset/train_images/b6d5d47d85d7f53e.jpg | 512 | 341 | [
[
191,
147,
304,
178
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>who is the artist on the billboard? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
191,
147,
304,
178
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/b6d5d47d85d7f53e.jpg",
"original_size": [
512,
341
]
} |
what state is this license plate from? | virginia | /content/drive/MyDrive/TextVQA_Dataset/train_images/f484d1b789e6ed63.jpg | 512 | 384 | [
[
118,
77,
373,
130
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what state is this license plate from? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
118,
77,
373,
130
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/f484d1b789e6ed63.jpg",
"original_size": [
512,
384
]
} |
what address number is this store? | 57 | /content/drive/MyDrive/TextVQA_Dataset/train_images/e271d16f9a60e06e.jpg | 512 | 384 | [
[
278,
85,
302,
103
]
] | textvqa | train | <|im_start|>system
You are a Vision Language Model specialized in visual grounding. Provide bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>what address number is this store? Show reasoning in <think> tags. Return bounding box in <answer> [x1, y1, x2, y2] </answer>.<|im_end|>
<|im_start|>assistant
Analyzing image...
<think><|im_end|>
<|im_start|>assistant
| [
[
278,
85,
302,
103
]
] | {
"image_path": "/content/drive/MyDrive/TextVQA_Dataset/train_images/e271d16f9a60e06e.jpg",
"original_size": [
512,
384
]
} |
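
Every target in the preview above follows the same completion format: reasoning inside `<think>` tags, then a pixel-space box in `<answer> [x1, y1, x2, y2] </answer>`. A minimal parsing sketch in Python — the tag layout is taken directly from the prompts shown here, while the whitespace tolerance and the helper name `parse_bbox` are illustrative assumptions, not part of the dataset:

```python
import re

# Completions in this dataset are expected to end with a bounding box in
# "<answer> [x1, y1, x2, y2] </answer>" form; this helper pulls the four
# integers back out. The tag format comes from the prompts in the preview;
# tolerating extra whitespace inside the brackets is an assumption.
ANSWER_RE = re.compile(
    r"<answer>\s*\[\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\]\s*</answer>"
)

def parse_bbox(completion: str):
    """Return (x1, y1, x2, y2) from a model completion, or None if absent."""
    m = ANSWER_RE.search(completion)
    if m is None:
        return None
    return tuple(int(g) for g in m.groups())

# Example with a completion shaped like the targets in the preview:
print(parse_bbox("... </think> <answer> [245, 65, 280, 76] </answer>"))
# -> (245, 65, 280, 76)
```

The parsed coordinates can then be compared against the `bboxs`/`target` column for that row; whether they are expressed relative to the stored image or to `original_size` in `metadata` is not stated in the preview, so verify before computing IoU.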