# TextVQA (Lance Format)
Lance-formatted version of TextVQA (VQA where the question requires reading text in the image), sourced from [lmms-lab/textvqa](https://huggingface.co/datasets/lmms-lab/textvqa).
Each row carries the image bytes, the question, the 10 reference answers, the OCR tokens detected by the dataset's pre-processing, and CLIP image + question embeddings.
## Splits

| Split | Rows |
|---|---|
| `validation.lance` | 5,000 |
| `train.lance` | 34,602 |
## Schema

| Column | Type | Notes |
|---|---|---|
| `id` | `int64` | Row index within split |
| `image` | `large_binary` | Inline JPEG bytes |
| `image_id` | `string?` | TextVQA image id |
| `question_id` | `string?` | TextVQA question id |
| `question` | `string` | The question text |
| `answers` | `list<string>` | 10 annotator answers |
| `answer` | `string` | First answer, used as canonical / FTS target |
| `ocr_tokens` | `list<string>` | OCR tokens detected on the image |
| `image_classes` | `list<string>` | OpenImages-style scene tags from the source |
| `set_name` | `string?` | Source partition (`train`, `val`) |
| `image_emb` | `fixed_size_list<float32, 512>` | OpenCLIP image embedding (cosine-normalized) |
| `question_emb` | `fixed_size_list<float32, 512>` | OpenCLIP text embedding of the question |
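As a quick sketch of consuming this schema (column names come from the table above; decoding the inline JPEG with Pillow is an assumption about your environment):

```python
import io

import lance
from PIL import Image  # assumes Pillow is installed for JPEG decoding

ds = lance.dataset("hf://datasets/lance-format/textvqa-lance/data/validation.lance")

# Fetch one row with only the columns we need
row = ds.take([0], columns=["image", "question", "answers"]).to_pylist()[0]

img = Image.open(io.BytesIO(row["image"]))  # `image` holds inline JPEG bytes
print(img.size, row["question"], row["answers"])
```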
## Pre-built indices

- `IVF_PQ` on `image_emb` and `question_emb` (metric: `cosine`)
- `INVERTED` (FTS) on `question` and `answer`
- `BTREE` on `image_id`, `question_id`, and `set_name`
## Quick start

```python
import lance

ds = lance.dataset("hf://datasets/lance-format/textvqa-lance/data/validation.lance")
print(ds.count_rows(), ds.schema.names, ds.list_indices())
```
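Beyond counting rows and listing indices, here is a hedged sketch of exercising the pre-built scalar and full-text indices. The `full_text_query` scanner argument assumes a recent pylance release, and the query string and filter values are illustrative:

```python
import lance

ds = lance.dataset("hf://datasets/lance-format/textvqa-lance/data/validation.lance")

# Full-text search served by the INVERTED index on `question` / `answer`
fts_hits = ds.scanner(
    full_text_query="stop sign",        # illustrative query
    columns=["question", "answer"],
    limit=5,
).to_table().to_pylist()

# SQL-style filter that the BTREE index on `set_name` can accelerate
val_only = ds.scanner(filter="set_name = 'val'", columns=["question_id"]).to_table()
print(len(fts_hits), val_only.num_rows)
```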
## Cross-modal text→image search

```python
import lance
import open_clip
import pyarrow as pa
import torch

# Embed the query with the same OpenCLIP model used to build `image_emb`
model, _, _ = open_clip.create_model_and_transforms("ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model = model.eval().cuda().half()

with torch.no_grad():
    q = model.encode_text(tokenizer(["what brand is on this billboard?"]).cuda())
    q = (q / q.norm(dim=-1, keepdim=True)).float().cpu().numpy()[0]

ds = lance.dataset("hf://datasets/lance-format/textvqa-lance/data/validation.lance")
emb_field = ds.schema.field("image_emb")
hits = ds.scanner(
    nearest={"column": "image_emb", "q": pa.array([q.tolist()], type=emb_field.type)[0], "k": 10},
    columns=["question", "answer", "ocr_tokens"],
).to_table().to_pylist()
```
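Two details worth noting: the query vector is normalized the same way as the stored `image_emb` vectors, so the `IVF_PQ` index's cosine metric ranks hits as expected, and casting the query through `emb_field.type` keeps its dtype and dimension (`fixed_size_list<float32, 512>`) aligned with the indexed column.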
## Why Lance?

- One dataset for images + questions + answers + OCR + dual embeddings + indices, with no sidecar JSON/feature folders.
- Cross-modal search and OCR-text filtering work on the same dataset, directly on the Hub.
- Schema evolution: add columns (alternate OCR systems, model predictions) without rewriting the data (sketched below).
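A minimal sketch of that last point, assuming a local (writable) copy of a split and a pylance version with SQL-expression `add_columns`; the `answer_len` column is purely illustrative:

```python
import lance

# hf:// paths are read-only, so operate on a local copy of the split
ds = lance.dataset("./validation.lance")

# Append a derived column computed from existing data; the original
# data files are left in place rather than rewritten.
ds.add_columns({"answer_len": "length(answer)"})
print(ds.schema.names)  # now includes "answer_len"
```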
## Source & license

Converted from [lmms-lab/textvqa](https://huggingface.co/datasets/lmms-lab/textvqa). TextVQA is released under CC BY 4.0 by Singh et al. (Facebook AI Research).
## Citation

```bibtex
@inproceedings{singh2019towards,
  title={Towards VQA models that can read},
  author={Singh, Amanpreet and Natarajan, Vivek and Shah, Meet and Jiang, Yu and Chen, Xinlei and Batra, Dhruv and Parikh, Devi and Rohrbach, Marcus},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019}
}
```