---
datasets: lh9171338/Wireframe
pretty_name: Wireframe Dataset
license: mit
tags:
- computer-vision
- line-segment-detection
- wireframe-parsing
size_categories:
- 1K<n<10K
---
# Wireframe Dataset
This is the Wireframe dataset hosted on the Hugging Face Hub.
## Summary
The Wireframe dataset contains images annotated with line segments.
The dataset is stored as JSON Lines metadata files (`train/metadata.jsonl`, `test/metadata.jsonl`) plus the corresponding images.
Number of samples:
- Train: 5,000
- Test: 462
## Download
- Download with huggingface-hub

```shell
python3 -m pip install huggingface-hub
huggingface-cli download --repo-type dataset lh9171338/Wireframe --local-dir ./
```

- Download with Git

```shell
git lfs install
git clone https://huggingface.co/datasets/lh9171338/Wireframe
```
## Usage
- Load the dataset from the Hugging Face Hub

```python
from datasets import load_dataset

ds = load_dataset("lh9171338/Wireframe")
# or load from `refs/convert/parquet` for faster loading
# from datasets import load_dataset, Features, Image, Sequence, Value
# features = Features({
#     "image": Image(),
#     "image_file": Value("string"),
#     "image_size": Sequence(Value("int32")),
#     "lines": Sequence(Sequence(Sequence(Value("float32")))),
# })
# ds = load_dataset("lh9171338/Wireframe", features=features, revision="refs/convert/parquet")

print(ds)
# DatasetDict({
#     train: Dataset({
#         features: ['image', 'image_file', 'image_size', 'lines'],
#         num_rows: 5000
#     })
#     test: Dataset({
#         features: ['image', 'image_file', 'image_size', 'lines'],
#         num_rows: 462
#     })
# })

print(ds["train"][0].keys())
# dict_keys(['image', 'image_file', 'image_size', 'lines'])
```
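Each sample's `lines` field is nested three levels deep, which suggests a layout of segments, then endpoints, then coordinates. The following sketch works with a mock sample under that assumption (the file name and coordinates are hypothetical, not taken from the dataset):

```python
import math

# Mock sample mirroring the schema above; the nested `lines` layout
# (segments -> endpoints -> (x, y)) is an assumption inferred from the
# Sequence(Sequence(Sequence(float32))) feature type.
sample = {
    "image_file": "00000000.png",  # hypothetical file name
    "image_size": [640, 480],
    "lines": [
        [[10.0, 20.0], [110.0, 20.0]],   # horizontal segment
        [[50.0, 60.0], [50.0, 160.0]],   # vertical segment
    ],
}

def segment_lengths(lines):
    """Euclidean length of each line segment."""
    return [math.dist(p1, p2) for p1, p2 in lines]

print(segment_lengths(sample["lines"]))  # [100.0, 100.0]
```

The same function can be applied to a real sample, e.g. `segment_lengths(ds["train"][0]["lines"])`, once the dataset is loaded.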
- Load the dataset from a local directory

```python
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir=".")
print(ds)
# DatasetDict({
#     train: Dataset({
#         features: ['image', 'image_file', 'image_size', 'lines'],
#         num_rows: 5000
#     })
#     test: Dataset({
#         features: ['image', 'image_file', 'image_size', 'lines'],
#         num_rows: 462
#     })
# })

print(ds["train"][0].keys())
# dict_keys(['image', 'image_file', 'image_size', 'lines'])
```
- Load the dataset with the jsonl files

```python
import jsonlines

with jsonlines.open("train/metadata.jsonl") as reader:
    infos = list(reader)

print(infos[0].keys())
# dict_keys(['file_name', 'image_file', 'image_size', 'lines'])
```
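The metadata files can also be read with only the standard library, since JSON Lines is one JSON object per line. The sketch below writes a single hypothetical record (its values are illustrative, not from the dataset) and reads it back:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical record following the keys printed above.
record = {
    "file_name": "images/00000000.png",
    "image_file": "00000000.png",
    "image_size": [640, 480],
    "lines": [[[10.0, 20.0], [110.0, 20.0]]],
}

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "metadata.jsonl"
    # One JSON object per line, as in train/metadata.jsonl.
    path.write_text(json.dumps(record) + "\n")

    # Parse each line with the standard json module instead of `jsonlines`.
    infos = [json.loads(line) for line in path.read_text().splitlines()]

print(sorted(infos[0].keys()))
# ['file_name', 'image_file', 'image_size', 'lines']
```

This avoids the extra `jsonlines` dependency when you only need to read the annotations.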