---
datasets: lh9171338/Wireframe
pretty_name: Wireframe Dataset
license: mit
tags:
- computer-vision
- line-segment-detection
- wireframe-parsing
size_categories:
- 1K<n<10K
---

# Wireframe Dataset

This is the [Wireframe dataset](https://github.com/huangkuns/wireframe) hosted on Hugging Face Hub.

## Summary

The Wireframe dataset contains images annotated with line segments.  
The annotations are stored as JSON-Lines files (`train/metadata.jsonl`, `test/metadata.jsonl`) alongside the images.

Number of samples:
- Train: 5,000
- Test: 462
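The `lines` annotation of a sample can be drawn directly onto its image. A minimal sketch with Pillow, assuming each entry of `lines` is one segment stored as two endpoints `[[x1, y1], [x2, y2]]` in pixel coordinates (inferred from the `Sequence(Sequence(Sequence(float32)))` feature type below, not confirmed); a placeholder image and hand-made segments stand in for a real sample:

```python
from PIL import Image, ImageDraw

# Placeholder sample; with the real dataset, use ds["train"][0] instead.
sample = {
    "image": Image.new("RGB", (128, 128), "white"),
    "lines": [[[10.0, 10.0], [100.0, 40.0]], [[20.0, 90.0], [110.0, 90.0]]],
}

img = sample["image"].copy()
draw = ImageDraw.Draw(img)
# Draw each segment between its two endpoints.
for (x1, y1), (x2, y2) in sample["lines"]:
    draw.line([(x1, y1), (x2, y2)], fill="red", width=2)
```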

## Download

- Download with huggingface-hub

```shell
python3 -m pip install huggingface-hub
huggingface-cli download --repo-type dataset lh9171338/Wireframe --local-dir ./
```

- Download with Git

```shell
git lfs install
git clone https://huggingface.co/datasets/lh9171338/Wireframe
```

## Usage

- Load the dataset from Hugging Face Hub

```python
from datasets import load_dataset

ds = load_dataset("lh9171338/Wireframe")
# or load from the `refs/convert/parquet` branch for faster loading
# from datasets import load_dataset, Features, Image, Sequence, Value
# features = Features({
#     "image": Image(),
#     "image_file": Value("string"),
#     "image_size": Sequence(Value("int32")),
#     "lines": Sequence(Sequence(Sequence(Value("float32")))),
# })
# ds = load_dataset("lh9171338/Wireframe", features=features, revision="refs/convert/parquet")
print(ds)
# DatasetDict({
#     train: Dataset({
#         features: ['image', 'image_file', 'image_size', 'lines'],
#         num_rows: 5000
#     })
#     test: Dataset({
#         features: ['image', 'image_file', 'image_size', 'lines'],
#         num_rows: 462
#     })
# })
print(ds["train"][0].keys())
# dict_keys(['image', 'image_file', 'image_size', 'lines'])
```

- Load the dataset from a local copy

```python
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir=".")
print(ds)
# DatasetDict({
#     train: Dataset({
#         features: ['image', 'image_file', 'image_size', 'lines'],
#         num_rows: 5000
#     })
#     test: Dataset({
#         features: ['image', 'image_file', 'image_size', 'lines'],
#         num_rows: 462
#     })
# })
print(ds["train"][0].keys())
# dict_keys(['image', 'image_file', 'image_size', 'lines'])
```

- Load the dataset from the jsonl files

```python
import jsonlines  # third-party: pip install jsonlines

with jsonlines.open("train/metadata.jsonl") as reader:
    infos = list(reader)
print(infos[0].keys())
# dict_keys(['file_name', 'image_file', 'image_size', 'lines'])
```
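If you prefer not to add the `jsonlines` dependency, the same files can be read with only the standard library, since each line is one JSON object (a sketch; the path is only valid after downloading the dataset):

```python
import json

def read_jsonl(path):
    """Read a JSON-Lines file into a list of dicts, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# infos = read_jsonl("train/metadata.jsonl")  # requires the downloaded dataset
```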

## Viewer

[Open in Space](https://huggingface.co/spaces/lh9171338/LineViewer)