Srushti Hirve committed
Commit 5425c8b · 1 Parent(s): 157df53

Add VisionReasoner UI dataset with 245 samples

Files changed (3):
  1. README.md +36 -7
  2. dataset-card.md +123 -0
  3. demo.py +23 -10
README.md CHANGED
@@ -19,15 +19,23 @@ task_ids:
 
 This dataset contains user interface (UI) images along with associated annotation prompts and solutions for fine-tuning the VisionReasoner model.
 
+## Dataset Description
+
+- **Size**: 245 samples
+- **Task**: Image Segmentation (Semantic Segmentation)
+- **Language**: English
+- **License**: MIT
+
 ## Structure
 
-- `images/`: Folder containing UI images (`.webp` format).
-- `visionreasoner_dataset.parquet`: A `.parquet` file with metadata such as:
-  - Prompt (`problem`)
-  - Segmentation solution (`solution`)
-  - Image file reference
-  - Image height
-- `demo.py`: A custom dataset loading script using the Hugging Face `datasets` library.
+- `images/`: Folder containing UI images (`.webp` format)
+- `visionreasoner_dataset.parquet`: Metadata file containing:
+  - `id`: Unique identifier for each sample
+  - `problem`: Annotation prompt describing the UI element to segment
+  - `solution`: JSON-formatted segmentation solution with bounding boxes and points
+  - `image`: Reference to the image file
+  - `img_height`: Image height in pixels
+  - `img_width`: Image width in pixels
 
 ## Usage
 
@@ -38,3 +46,24 @@ from datasets import load_dataset
 
 dataset = load_dataset("shirve13/Demo", trust_remote_code=True)
 print(dataset["train"][0])
+```
+
+## Dataset Loading Script
+
+The dataset uses a custom loading script (`demo.py`) that:
+- Loads metadata from the parquet file
+- Handles image paths correctly
+- Provides proper dataset features for Hugging Face compatibility
+
+## Citation
+
+If you use this dataset in your research, please cite:
+
+```bibtex
+@dataset{visionreasoner_ui_dataset,
+  title={VisionReasoner UI Dataset},
+  author={shirve13},
+  year={2024},
+  url={https://huggingface.co/datasets/shirve13/Demo}
+}
+```
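The README above describes `solution` as a JSON-formatted string with bounding boxes and points. A minimal sketch of decoding such a field is shown below; the key names `bbox` and `points`, and the `(x1, y1, x2, y2)` box convention, are assumptions for illustration — the card does not pin down the exact schema.

```python
import json

# Hypothetical `solution` string; the keys "bbox" and "points" are
# assumed for illustration, not taken from the actual dataset.
solution_str = '{"bbox": [10, 20, 110, 60], "points": [[60, 40]]}'

def parse_solution(solution: str) -> dict:
    """Decode a JSON-formatted segmentation solution and sanity-check it."""
    parsed = json.loads(solution)
    x1, y1, x2, y2 = parsed["bbox"]
    # A well-formed box has positive width and height.
    assert x2 > x1 and y2 > y1
    return parsed

print(parse_solution(solution_str)["bbox"])  # [10, 20, 110, 60]
```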
dataset-card.md ADDED
@@ -0,0 +1,123 @@
+---
+annotations_creators:
+- user-generated
+language:
+- en
+language_creators:
+- user-generated
+license:
+- mit
+multilinguality:
+- monolingual
+size_categories:
+- n<1K
+source_datasets:
+- original
+task_categories:
+- image-segmentation
+task_ids:
+- semantic-segmentation
+paperswithcode_id: null
+configs:
+- config_name: default
+  data_files:
+  - split: train
+    path: visionreasoner_dataset.parquet
+  default: true
+---
+
+# Dataset Card for VisionReasoner UI Dataset
+
+## Dataset Description
+
+- **Repository:** [https://huggingface.co/datasets/shirve13/Demo](https://huggingface.co/datasets/shirve13/Demo)
+- **Paper:** N/A
+- **Point of Contact:** [shirve13](https://huggingface.co/shirve13)
+
+### Dataset Summary
+
+The VisionReasoner UI Dataset contains user interface (UI) images along with associated annotation prompts and solutions for fine-tuning the VisionReasoner model. This dataset is designed for semantic segmentation tasks on UI elements.
+
+### Supported Tasks and Leaderboards
+
+- **Semantic Segmentation**: The dataset is designed for segmenting UI elements based on natural language prompts.
+
+### Languages
+
+The dataset is in English.
+
+## Dataset Structure
+
+### Data Instances
+
+Each instance contains:
+- `id`: Unique identifier for the sample
+- `problem`: Natural language prompt describing the UI element to segment
+- `solution`: JSON-formatted segmentation solution with bounding boxes and points
+- `image`: UI image in WebP format
+- `img_height`: Image height in pixels
+- `img_width`: Image width in pixels
+
+### Data Fields
+
+- `id` (string): Unique identifier for each sample
+- `problem` (string): Annotation prompt describing the UI element to segment
+- `solution` (string): JSON-formatted segmentation solution
+- `image` (image): UI image file
+- `img_height` (int32): Image height in pixels
+- `img_width` (int32): Image width in pixels
+
+### Data Splits
+
+- Train: 245 samples
+
+## Dataset Creation
+
+### Source Data
+
+#### Initial Data Collection and Normalization
+
+The dataset was created by collecting UI screenshots and annotating them with segmentation prompts and solutions.
+
+#### Who are the source language producers?
+
+The dataset was created by the author for research purposes.
+
+### Annotations
+
+#### Annotation process
+
+UI elements were manually annotated with bounding boxes and segmentation masks based on natural language descriptions.
+
+#### Who are the annotators?
+
+The annotations were created by the dataset author.
+
+### Personal and Sensitive Information
+
+The dataset contains UI screenshots but does not contain personal or sensitive information.
+
+## Additional Information
+
+### Dataset Curators
+
+The dataset was curated by shirve13.
+
+### Licensing Information
+
+This dataset is licensed under the MIT License.
+
+### Citation Information
+
+```bibtex
+@dataset{visionreasoner_ui_dataset,
+  title={VisionReasoner UI Dataset},
+  author={shirve13},
+  year={2024},
+  url={https://huggingface.co/datasets/shirve13/Demo}
+}
+```
+
+### Contributions
+
+Thanks to the Hugging Face community for providing the platform to share this dataset.
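The Data Fields section above fixes a flat schema of six typed fields. As a quick standard-library sanity check, a row can be validated against that schema; the example row below is invented for illustration and is not a real sample from the dataset.

```python
# Expected fields and Python-side types, mirroring the Data Fields section.
# Before decoding, the image is a file reference (a path string).
REQUIRED_FIELDS = {
    "id": str,
    "problem": str,
    "solution": str,
    "image": str,
    "img_height": int,
    "img_width": int,
}

def validate_row(row: dict) -> bool:
    """Return True if every documented field is present with the right type."""
    return all(isinstance(row.get(k), t) for k, t in REQUIRED_FIELDS.items())

# Invented example row, for illustration only.
row = {
    "id": "ui_img_10",
    "problem": "segment the login button",
    "solution": '{"bbox": [0, 0, 10, 10]}',
    "image": "images/ui_img_10.webp",
    "img_height": 800,
    "img_width": 1280,
}
print(validate_row(row))  # True
```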
demo.py CHANGED
@@ -1,5 +1,5 @@
-import datasets
 import os
+import datasets
 
 class Demo(datasets.GeneratorBasedBuilder):
     def _info(self):
@@ -8,27 +8,40 @@ class Demo(datasets.GeneratorBasedBuilder):
                 "id": datasets.Value("string"),
                 "problem": datasets.Value("string"),
                 "solution": datasets.Value("string"),
-                "image": datasets.Image(),
+                "image": datasets.Image(),  # Enables image previews
                 "img_height": datasets.Value("int32"),
+                "img_width": datasets.Value("int32"),
             }),
         )
 
     def _split_generators(self, dl_manager):
+        # Use the current directory where demo.py is located
+        base_path = os.path.dirname(os.path.abspath(__file__))
+        parquet_path = os.path.join(base_path, "visionreasoner_dataset.parquet")
+
         return [
            datasets.SplitGenerator(
                 name=datasets.Split.TRAIN,
-                gen_kwargs={"filepath": "visionreasoner_dataset.parquet"},
+                gen_kwargs={"filepath": parquet_path, "base_path": base_path}
             ),
         ]
 
-    def _generate_examples(self, filepath):
+    def _generate_examples(self, filepath, base_path):
         import pandas as pd
         df = pd.read_parquet(filepath)
+
         for idx, row in df.iterrows():
+            # Handle image path correctly
+            if isinstance(row["image"], dict) and "path" in row["image"]:
+                image_path = os.path.join(base_path, row["image"]["path"])
+            else:
+                image_path = str(row["image"])
+
             yield idx, {
-                "id": row["id"],
-                "problem": row["problem"],
-                "solution": row["solution"],
-                "image": row["image"]["path"],  # e.g., images/ui_img_10.webp
-                "img_height": row["img_height"],
-            }
+                "id": str(row["id"]),
+                "problem": str(row["problem"]),
+                "solution": str(row["solution"]),
+                "image": image_path,  # Full path to the image
+                "img_height": int(row["img_height"]),
+                "img_width": int(row["img_width"]),
+            }
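The image-path branch added to `_generate_examples` can be exercised in isolation. The sketch below reproduces the same branching logic as a standalone function (it is not imported from `demo.py`): parquet rows may store the image either as a dict with a relative `"path"` entry or as a plain path string.

```python
import os

def resolve_image_path(image_field, base_path):
    """Mirror of the path handling in demo.py's _generate_examples."""
    if isinstance(image_field, dict) and "path" in image_field:
        # Relative path inside the repo: join against the script's directory.
        return os.path.join(base_path, image_field["path"])
    # Anything else is treated as an already-usable path string.
    return str(image_field)

print(resolve_image_path({"path": "images/ui_img_10.webp"}, "/data"))
print(resolve_image_path("images/ui_img_10.webp", "/data"))
```

Resolving relative to `os.path.dirname(os.path.abspath(__file__))`, as the new `_split_generators` does, keeps the script working regardless of the caller's working directory.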