Add files using upload-large-folder tool
- README.md +83 -0
- split_dataset.py +111 -0
- test/0_18.jpg +3 -0
- test/0_9.jpg +3 -0
- test/107_6.jpg +3 -0
- test/118_1.jpg +3 -0
- test/120_17.jpg +3 -0
- test/144_1.jpg +3 -0
- test/14_2.jpg +3 -0
- test/153_13.jpg +3 -0
- test/159_5.jpg +3 -0
- test/16_0.jpg +3 -0
- test/184_0.jpg +3 -0
- test/188_10.jpg +3 -0
- test/195_8.jpg +3 -0
- test/215_15.jpg +3 -0
- test/217_11.jpg +3 -0
- test/231_14.jpg +3 -0
- test/235_8.jpg +3 -0
- test/261_18.jpg +3 -0
- test/270_9.jpg +3 -0
- test/285_0.jpg +3 -0
- test/287_13.jpg +3 -0
- test/295_12.jpg +3 -0
- test/302_6.jpg +3 -0
- test/324_4.jpg +3 -0
- test/361_18.jpg +3 -0
- test/361_19.jpg +3 -0
- test/367_2.jpg +3 -0
- test/378_4.jpg +3 -0
- test/385_5.jpg +3 -0
- test/387_12.jpg +3 -0
- test/387_13.jpg +3 -0
- test/397_16.jpg +3 -0
- test/401_11.jpg +3 -0
- test/413_10.jpg +3 -0
- test/414_0.jpg +3 -0
- test/418_19.jpg +3 -0
- test/436_7.jpg +3 -0
- test/437_11.jpg +3 -0
- test/440_8.jpg +3 -0
- test/455_5.jpg +3 -0
- test/45_16.jpg +3 -0
- test/477_3.jpg +3 -0
- test/493_3.jpg +3 -0
- test/499_9.jpg +3 -0
- test/59_13.jpg +3 -0
- test/97_18.jpg +3 -0
- test/metadata.jsonl +0 -0
- train/metadata.jsonl +0 -0
README.md
ADDED
@@ -0,0 +1,83 @@
---
license: mit
task_categories:
- image-segmentation
- object-detection
tags:
- P&ID
- lines
- pipelines
- engineering
- diagrams
- line-detection
size_categories:
- 1K<n<10K
---

# P&ID Line Detection Dataset

This dataset contains cropped images from P&IDs (Piping and Instrumentation Diagrams) with line segment annotations for line detection and segmentation tasks.

## Dataset Description

- **Total source images:** 500
- **Total cropped samples:** 10,000
- **Total line segments:** 44,754
- **Crops per image:** 20
- **Image sizes:** various sizes under 1000 px (e.g., 300x500, 512x768, 900x900)

## Dataset Structure

Each sample contains:

- `file_name`: Image filename
- `source_image_idx`: Index of the original P&ID image
- `crop_idx`: Index of this crop within the source image
- `width`: Crop width in pixels
- `height`: Crop height in pixels
- `lines`: Dictionary with:
  - `segments`: List of line segments as `[x1, y1, x2, y2]` (start and end points)
  - `line_types`: List of line types (`"solid"` or `"dashed"`)
  - `pipelines`: List of pipeline names, one per line
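
Concretely, one row of `metadata.jsonl` has this shape (the values below are made up for illustration, not taken from the dataset):

```json
{
  "file_name": "0_18.jpg",
  "source_image_idx": 0,
  "crop_idx": 18,
  "width": 512,
  "height": 768,
  "lines": {
    "segments": [[12, 40, 230, 40], [88, 10, 88, 300]],
    "line_types": ["solid", "dashed"],
    "pipelines": ["5\"-EK-2648", "5\"-EK-2648"]
  }
}
```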
+
## Usage
|
| 44 |
+
|
| 45 |
+
```python
|
| 46 |
+
from datasets import load_dataset
|
| 47 |
+
|
| 48 |
+
# Load the dataset
|
| 49 |
+
dataset = load_dataset("imagefolder", data_dir="path/to/lines_dataset")
|
| 50 |
+
|
| 51 |
+
# Access a sample
|
| 52 |
+
sample = dataset["train"][0]
|
| 53 |
+
image = sample["image"]
|
| 54 |
+
lines = sample["lines"]
|
| 55 |
+
segments = lines["segments"] # [[x1, y1, x2, y2], ...]
|
| 56 |
+
line_types = lines["line_types"] # ["solid", "dashed", ...]
|
| 57 |
+
pipelines = lines["pipelines"] # ["5\"-EK-2648", ...]
|
| 58 |
+
|
| 59 |
+
# Draw lines on image
|
| 60 |
+
from PIL import ImageDraw
|
| 61 |
+
draw = ImageDraw.Draw(image)
|
| 62 |
+
for seg in segments:
|
| 63 |
+
x1, y1, x2, y2 = seg
|
| 64 |
+
draw.line([(x1, y1), (x2, y2)], fill="blue", width=3)
|
| 65 |
+
image.show()
|
| 66 |
+
```
|
| 67 |
+
|
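
Since the dataset also targets `image-segmentation`, the segment annotations can be rasterized into a per-pixel mask. A minimal sketch using Pillow; `segments_to_mask` is a hypothetical helper, and the stroke width of 3 px is an arbitrary choice, not part of the dataset:

```python
from PIL import Image, ImageDraw

def segments_to_mask(width, height, segments, line_width=3):
    """Rasterize [x1, y1, x2, y2] segments into a binary mask (255 = line pixel)."""
    mask = Image.new("L", (width, height), 0)
    draw = ImageDraw.Draw(mask)
    for x1, y1, x2, y2 in segments:
        draw.line([(x1, y1), (x2, y2)], fill=255, width=line_width)
    return mask

mask = segments_to_mask(100, 100, [[10, 50, 90, 50]])
```

The mask can then be paired with the crop as a (image, mask) training example for a segmentation model.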

## Line Segment Format

Each line segment is represented as `[x1, y1, x2, y2]` where:

- `(x1, y1)` is the start point
- `(x2, y2)` is the end point
- Coordinates are in pixels, relative to the cropped image
- Only lines where **both endpoints** are fully inside the crop are included
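
The cropping code itself is not part of this repo, but the both-endpoints-inside rule can be sketched as follows (`segment_in_crop` is a hypothetical helper, not the dataset's actual tooling):

```python
def segment_in_crop(seg, crop_x, crop_y, crop_w, crop_h):
    """Keep a segment only if both endpoints fall inside the crop window,
    returning its coordinates relative to the crop origin (else None)."""
    x1, y1, x2, y2 = seg
    inside = all(
        crop_x <= x < crop_x + crop_w and crop_y <= y < crop_y + crop_h
        for x, y in ((x1, y1), (x2, y2))
    )
    if not inside:
        return None
    return [x1 - crop_x, y1 - crop_y, x2 - crop_x, y2 - crop_y]
```

Note that partially visible lines are dropped entirely rather than clipped to the crop boundary.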

## Line Types

- `solid`: Continuous pipeline lines
- `dashed`: Dashed lines (often representing signal/instrument lines)

## License

MIT License
split_dataset.py
ADDED
@@ -0,0 +1,111 @@
#!/usr/bin/env python3
"""
Split the lines dataset into train/validation/test sets (80/10/10)
and flatten the nested "lines" field in each metadata entry.
"""

import json
import random
import shutil
from pathlib import Path

# Set random seed for reproducibility
random.seed(42)

# Define paths
BASE_DIR = Path("/Users/prasatee/Desktop/unsloth/DigitizePID_Dataset/lines_dataset")
TRAIN_DIR = BASE_DIR / "train"
VAL_DIR = BASE_DIR / "validation"
TEST_DIR = BASE_DIR / "test"

# Read current metadata (everything starts in train/)
metadata_path = TRAIN_DIR / "metadata.jsonl"
data = []

print("Reading metadata...")
with open(metadata_path, "r") as f:
    for line in f:
        entry = json.loads(line.strip())
        # Transform to new format (flatten the "lines" field)
        new_entry = {
            "file_name": entry["file_name"],
            "source_image_idx": entry["source_image_idx"],
            "crop_idx": entry["crop_idx"],
            "width": entry["width"],
            "height": entry["height"],
            "segments": entry["lines"]["segments"],
            "line_types": entry["lines"]["line_types"],
            "pipelines": entry["lines"]["pipelines"],
        }
        data.append(new_entry)

print(f"Total entries: {len(data)}")

# Shuffle data
random.shuffle(data)

# Calculate split sizes (the remainder after train/val goes to test)
total = len(data)
train_size = int(0.8 * total)
val_size = int(0.1 * total)

train_data = data[:train_size]
val_data = data[train_size:train_size + val_size]
test_data = data[train_size + val_size:]

print(f"Train: {len(train_data)}, Validation: {len(val_data)}, Test: {len(test_data)}")

# Create directories
VAL_DIR.mkdir(exist_ok=True)
TEST_DIR.mkdir(exist_ok=True)

print("\nMoving files...")

# Move validation files
print("Processing validation set...")
for entry in val_data:
    src = TRAIN_DIR / entry["file_name"]
    dst = VAL_DIR / entry["file_name"]
    if src.exists():
        shutil.move(str(src), str(dst))

# Move test files
print("Processing test set...")
for entry in test_data:
    src = TRAIN_DIR / entry["file_name"]
    dst = TEST_DIR / entry["file_name"]
    if src.exists():
        shutil.move(str(src), str(dst))

# Write new metadata files for each split
print("\nWriting metadata files...")
splits = [(TRAIN_DIR, train_data), (VAL_DIR, val_data), (TEST_DIR, test_data)]
for split_dir, split_data in splits:
    with open(split_dir / "metadata.jsonl", "w") as f:
        for entry in split_data:
            f.write(json.dumps(entry) + "\n")

print("\nDone!")
print(f"Train set: {len(train_data)} samples in {TRAIN_DIR}")
print(f"Validation set: {len(val_data)} samples in {VAL_DIR}")
print(f"Test set: {len(test_data)} samples in {TEST_DIR}")

# Verify first entry format
print("\nSample entry format:")
print(json.dumps(train_data[0], indent=2))
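
The split arithmetic in `split_dataset.py` truncates the train and validation fractions with `int()` and gives the remainder to the test split, so the three sizes always sum to the total even when it is not divisible by 10. A small standalone sketch of that invariant:

```python
def split_sizes(total, train_frac=0.8, val_frac=0.1):
    # int() truncates, so any rounding remainder lands in the test split.
    train = int(train_frac * total)
    val = int(val_frac * total)
    test = total - train - val
    return train, val, test

print(split_sizes(10000))  # (8000, 1000, 1000)
```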
test/*.jpg (46 images)
ADDED
(stored as Git LFS pointers; individual files are listed in the change summary above)
test/metadata.jsonl
ADDED
(diff too large to render)

train/metadata.jsonl
ADDED
(diff too large to render)