SagarKeshave committed on
Commit 4037f15 · verified · 1 Parent(s): 44c689c

Source added

Files changed (5):
  1. README.md +90 -9
  2. Space.yaml +3 -0
  3. YOLO_WD_DETECTION_TRAINING.ipynb +0 -0
  4. app.py +37 -0
  5. requirements.txt +3 -0
README.md CHANGED
@@ -1,12 +1,93 @@
  ---
- title: WinDoorDetection YOLO
- emoji: 📊
- colorFrom: green
- colorTo: blue
- sdk: gradio
- sdk_version: 5.31.0
- app_file: app.py
- pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # 🏗️ YOLOv8 Object Detection App – Blueprint Door & Window Detector
+
+ Welcome to my deployed computer vision project using **YOLOv8 + Gradio**, designed to detect **doors and windows** in architectural **construction blueprints**.
+
+ ![demo](https://github.com/ultralytics/assets/raw/main/yolov8/banner-yolov8.png)
+
+ ---
+
+ ## 🔍 About the App
+
+ This application showcases a deep learning model trained on blueprint images to identify:
+ - 🚪 **Doors**
+ - 🪟 **Windows**
+
+ The goal is to automate detection in architectural layouts and assist with digitizing or verifying blueprint components.
+
+ ---
+
+ ## 💡 How It Works
+
+ - The app is powered by a **custom-trained YOLOv8** model.
+ - You can **upload a blueprint image** via the Gradio interface.
+ - The model returns:
+   - ✅ An **annotated image** showing detections
+   - ✅ A **JSON output** with detection details in this format:
+
+ ```json
+ {
+   "detections": [
+     {"label": "door", "confidence": 0.91, "bbox": [x, y, w, h]},
+     {"label": "window", "confidence": 0.84, "bbox": [x, y, w, h]}
+   ]
+ }
+ ```
+
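The `[x, y, w, h]` values in this schema come from YOLO's corner-format boxes (`x1, y1, x2, y2`). A minimal standalone sketch of that conversion, with no YOLO dependency (the function name `xyxy_to_detection` is illustrative, not part of the app):

```python
def xyxy_to_detection(label, conf, xyxy):
    """Convert a corner-format box [x1, y1, x2, y2] into the
    {label, confidence, bbox: [x, y, w, h]} entry shown above."""
    x1, y1, x2, y2 = xyxy
    return {
        "label": label,
        "confidence": round(conf, 2),
        # Top-left corner stays; width/height are the corner differences
        "bbox": [round(x1, 2), round(y1, 2), round(x2 - x1, 2), round(y2 - y1, 2)],
    }

print(xyxy_to_detection("door", 0.912, [10.0, 20.0, 110.0, 220.0]))
# → {'label': 'door', 'confidence': 0.91, 'bbox': [10.0, 20.0, 100.0, 200.0]}
```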
  ---
+
+ ## 🚀 How to Use
+
+ 1. Upload a construction blueprint image.
+ 2. View the image with annotated bounding boxes.
+ 3. Review the detection results in JSON format.
+
+ ✅ Try it online: *(add your Hugging Face space URL here)*
+ ✅ Or run it locally:
+
+ ```bash
+ git clone https://huggingface.co/spaces/YOUR_USERNAME/yolov8-gradio-demo
+ cd yolov8-gradio-demo
+ pip install -r requirements.txt
+ python app.py
+ ```
+
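The JSON from step 3 can also be consumed programmatically. A minimal sketch, assuming only the response shape documented above (the helper name `filter_detections` and the 0.5 threshold are arbitrary choices for illustration):

```python
import json

def filter_detections(payload: str, min_conf: float = 0.5):
    """Parse the app's JSON output and keep boxes above a confidence threshold."""
    data = json.loads(payload)
    return [d for d in data["detections"] if d["confidence"] >= min_conf]

# Example payload in the format the app returns
sample = (
    '{"detections": ['
    '{"label": "door", "confidence": 0.91, "bbox": [12, 30, 40, 80]},'
    '{"label": "window", "confidence": 0.34, "bbox": [5, 5, 20, 20]}'
    ']}'
)
kept = filter_detections(sample, min_conf=0.5)
print([d["label"] for d in kept])  # → ['door']
```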
  ---

+ ## 🧠 Tech Stack
+
+ | Tool                | Purpose                              |
+ |---------------------|--------------------------------------|
+ | YOLOv8              | Object detection                     |
+ | Ultralytics         | Model training & inference framework |
+ | Gradio              | Web interface for inference          |
+ | Hugging Face Spaces | App hosting platform                 |
+
+ ---
+
+ ## 📦 Model Info
+
+ - **Framework**: [Ultralytics YOLOv8](https://docs.ultralytics.com/)
+ - **Trained on**: Custom blueprint dataset with annotated door and window classes
+ - **Performance**: Optimized for fast inference on 2D plan layouts
+
+ ---
+
+ ## 🎯 Recruiter Notes
+
+ This project demonstrates:
+ - Custom model training and fine-tuning
+ - Practical use of object detection in architecture
+ - Real-world deployment using Hugging Face Spaces
+ - Building user-friendly ML apps with Gradio
+
+ ---
+
+ ## 🙋‍♂️ Author
+
+ **Your Name**
+ [GitHub](https://github.com/yourusername) | [LinkedIn](https://linkedin.com/in/yourprofile)
+
+ ---
+
+ > ⭐ Feel free to connect with me to discuss AI in construction tech or computer vision applications!
+
Space.yaml ADDED
@@ -0,0 +1,3 @@
+ sdk: gradio
+ app_file: app.py
+ python_version: "3.10"
YOLO_WD_DETECTION_TRAINING.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
app.py ADDED
@@ -0,0 +1,37 @@
+ import gradio as gr
+ from ultralytics import YOLO
+ from PIL import Image
+
+ # Load the custom-trained door/window detector weights
+ model = YOLO("model/my_model.pt")
+
+ def detect_objects(image: Image.Image):
+     results = model(image)
+     # plot() returns an annotated BGR numpy array; reverse the channel
+     # order to get RGB before handing it back to Gradio as a PIL image
+     im_bgr = results[0].plot()
+     im_rgb = Image.fromarray(im_bgr[..., ::-1])
+
+     detections = []
+     for box in results[0].boxes:
+         cls_id = int(box.cls[0])
+         label = model.names[cls_id]
+         conf = float(box.conf[0])
+         # Convert corner coordinates (x1, y1, x2, y2) to (x, y, w, h)
+         x1, y1, x2, y2 = box.xyxy[0].tolist()
+         w, h = x2 - x1, y2 - y1
+         detections.append({
+             "label": label,
+             "confidence": round(conf, 2),
+             "bbox": [round(x1, 2), round(y1, 2), round(w, 2), round(h, 2)]
+         })
+
+     return im_rgb, {"detections": detections}
+
+ gr.Interface(
+     fn=detect_objects,
+     inputs=gr.Image(type="pil"),
+     outputs=[
+         gr.Image(type="pil", label="Detected Image"),
+         gr.JSON(label="Detection Results")
+     ],
+     title="YOLOv8 Object Detection",
+     description="Upload an image to detect objects using a YOLOv8 model."
+ ).launch()
requirements.txt ADDED
@@ -0,0 +1,3 @@
+ gradio
+ ultralytics
+ pillow