ameer4wisam committed on
Commit 22c51c2 · verified · 1 Parent(s): 52f6c7a
Files changed (40)
  1. .gitattributes +3 -0
  2. requirements.txt +0 -0
  3. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/README.md +52 -0
  4. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/__pycache__/tracker.cpython-312.pyc +0 -0
  5. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/__pycache__/tracker.cpython-37.pyc +0 -0
  6. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/coco.names +80 -0
  7. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/data.csv +3 -0
  8. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/car1.png +0 -0
  9. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/car2.jpg +0 -0
  10. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/down/bike.png +0 -0
  11. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/down/bus.png +0 -0
  12. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/down/car.png +0 -0
  13. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/down/rickshaw.png +0 -0
  14. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/down/truck.png +0 -0
  15. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/intersection.jpg +3 -0
  16. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/left/bike.png +0 -0
  17. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/left/bus.png +0 -0
  18. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/left/car.png +0 -0
  19. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/left/rickshaw.png +0 -0
  20. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/left/truck.png +0 -0
  21. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/mod_int.png +3 -0
  22. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/right/bike.png +0 -0
  23. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/right/bus.png +0 -0
  24. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/right/car.png +0 -0
  25. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/right/rickshaw.png +0 -0
  26. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/right/truck.png +0 -0
  27. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/signals/green.png +0 -0
  28. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/signals/red.png +0 -0
  29. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/signals/yellow.png +0 -0
  30. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/up/bike.png +0 -0
  31. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/up/bus.png +0 -0
  32. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/up/car.png +0 -0
  33. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/up/rickshaw.png +0 -0
  34. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/up/truck.png +0 -0
  35. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/requirements.txt +1 -0
  36. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/simulation.py +511 -0
  37. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/tracker.py +56 -0
  38. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/vehicle_count.py +209 -0
  39. traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/yolov3-320.cfg +788 -0
  40. yolov3-320.weights +3 -0
.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/intersection.jpg filter=lfs diff=lfs merge=lfs -text
+ traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/mod_int.png filter=lfs diff=lfs merge=lfs -text
+ yolov3-320.weights filter=lfs diff=lfs merge=lfs -text
requirements.txt ADDED
Binary file (780 Bytes).
 
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/README.md ADDED
@@ -0,0 +1,52 @@
+ # Traffic-light-management-using-AI
+ Although computer vision is today a highly developed field, its recognition and classification capabilities are not frequently used in day-to-day activities. In this project we therefore make use of the current traffic infrastructure, with some additional requirements, to create and demonstrate an advanced traffic light management system.
+ # Abstract
+ Increasing traffic congestion at signalized junctions is a matter of great concern for creating and maintaining sustainable cities. Besides adding to drivers' delays and stress levels, traffic bottlenecks significantly increase fuel usage and pollution. The main reason behind this is the way traffic is controlled at such junctions: semi-automated and conventional traffic management has reached its saturation point and can no longer handle high-density traffic flow with complex patterns. In this paper, we discuss and compare previous research and work done to solve this problem, focusing on Machine Learning techniques such as Computer Vision and Deep Neural Networks.
+ # Introduction
+ Numerous road networks are experiencing issues due to the decline in road capacity, and the accompanying Level of Service, caused by the rise in automobiles in metropolitan areas. The bottleneck for these high vehicular flows is the conventional traffic management system used in these cities. Two approaches are currently in use: semi-automatic traffic lights with fixed timings, or traffic police stationed at junctions to control the flow manually. The problem with traffic lights is that they allot a fixed time to each side of the crossroad, irrespective of the density of vehicles on those roads.
+
+ ![image](https://user-images.githubusercontent.com/75363378/207390161-76e0fa64-9982-49e7-8f3c-874404b94fe5.png)
+ ![image](https://user-images.githubusercontent.com/75363378/207390225-02272ba4-38c9-48a2-b05b-3e329a4a6589.png)
+
+ Ideally, the road with the higher density of vehicles should be given extra time to pass the junction, as that lowers the average waiting time for all vehicles. Depending on manual control of traffic at junctions is inefficient, as a human cannot process all of that information live, with high accuracy, for long periods. Although computer vision is today a highly developed field, its recognition and classification capabilities are not frequently used in day-to-day activities. In this project we therefore use Machine Vision to improve the current state of traffic, and introduce new methodology and techniques for analysis and decision making for traffic lights using live cameras installed at junctions.
+ # Problem Statement
+ To build a self-adaptive traffic light control system based on YOLO. Disproportionate and diverse traffic in different lanes leads to inefficient utilization of the same time slot for each of them, characterized by slower speeds, longer trip times, and increased vehicular queuing. The goal is a system that enables the traffic management system to allocate time to a particular lane according to the traffic density on the other lanes, with the help of cameras and image processing modules.
+ # Methodology
+ The solution can be explained in four simple steps:
+ 1. Get a real-time image of each lane.
+ 2. Scan and determine traffic density.
+ 3. Input this data to the Time Allocation module.
+ 4. The output will be the time slots for each lane, accordingly.
+
+ <img src="https://user-images.githubusercontent.com/75363378/207391460-a103d439-ec6b-460f-a7b7-acddeb8c42a2.png" height="500"> <img src="https://user-images.githubusercontent.com/75363378/207391504-bf55def6-0644-4673-99ba-233f63f3c63b.png" width="500" height="500">
+
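Steps 3–4 above can be sketched in a few lines. This is a minimal, hypothetical illustration of such a Time Allocation rule, not the repo's actual module: the function name `green_time` is ours, while the per-class crossing times and minimum/maximum green bounds mirror the constants defined later in `simulation.py`.

```python
# Hypothetical sketch of the Time Allocation step: the green time of a lane
# is proportional to how long its detected vehicles need to clear the junction.
# Crossing times and the min/max bounds mirror constants from simulation.py.

def green_time(counts, no_of_lanes=2, minimum=10, maximum=60):
    """counts: dict like {'car': 20, 'bike': 0, 'bus': 2, 'truck': 0}."""
    crossing_time = {'car': 2, 'bike': 1, 'bus': 2.5, 'truck': 2.5}
    total = sum(counts.get(v, 0) * t for v, t in crossing_time.items())
    # Vehicles in different lanes clear in parallel, so divide by lane count,
    # then clamp to the minimum/maximum green durations.
    return max(minimum, min(maximum, int(total / no_of_lanes)))

print(green_time({'car': 20, 'bus': 2}))  # -> 22, a busy lane gets a long slot
```

A heavily loaded lane thus receives a slot near the maximum, while a nearly empty lane still gets the minimum green time, which is what prevents starvation of uncrowded lanes.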
+ # Results and Discussions
+ The results of the project can be divided into two parts:
+
+ 1. Evaluation of the Vehicle Detection Model
+ The vehicle detection module was tested with a variety of test images containing varying numbers of vehicles, and the detection accuracy was found to be in the range of 75-80%. Some test results are shown above in Fig. 3. This is satisfactory, but not optimal. The primary reason for the low accuracy is the lack of a proper dataset; to improve on this, real-life footage from traffic cameras can be used to train the model.
+ We used images from live-feed cameras installed at traffic junctions, at ordinary traffic spots around the city, and on the outskirts. These videos and live feeds were fed to a model based on YOLO and OpenCV, which detected each vehicle and its direction of movement, downwards or upwards, i.e., towards the traffic signal or away from it. We were also able to determine the type of each vehicle.
+
+ <img src="https://user-images.githubusercontent.com/75363378/207393722-8678178d-4ca3-41d3-a8ab-ffc61a123f9a.png" width="800" height="550">
+ <img src="https://user-images.githubusercontent.com/75363378/207393802-b3c7df30-5a11-48f0-baf0-838395017fe3.png" width="800" height="550">
+
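The shape of the YOLO counting step can be sketched as follows. This is an illustrative reconstruction, not the repo's `vehicle_count.py`: each YOLOv3 output row is `[cx, cy, w, h, objectness, 80 class scores]`, and the class indices below follow the order of the `coco.names` file added in this commit.

```python
import numpy as np

# Class indices of the vehicle classes, following the order in coco.names
# (person=0, bicycle=1, car=2, motorbike=3, aeroplane=4, bus=5, train=6, truck=7).
VEHICLES = {2: 'car', 3: 'motorbike', 5: 'bus', 7: 'truck'}

def count_vehicles(detections, conf_threshold=0.3):
    """Count detected vehicles per class from raw YOLOv3 output rows."""
    counts = {}
    for det in detections:
        scores = det[5:]                   # per-class confidence scores
        class_id = int(np.argmax(scores))
        if scores[class_id] > conf_threshold and class_id in VEHICLES:
            name = VEHICLES[class_id]
            counts[name] = counts.get(name, 0) + 1
    return counts
```

A real pipeline would load the network with `cv2.dnn.readNetFromDarknet('yolov3-320.cfg', 'yolov3-320.weights')` and apply non-maximum suppression (e.g. `cv2.dnn.NMSBoxes`) before counting; those steps are omitted here for brevity.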
+ We were also successful in feeding the traffic density derived from the detection module into the model that finds the optimal timings for each traffic lane. This works on the principle of giving more priority, i.e. more time, to the lane with higher traffic density and less to relatively uncrowded lanes, while not creating bottlenecks or congestion in any one lane. For this purpose, we created a simple simulation using PyGame.
+
+ <img src="https://user-images.githubusercontent.com/75363378/207395269-acc46692-6cd3-4391-b262-17749ada0868.png" width="800" height="550">
+
+
+ 2. Evaluation of the Proposed Adaptive System
+ To evaluate the system, the traffic-movement simulation was run for 5 minutes, a total of 15 times for each of the two systems, and the total number of vehicles that passed through the junction was calculated for each run.
+
+ <img src="https://user-images.githubusercontent.com/75363378/207395478-a0ceec2b-21e0-4954-aa3d-732d7bd51bd4.png" width="500">
+
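The evaluation procedure above (15 timed runs per system, counting vehicles that cleared the junction) amounts to a simple mean comparison; a hypothetical helper for tabulating it (the function name and output format are ours, not from the repo):

```python
from statistics import mean

def compare_throughput(fixed_runs, adaptive_runs):
    """Compare per-run vehicle throughput of the fixed-timer and adaptive systems."""
    f, a = mean(fixed_runs), mean(adaptive_runs)
    return {
        'fixed_mean': f,
        'adaptive_mean': a,
        # positive percentage = the adaptive system passes more vehicles
        'improvement_pct': round(100 * (a - f) / f, 1),
    }
```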
+ # References
+ [1] <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9358334&isnumber=9358272">M. M. Gandhi, D. S. Solanki, R. S. Daptardar and N. S. Baloorkar, "Smart Control of Traffic Light Using Artificial Intelligence," 2020 5th IEEE International Conference on Recent Advances and Innovations in Engineering (ICRAIE), 2020, pp. 1-6, doi: 10.1109/ICRAIE51050.2020.9358334.</a>
+
+ [2] <a href="https://www.researchgate.net/publication/229029935_Intelligent_Traffic_Lights_Control_By_Fuzzy_Logic">Khiang, Kok & Khalid, Marzuki & Yusof, Rubiyah. (1997). Intelligent Traffic Lights Control By Fuzzy Logic. Malaysian Journal of Computer Science. 9. 29-35.</a>
+
+ [3] <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6137134&isnumber=6137119">M. H. Malhi, M. H. Aslam, F. Saeed, O. Javed and M. Fraz, "Vision Based Intelligent Traffic Management System," 2011 Frontiers of Information Technology, 2011, pp. 137-141, doi: 10.1109/FIT.2011.33.</a>
+
+ [4] <a href="https://www.researchgate.net/publication/328987822_Improving_Traffic_Light_Control_by_Means_of_Fuzzy_Logic">A. Vogel, I. Oremović, R. Šimić and E. Ivanjko, "Improving Traffic Light Control by Means of Fuzzy Logic," 2018 International Symposium ELMAR, 2018, pp. 51-56, doi: 10.23919/ELMAR.2018.8534692.</a>
+
+ [5] <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7868350&isnumber=7868337">T. Osman, S. S. Psyche, J. M. Shafi Ferdous and H. U. Zaman, "Intelligent traffic management system for cross section of roads using computer vision," 2017 IEEE 7th Annual Computing and Communication Workshop and Conference (CCWC), 2017, pp. 1-7, doi: 10.1109/CCWC.2017.7868350.</a>
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/__pycache__/tracker.cpython-312.pyc ADDED
Binary file (2.08 kB).
 
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/__pycache__/tracker.cpython-37.pyc ADDED
Binary file (1.29 kB).
 
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/coco.names ADDED
@@ -0,0 +1,80 @@
+ person
+ bicycle
+ car
+ motorbike
+ aeroplane
+ bus
+ train
+ truck
+ boat
+ traffic light
+ fire hydrant
+ stop sign
+ parking meter
+ bench
+ bird
+ cat
+ dog
+ horse
+ sheep
+ cow
+ elephant
+ bear
+ zebra
+ giraffe
+ backpack
+ umbrella
+ handbag
+ tie
+ suitcase
+ frisbee
+ skis
+ snowboard
+ sports ball
+ kite
+ baseball bat
+ baseball glove
+ skateboard
+ surfboard
+ tennis racket
+ bottle
+ wine glass
+ cup
+ fork
+ knife
+ spoon
+ bowl
+ banana
+ apple
+ sandwich
+ orange
+ broccoli
+ carrot
+ hot dog
+ pizza
+ donut
+ cake
+ chair
+ sofa
+ pottedplant
+ bed
+ diningtable
+ toilet
+ tvmonitor
+ laptop
+ mouse
+ remote
+ keyboard
+ cell phone
+ microwave
+ oven
+ toaster
+ sink
+ refrigerator
+ book
+ clock
+ vase
+ scissors
+ teddy bear
+ hair drier
+ toothbrush
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/data.csv ADDED
@@ -0,0 +1,3 @@
+ Direction,car,motorbike,bus,truck
+ Up,20,0,2,0
+ Down,5,3,2,1
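data.csv stores one row of per-class counts for each observed direction. A small sketch of consuming such a file with Python's standard library (the data is embedded as a string here so the snippet is self-contained; the helper name is ours):

```python
import csv
import io

# data.csv contents as shipped in this commit
DATA = """Direction,car,motorbike,bus,truck
Up,20,0,2,0
Down,5,3,2,1
"""

def totals_by_direction(text):
    """Return {direction: total vehicle count} from a data.csv-style table."""
    result = {}
    for row in csv.DictReader(io.StringIO(text)):
        direction = row.pop('Direction')
        result[direction] = sum(int(v) for v in row.values())
    return result

print(totals_by_direction(DATA))  # -> {'Up': 22, 'Down': 11}
```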
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/car1.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/car2.jpg ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/down/bike.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/down/bus.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/down/car.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/down/rickshaw.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/down/truck.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/intersection.jpg ADDED

Git LFS Details

  • SHA256: 20a0fdfb788761616a45712f38c96e7e4f4dc96240ecc78fbeac7ab08239b64f
  • Pointer size: 131 Bytes
  • Size of remote file: 118 kB
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/left/bike.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/left/bus.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/left/car.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/left/rickshaw.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/left/truck.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/mod_int.png ADDED

Git LFS Details

  • SHA256: 34c28bcac2df4ca409ecc638eacf88b7cd4cbc9d29bf9982d2a5188a5b189d65
  • Pointer size: 132 Bytes
  • Size of remote file: 2.25 MB
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/right/bike.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/right/bus.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/right/car.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/right/rickshaw.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/right/truck.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/signals/green.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/signals/red.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/signals/yellow.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/up/bike.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/up/bus.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/up/car.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/up/rickshaw.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/images/up/truck.png ADDED
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/requirements.txt ADDED
@@ -0,0 +1 @@
+ Download the YOLOv3 weights file from here: https://drive.google.com/file/d/1RuNsdS9MgumwTFwVqI9rwSQJwk8qSAQS/view?usp=sharing
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/simulation.py ADDED
@@ -0,0 +1,511 @@
+ # LAG
+ # NO. OF VEHICLES IN SIGNAL CLASS
+ # stops not used
+ # DISTRIBUTION
+ # BUS TOUCHING ON TURNS
+ # Distribution using python class
+
+ # *** IMAGE XY COORD IS TOP LEFT
+ import random
+ import math
+ import time
+ import threading
+ # from vehicle_detection import detection
+ import pygame
+ import sys
+ import os
+
+ # options={
+ #     'model':'./cfg/yolo.cfg',       # path of the model
+ #     'load':'./bin/yolov2.weights',  # weights
+ #     'threshold':0.3                 # minimum confidence factor to create a box; greater than 0.3 is good
+ # }
+
+ # tfnet=TFNet(options)  # READ ABOUT TFNET
+
+ # Default values of signal times
+ defaultRed = 150
+ defaultYellow = 5
+ defaultGreen = 20
+ defaultMinimum = 10
+ defaultMaximum = 60
+ signals = []
+ noOfSignals = 4
+ simTime = 300  # change this to change the duration of the simulation
+ timeElapsed = 0
+
+ currentGreen = 0  # indicates which signal is green
+ nextGreen = (currentGreen+1)%noOfSignals
+ currentYellow = 0  # indicates whether the yellow signal is on or off
+
+ # Average times for vehicles to pass the intersection
+ carTime = 2
+ bikeTime = 1
+ busTime = 2.5
+ truckTime = 2.5
+
+ # Count of vehicles at a traffic signal
+ noOfCars = 0
+ noOfBikes = 0
+ noOfBuses = 0
+ noOfTrucks = 0
+ noOfLanes = 2
+
+ # Red signal time at which vehicles will be detected at a signal
+ detectionTime = 5
+
+ speeds = {'car': 2.25, 'bus': 1.8, 'truck': 1.8, 'bike': 2.5}  # average speeds of vehicles
+
+ # Coordinates of start
+ x = {'right':[0,0,0], 'down':[755,727,697], 'left':[1400,1400,1400], 'up':[602,627,657]}
+ y = {'right':[348,370,398], 'down':[0,0,0], 'left':[498,466,436], 'up':[800,800,800]}
+
+ vehicles = {'right': {0:[], 1:[], 2:[], 'crossed':0}, 'down': {0:[], 1:[], 2:[], 'crossed':0}, 'left': {0:[], 1:[], 2:[], 'crossed':0}, 'up': {0:[], 1:[], 2:[], 'crossed':0}}
+ vehicleTypes = {0: 'car', 1: 'bus', 2: 'truck', 3: 'bike'}
+ directionNumbers = {0:'right', 1:'down', 2:'left', 3:'up'}
+
+ # Coordinates of signal image, timer, and vehicle count
+ signalCoods = [(530,230),(810,230),(810,570),(530,570)]
+ signalTimerCoods = [(530,210),(810,210),(810,550),(530,550)]
+ vehicleCountCoods = [(480,210),(880,210),(880,550),(480,550)]
+ vehicleCountTexts = ["0", "0", "0", "0"]
+
+ # Coordinates of stop lines
+ stopLines = {'right': 590, 'down': 330, 'left': 800, 'up': 535}
+ defaultStop = {'right': 580, 'down': 320, 'left': 810, 'up': 545}
+ stops = {'right': [580,580,580], 'down': [320,320,320], 'left': [810,810,810], 'up': [545,545,545]}
+
+ mid = {'right': {'x':705, 'y':445}, 'down': {'x':695, 'y':450}, 'left': {'x':695, 'y':425}, 'up': {'x':695, 'y':400}}
+ rotationAngle = 3
+
+ # Gap between vehicles
+ gap = 15   # stopping gap
+ gap2 = 15  # moving gap
+
+ pygame.init()
+ simulation = pygame.sprite.Group()
+
+ class TrafficSignal:
+     def __init__(self, red, yellow, green, minimum, maximum):
+         self.red = red
+         self.yellow = yellow
+         self.green = green
+         self.minimum = minimum
+         self.maximum = maximum
+         self.signalText = "30"
+         self.totalGreenTime = 0
+
+ class Vehicle(pygame.sprite.Sprite):
+     def __init__(self, lane, vehicleClass, direction_number, direction, will_turn):
+         pygame.sprite.Sprite.__init__(self)
+         self.lane = lane
+         self.vehicleClass = vehicleClass
+         self.speed = speeds[vehicleClass]
+         self.direction_number = direction_number
+         self.direction = direction
+         self.x = x[direction][lane]
+         self.y = y[direction][lane]
+         self.crossed = 0
+         self.willTurn = will_turn
+         self.turned = 0
+         self.rotateAngle = 0
+         vehicles[direction][lane].append(self)
+         # self.stop = stops[direction][lane]
+         self.index = len(vehicles[direction][lane]) - 1
+         path = "images/" + direction + "/" + vehicleClass + ".png"
+         self.originalImage = pygame.image.load(path)
+         self.currentImage = pygame.image.load(path)
+
+
+         if(direction=='right'):
+             if(len(vehicles[direction][lane])>1 and vehicles[direction][lane][self.index-1].crossed==0):    # if there is more than 1 vehicle in the lane and the vehicle ahead has not crossed the stop line
+                 self.stop = vehicles[direction][lane][self.index-1].stop - vehicles[direction][lane][self.index-1].currentImage.get_rect().width - gap    # set stop coordinate as: stop coordinate of the vehicle ahead - width of that vehicle - gap
+             else:
+                 self.stop = defaultStop[direction]
+             # Set new starting and stopping coordinate
+             temp = self.currentImage.get_rect().width + gap
+             x[direction][lane] -= temp
+             stops[direction][lane] -= temp
+         elif(direction=='left'):
+             if(len(vehicles[direction][lane])>1 and vehicles[direction][lane][self.index-1].crossed==0):
+                 self.stop = vehicles[direction][lane][self.index-1].stop + vehicles[direction][lane][self.index-1].currentImage.get_rect().width + gap
+             else:
+                 self.stop = defaultStop[direction]
+             temp = self.currentImage.get_rect().width + gap
+             x[direction][lane] += temp
+             stops[direction][lane] += temp
+         elif(direction=='down'):
+             if(len(vehicles[direction][lane])>1 and vehicles[direction][lane][self.index-1].crossed==0):
+                 self.stop = vehicles[direction][lane][self.index-1].stop - vehicles[direction][lane][self.index-1].currentImage.get_rect().height - gap
+             else:
+                 self.stop = defaultStop[direction]
+             temp = self.currentImage.get_rect().height + gap
+             y[direction][lane] -= temp
+             stops[direction][lane] -= temp
+         elif(direction=='up'):
+             if(len(vehicles[direction][lane])>1 and vehicles[direction][lane][self.index-1].crossed==0):
+                 self.stop = vehicles[direction][lane][self.index-1].stop + vehicles[direction][lane][self.index-1].currentImage.get_rect().height + gap
+             else:
+                 self.stop = defaultStop[direction]
+             temp = self.currentImage.get_rect().height + gap
+             y[direction][lane] += temp
+             stops[direction][lane] += temp
+         simulation.add(self)
+
+     def render(self, screen):
+         screen.blit(self.currentImage, (self.x, self.y))
+
+     def move(self):
+         if(self.direction=='right'):
+             if(self.crossed==0 and self.x+self.currentImage.get_rect().width>stopLines[self.direction]):    # if the image has just crossed the stop line
+                 self.crossed = 1
+                 vehicles[self.direction]['crossed'] += 1
+             if(self.willTurn==1):
+                 if(self.crossed==0 or self.x+self.currentImage.get_rect().width<mid[self.direction]['x']):
+                     if((self.x+self.currentImage.get_rect().width<=self.stop or (currentGreen==0 and currentYellow==0) or self.crossed==1) and (self.index==0 or self.x+self.currentImage.get_rect().width<(vehicles[self.direction][self.lane][self.index-1].x - gap2) or vehicles[self.direction][self.lane][self.index-1].turned==1)):
+                         self.x += self.speed
+                 else:
+                     if(self.turned==0):
+                         self.rotateAngle += rotationAngle
+                         self.currentImage = pygame.transform.rotate(self.originalImage, -self.rotateAngle)
+                         self.x += 2
+                         self.y += 1.8
+                         if(self.rotateAngle==90):
+                             self.turned = 1
+                             # path = "images/" + directionNumbers[((self.direction_number+1)%noOfSignals)] + "/" + self.vehicleClass + ".png"
+                             # self.x = mid[self.direction]['x']
+                             # self.y = mid[self.direction]['y']
+                             # self.image = pygame.image.load(path)
+                     else:
+                         if(self.index==0 or self.y+self.currentImage.get_rect().height<(vehicles[self.direction][self.lane][self.index-1].y - gap2) or self.x+self.currentImage.get_rect().width<(vehicles[self.direction][self.lane][self.index-1].x - gap2)):
+                             self.y += self.speed
+             else:
+                 if((self.x+self.currentImage.get_rect().width<=self.stop or self.crossed == 1 or (currentGreen==0 and currentYellow==0)) and (self.index==0 or self.x+self.currentImage.get_rect().width<(vehicles[self.direction][self.lane][self.index-1].x - gap2) or (vehicles[self.direction][self.lane][self.index-1].turned==1))):
+                     # (if the image has not reached its stop coordinate, or has crossed the stop line, or has a green signal) and (it is either the first vehicle in that lane or it has enough gap to the next vehicle in that lane)
+                     self.x += self.speed  # move the vehicle
+
+
+         elif(self.direction=='down'):
+             if(self.crossed==0 and self.y+self.currentImage.get_rect().height>stopLines[self.direction]):
+                 self.crossed = 1
+                 vehicles[self.direction]['crossed'] += 1
+             if(self.willTurn==1):
+                 if(self.crossed==0 or self.y+self.currentImage.get_rect().height<mid[self.direction]['y']):
+                     if((self.y+self.currentImage.get_rect().height<=self.stop or (currentGreen==1 and currentYellow==0) or self.crossed==1) and (self.index==0 or self.y+self.currentImage.get_rect().height<(vehicles[self.direction][self.lane][self.index-1].y - gap2) or vehicles[self.direction][self.lane][self.index-1].turned==1)):
+                         self.y += self.speed
+                 else:
+                     if(self.turned==0):
+                         self.rotateAngle += rotationAngle
+                         self.currentImage = pygame.transform.rotate(self.originalImage, -self.rotateAngle)
+                         self.x -= 2.5
+                         self.y += 2
+                         if(self.rotateAngle==90):
+                             self.turned = 1
+                     else:
+                         if(self.index==0 or self.x>(vehicles[self.direction][self.lane][self.index-1].x + vehicles[self.direction][self.lane][self.index-1].currentImage.get_rect().width + gap2) or self.y<(vehicles[self.direction][self.lane][self.index-1].y - gap2)):
+                             self.x -= self.speed
+             else:
+                 if((self.y+self.currentImage.get_rect().height<=self.stop or self.crossed == 1 or (currentGreen==1 and currentYellow==0)) and (self.index==0 or self.y+self.currentImage.get_rect().height<(vehicles[self.direction][self.lane][self.index-1].y - gap2) or (vehicles[self.direction][self.lane][self.index-1].turned==1))):
+                     self.y += self.speed
+
+         elif(self.direction=='left'):
+             if(self.crossed==0 and self.x<stopLines[self.direction]):
+                 self.crossed = 1
+                 vehicles[self.direction]['crossed'] += 1
+             if(self.willTurn==1):
+                 if(self.crossed==0 or self.x>mid[self.direction]['x']):
+                     if((self.x>=self.stop or (currentGreen==2 and currentYellow==0) or self.crossed==1) and (self.index==0 or self.x>(vehicles[self.direction][self.lane][self.index-1].x + vehicles[self.direction][self.lane][self.index-1].currentImage.get_rect().width + gap2) or vehicles[self.direction][self.lane][self.index-1].turned==1)):
+                         self.x -= self.speed
+                 else:
+                     if(self.turned==0):
+                         self.rotateAngle += rotationAngle
+                         self.currentImage = pygame.transform.rotate(self.originalImage, -self.rotateAngle)
+                         self.x -= 1.8
+                         self.y -= 2.5
+                         if(self.rotateAngle==90):
+                             self.turned = 1
+                             # path = "images/" + directionNumbers[((self.direction_number+1)%noOfSignals)] + "/" + self.vehicleClass + ".png"
+                             # self.x = mid[self.direction]['x']
+                             # self.y = mid[self.direction]['y']
+                             # self.currentImage = pygame.image.load(path)
+                     else:
+                         if(self.index==0 or self.y>(vehicles[self.direction][self.lane][self.index-1].y + vehicles[self.direction][self.lane][self.index-1].currentImage.get_rect().height + gap2) or self.x>(vehicles[self.direction][self.lane][self.index-1].x + gap2)):
+                             self.y -= self.speed
+             else:
+                 if((self.x>=self.stop or self.crossed == 1 or (currentGreen==2 and currentYellow==0)) and (self.index==0 or self.x>(vehicles[self.direction][self.lane][self.index-1].x + vehicles[self.direction][self.lane][self.index-1].currentImage.get_rect().width + gap2) or (vehicles[self.direction][self.lane][self.index-1].turned==1))):
+                     # (if the image has not reached its stop coordinate, or has crossed the stop line, or has a green signal) and (it is either the first vehicle in that lane or it has enough gap to the next vehicle in that lane)
+                     self.x -= self.speed  # move the vehicle
+                 # if((self.x>=self.stop or self.crossed == 1 or (currentGreen==2 and currentYellow==0)) and (self.index==0 or self.x>(vehicles[self.direction][self.lane][self.index-1].x + vehicles[self.direction][self.lane][self.index-1].currentImage.get_rect().width + gap2))):
+                 #     self.x -= self.speed
242
+ elif(self.direction=='up'):
243
+ if(self.crossed==0 and self.y<stopLines[self.direction]):
244
+ self.crossed = 1
245
+ vehicles[self.direction]['crossed'] += 1
246
+ if(self.willTurn==1):
247
+ if(self.crossed==0 or self.y>mid[self.direction]['y']):
248
+ if((self.y>=self.stop or (currentGreen==3 and currentYellow==0) or self.crossed == 1) and (self.index==0 or self.y>(vehicles[self.direction][self.lane][self.index-1].y + vehicles[self.direction][self.lane][self.index-1].currentImage.get_rect().height + gap2) or vehicles[self.direction][self.lane][self.index-1].turned==1)):
249
+ self.y -= self.speed
250
+ else:
251
+ if(self.turned==0):
252
+ self.rotateAngle += rotationAngle
253
+ self.currentImage = pygame.transform.rotate(self.originalImage, -self.rotateAngle)
254
+ self.x += 1
255
+ self.y -= 1
256
+ if(self.rotateAngle==90):
257
+ self.turned = 1
258
+ else:
259
+ if(self.index==0 or self.x<(vehicles[self.direction][self.lane][self.index-1].x - vehicles[self.direction][self.lane][self.index-1].currentImage.get_rect().width - gap2) or self.y>(vehicles[self.direction][self.lane][self.index-1].y + gap2)):
260
+ self.x += self.speed
261
+ else:
262
+ if((self.y>=self.stop or self.crossed == 1 or (currentGreen==3 and currentYellow==0)) and (self.index==0 or self.y>(vehicles[self.direction][self.lane][self.index-1].y + vehicles[self.direction][self.lane][self.index-1].currentImage.get_rect().height + gap2) or (vehicles[self.direction][self.lane][self.index-1].turned==1))):
263
+ self.y -= self.speed
264
+
265
+ # Initialization of signals with default values
266
+ def initialize():
267
+ ts1 = TrafficSignal(0, defaultYellow, defaultGreen, defaultMinimum, defaultMaximum)
268
+ signals.append(ts1)
269
+ ts2 = TrafficSignal(ts1.red+ts1.yellow+ts1.green, defaultYellow, defaultGreen, defaultMinimum, defaultMaximum)
270
+ signals.append(ts2)
271
+ ts3 = TrafficSignal(defaultRed, defaultYellow, defaultGreen, defaultMinimum, defaultMaximum)
272
+ signals.append(ts3)
273
+ ts4 = TrafficSignal(defaultRed, defaultYellow, defaultGreen, defaultMinimum, defaultMaximum)
274
+ signals.append(ts4)
275
+ repeat()
276
+
277
+ # Set time according to formula
278
+ def setTime():
279
+ global noOfCars, noOfBikes, noOfBuses, noOfTrucks, noOfLanes
280
+ global carTime, busTime, truckTime, bikeTime
281
+ os.system("say detecting vehicles, "+directionNumbers[(currentGreen+1)%noOfSignals])
282
+ # detection_result=detection(currentGreen,tfnet)
283
+ # greenTime = math.ceil(((noOfCars*carTime) + (noOfBuses*busTime) + (noOfBikes*bikeTime))/(noOfLanes+1))
284
+ # if(greenTime<defaultMinimum):
285
+ # greenTime = defaultMinimum
286
+ # elif(greenTime>defaultMaximum):
287
+ # greenTime = defaultMaximum
288
+ # greenTime = len(vehicles[currentGreen][0])+len(vehicles[currentGreen][1])+len(vehicles[currentGreen][2])
289
+ # noOfVehicles = len(vehicles[directionNumbers[nextGreen]][1])+len(vehicles[directionNumbers[nextGreen]][2])-vehicles[directionNumbers[nextGreen]]['crossed']
290
+ # print("no. of vehicles = ",noOfVehicles)
291
+ noOfCars, noOfBuses, noOfTrucks, noOfBikes = 0,0,0,0
292
+ for j in range(len(vehicles[directionNumbers[nextGreen]][0])):
293
+ vehicle = vehicles[directionNumbers[nextGreen]][0][j]
294
+ if(vehicle.crossed==0):
295
+ vclass = vehicle.vehicleClass
296
+ # print(vclass)
297
+ noOfBikes += 1
298
+ for i in range(1,3):
299
+ for j in range(len(vehicles[directionNumbers[nextGreen]][i])):
300
+ vehicle = vehicles[directionNumbers[nextGreen]][i][j]
301
+ if(vehicle.crossed==0):
302
+ vclass = vehicle.vehicleClass
303
+ # print(vclass)
304
+ if(vclass=='car'):
305
+ noOfCars += 1
306
+ elif(vclass=='bus'):
307
+ noOfBuses += 1
308
+ elif(vclass=='truck'):
309
+ noOfTrucks += 1
310
+ # print(noOfCars)
311
+ greenTime = math.ceil(((noOfCars*carTime) + (noOfBuses*busTime) + (noOfTrucks*truckTime)+ (noOfBikes*bikeTime))/(noOfLanes+1))
312
+ # greenTime = math.ceil((noOfVehicles)/noOfLanes)
313
+ print('Green Time: ',greenTime)
314
+ if(greenTime<defaultMinimum):
315
+ greenTime = defaultMinimum
316
+ elif(greenTime>defaultMaximum):
317
+ greenTime = defaultMaximum
318
+ # greenTime = random.randint(15,50)
319
+ signals[(currentGreen+1)%(noOfSignals)].green = greenTime
320
+
321
+ def repeat():
322
+ global currentGreen, currentYellow, nextGreen
323
+ while(signals[currentGreen].green>0): # while the timer of current green signal is not zero
324
+ printStatus()
325
+ updateValues()
326
+ if(signals[(currentGreen+1)%(noOfSignals)].red==detectionTime): # set time of next green signal
327
+ thread = threading.Thread(name="detection",target=setTime, args=())
328
+ thread.daemon = True
329
+ thread.start()
330
+ # setTime()
331
+ time.sleep(1)
332
+ currentYellow = 1 # set yellow signal on
333
+ vehicleCountTexts[currentGreen] = "0"
334
+ # reset stop coordinates of lanes and vehicles
335
+ for i in range(0,3):
336
+ stops[directionNumbers[currentGreen]][i] = defaultStop[directionNumbers[currentGreen]]
337
+ for vehicle in vehicles[directionNumbers[currentGreen]][i]:
338
+ vehicle.stop = defaultStop[directionNumbers[currentGreen]]
339
+ while(signals[currentGreen].yellow>0): # while the timer of current yellow signal is not zero
340
+ printStatus()
341
+ updateValues()
342
+ time.sleep(1)
343
+ currentYellow = 0 # set yellow signal off
344
+
345
+ # reset all signal times of current signal to default times
346
+ signals[currentGreen].green = defaultGreen
347
+ signals[currentGreen].yellow = defaultYellow
348
+ signals[currentGreen].red = defaultRed
349
+
350
+ currentGreen = nextGreen # set next signal as green signal
351
+ nextGreen = (currentGreen+1)%noOfSignals # set next green signal
352
+ signals[nextGreen].red = signals[currentGreen].yellow+signals[currentGreen].green # set the red time of next to next signal as (yellow time + green time) of next signal
353
+ repeat()
354
+
355
+ # Print the signal timers on cmd
356
+ def printStatus():
357
+ for i in range(0, noOfSignals):
358
+ if(i==currentGreen):
359
+ if(currentYellow==0):
360
+ print(" GREEN TS",i+1,"-> r:",signals[i].red," y:",signals[i].yellow," g:",signals[i].green)
361
+ else:
362
+ print("YELLOW TS",i+1,"-> r:",signals[i].red," y:",signals[i].yellow," g:",signals[i].green)
363
+ else:
364
+ print(" RED TS",i+1,"-> r:",signals[i].red," y:",signals[i].yellow," g:",signals[i].green)
365
+ print()
366
+
367
+ # Update values of the signal timers after every second
368
+ def updateValues():
369
+ for i in range(0, noOfSignals):
370
+ if(i==currentGreen):
371
+ if(currentYellow==0):
372
+ signals[i].green-=1
373
+ signals[i].totalGreenTime+=1
374
+ else:
375
+ signals[i].yellow-=1
376
+ else:
377
+ signals[i].red-=1
378
+
379
+ # Generating vehicles in the simulation
380
+ def generateVehicles():
381
+ while True:
382
+ vehicle_type = random.randint(0, 3)
383
+ if vehicle_type == 3:
384
+ lane_number = 0
385
+ else:
386
+ lane_number = random.randint(0, 1) + 1
387
+ will_turn = 0
388
+ if lane_number == 2:
389
+ temp = random.randint(0, 4)
390
+ if temp <= 2:
391
+ will_turn = 1
392
+ elif temp > 2:
393
+ will_turn = 0
394
+ temp = random.randint(0, 999)
395
+ direction_number = 0
396
+ a = [400, 800, 900, 1000]
397
+ if temp < a[0]:
398
+ direction_number = 0
399
+ elif temp < a[1]:
400
+ direction_number = 1
401
+ elif temp < a[2]:
402
+ direction_number = 2
403
+ elif temp < a[3]:
404
+ direction_number = 3
405
+ Vehicle(lane_number, vehicleTypes[vehicle_type], direction_number, directionNumbers[direction_number], will_turn)
406
+ time.sleep(0.75)
407
+
408
+ def simulationTime():
409
+ global timeElapsed, simTime
410
+ while(True):
411
+ timeElapsed += 1
412
+ time.sleep(1)
413
+ if(timeElapsed==simTime):
414
+ totalVehicles = 0
415
+ print('Lane-wise Vehicle Counts')
416
+ for i in range(noOfSignals):
417
+ print('Lane',i+1,':',vehicles[directionNumbers[i]]['crossed'])
418
+ totalVehicles += vehicles[directionNumbers[i]]['crossed']
419
+ print('Total vehicles passed: ',totalVehicles)
420
+ print('Total time passed: ',timeElapsed)
421
+ print('No. of vehicles passed per unit time: ',(float(totalVehicles)/float(timeElapsed)))
422
+ os._exit(1)
423
+
424
+
425
+ class Main:
426
+ thread4 = threading.Thread(name="simulationTime",target=simulationTime, args=())
427
+ thread4.daemon = True
428
+ thread4.start()
429
+
430
+ thread2 = threading.Thread(name="initialization",target=initialize, args=()) # initialization
431
+ thread2.daemon = True
432
+ thread2.start()
433
+
434
+ # Colours
435
+ black = (0, 0, 0)
436
+ white = (255, 255, 255)
437
+
438
+ # Screensize
439
+ screenWidth = 1400
440
+ screenHeight = 800
441
+ screenSize = (screenWidth, screenHeight)
442
+
443
+ # Setting background image i.e. image of intersection
444
+ background = pygame.image.load('images/mod_int.png')
445
+
446
+ screen = pygame.display.set_mode(screenSize)
447
+ pygame.display.set_caption("SIMULATION")
448
+
449
+ # Loading signal images and font
450
+ redSignal = pygame.image.load('images/signals/red.png')
451
+ yellowSignal = pygame.image.load('images/signals/yellow.png')
452
+ greenSignal = pygame.image.load('images/signals/green.png')
453
+ font = pygame.font.Font(None, 30)
454
+
455
+ thread3 = threading.Thread(name="generateVehicles",target=generateVehicles, args=()) # Generating vehicles
456
+ thread3.daemon = True
457
+ thread3.start()
458
+
459
+ while True:
460
+ for event in pygame.event.get():
461
+ if event.type == pygame.QUIT:
462
+ sys.exit()
463
+
464
+ screen.blit(background,(0,0)) # display background in simulation
465
+ for i in range(0,noOfSignals): # display signal and set timer according to current status: green, yello, or red
466
+ if(i==currentGreen):
467
+ if(currentYellow==1):
468
+ if(signals[i].yellow==0):
469
+ signals[i].signalText = "STOP"
470
+ else:
471
+ signals[i].signalText = signals[i].yellow
472
+ screen.blit(yellowSignal, signalCoods[i])
473
+ else:
474
+ if(signals[i].green==0):
475
+ signals[i].signalText = "SLOW"
476
+ else:
477
+ signals[i].signalText = signals[i].green
478
+ screen.blit(greenSignal, signalCoods[i])
479
+ else:
480
+ if(signals[i].red<=10):
481
+ if(signals[i].red==0):
482
+ signals[i].signalText = "GO"
483
+ else:
484
+ signals[i].signalText = signals[i].red
485
+ else:
486
+ signals[i].signalText = "---"
487
+ screen.blit(redSignal, signalCoods[i])
488
+ signalTexts = ["","","",""]
489
+
490
+ # display signal timer and vehicle count
491
+ for i in range(0,noOfSignals):
492
+ signalTexts[i] = font.render(str(signals[i].signalText), True, white, black)
493
+ screen.blit(signalTexts[i],signalTimerCoods[i])
494
+ displayText = vehicles[directionNumbers[i]]['crossed']
495
+ vehicleCountTexts[i] = font.render(str(displayText), True, black, white)
496
+ screen.blit(vehicleCountTexts[i],vehicleCountCoods[i])
497
+
498
+ timeElapsedText = font.render(("Time Elapsed: "+str(timeElapsed)), True, black, white)
499
+ screen.blit(timeElapsedText,(1100,50))
500
+
501
+ # display the vehicles
502
+ for vehicle in simulation:
503
+ screen.blit(vehicle.currentImage, [vehicle.x, vehicle.y])
504
+ # vehicle.render(screen)
505
+ vehicle.move()
506
+ pygame.display.update()
507
+
508
+ Main()
509
+
510
+
511
+
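The green-time formula used in `setTime` above (a weighted vehicle count clamped to the configured minimum and maximum) can be sketched standalone. The per-class times and lane count below are illustrative defaults, not the values defined elsewhere in this repo:

```python
import math

def compute_green_time(no_of_cars, no_of_buses, no_of_trucks, no_of_bikes,
                       car_time=2, bus_time=3, truck_time=4, bike_time=1,
                       no_of_lanes=2, default_minimum=10, default_maximum=60):
    # Weighted sum of per-class crossing times, averaged over (lanes + 1),
    # mirroring the greenTime expression in setTime
    green = math.ceil((no_of_cars * car_time + no_of_buses * bus_time +
                       no_of_trucks * truck_time + no_of_bikes * bike_time)
                      / (no_of_lanes + 1))
    # Clamp to the configured bounds, as the if/elif in setTime does
    return max(default_minimum, min(green, default_maximum))
```

With 5 cars, 2 buses, 1 truck and 3 bikes the raw value is ceil(23/3) = 8, which the clamp raises to the 10-second minimum.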
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/tracker.py ADDED
@@ -0,0 +1,56 @@
+# TechVidvan Vehicle-Tracker
+
+import math
+
+class EuclideanDistTracker:
+    def __init__(self):
+        # Store the center positions of the objects
+        self.center_points = {}
+        # Keep the count of the IDs:
+        # each time a new object id is detected, the count increases by one
+        self.id_count = 0
+
+
+    def update(self, objects_rect):
+        # Object boxes and ids
+        objects_bbs_ids = []
+
+        # Get the center point of each new object
+        for rect in objects_rect:
+            x, y, w, h, index = rect
+            cx = (x + x + w) // 2
+            cy = (y + y + h) // 2
+
+            # Find out if that object was detected already
+            same_object_detected = False
+            for id, pt in self.center_points.items():
+                dist = math.hypot(cx - pt[0], cy - pt[1])
+
+                if dist < 25:
+                    self.center_points[id] = (cx, cy)
+                    # print(self.center_points)
+                    objects_bbs_ids.append([x, y, w, h, id, index])
+                    same_object_detected = True
+                    break
+
+            # A new object was detected: assign a new ID to it
+            if same_object_detected is False:
+                self.center_points[self.id_count] = (cx, cy)
+                objects_bbs_ids.append([x, y, w, h, self.id_count, index])
+                self.id_count += 1
+
+        # Clean the dictionary of center points to remove IDs no longer in use
+        new_center_points = {}
+        for obj_bb_id in objects_bbs_ids:
+            _, _, _, _, object_id, index = obj_bb_id
+            center = self.center_points[object_id]
+            new_center_points[object_id] = center
+
+        # Update the dictionary with unused IDs removed
+        self.center_points = new_center_points.copy()
+        return objects_bbs_ids
+
+
+
+def ad(a, b):
+    return a+b
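The matching rule inside `update` above (a detection is the same object as an existing track when its centre lies within 25 px of that track's last centre) can be isolated as a small helper. The function name and the `dist_threshold` default here are illustrative, not part of the repo's API:

```python
import math

def match_track(center_points, cx, cy, dist_threshold=25):
    """Return the id of the first existing track whose last centre is
    within dist_threshold pixels of (cx, cy), else None."""
    for obj_id, (px, py) in center_points.items():
        # Same Euclidean distance test as EuclideanDistTracker.update
        if math.hypot(cx - px, cy - py) < dist_threshold:
            return obj_id
    return None
```

A centre at (110, 110) is about 14 px from a track at (100, 100), so it matches; one at (130, 130) is about 42 px away and starts a new track.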
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/vehicle_count.py ADDED
@@ -0,0 +1,209 @@
+# TechVidvan Vehicle counting and Classification
+
+# Import necessary packages
+
+import cv2
+import csv
+import collections
+import numpy as np
+from tracker import *
+import os
+
+# Initialize Tracker
+tracker = EuclideanDistTracker()
+
+# Initialize the videocapture object
+cap = cv2.VideoCapture('video.mp4')
+input_size = 320
+
+# Detection confidence threshold
+confThreshold = 0.2
+nmsThreshold = 0.2
+
+font_color = (0, 0, 255)
+font_size = 0.5
+font_thickness = 2
+
+# Middle cross line position
+middle_line_position = 225
+up_line_position = middle_line_position - 15
+down_line_position = middle_line_position + 15
+
+
+# Store the COCO class names in a list
+classesFile = "coco.names"
+classNames = open(classesFile).read().strip().split('\n')
+print(classNames)
+print(len(classNames))
+
+# Class indices for our required detection classes
+required_class_index = [2, 3, 5, 7]
+
+detected_classNames = []
+
+## Model files
+modelConfiguration = 'yolov3-320.cfg'
+modelWeights = 'yolov3-320.weights'
+
+# Configure the network model
+net = cv2.dnn.readNetFromDarknet(modelConfiguration, modelWeights)
+
+# Configure the network backend
+net.setPreferableBackend(cv2.dnn.DNN_BACKEND_DEFAULT)
+net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
+
+# Define a random colour for each class
+np.random.seed(42)
+colors = np.random.randint(0, 255, size=(len(classNames), 3), dtype='uint8')
+
+
+# Function for finding the center of a rectangle
+def find_center(x, y, w, h):
+    x1 = int(w/2)
+    y1 = int(h/2)
+    cx = x + x1
+    cy = y + y1
+    return cx, cy
+
+# Lists for storing vehicle count information
+temp_up_list = []
+temp_down_list = []
+up_list = [0, 0, 0, 0]
+down_list = [0, 0, 0, 0]
+
+# Function for counting vehicles
+def count_vehicle(box_id, img):
+
+    x, y, w, h, id, index = box_id
+
+    # Find the center of the rectangle for detection
+    center = find_center(x, y, w, h)
+    ix, iy = center
+
+    # Find the current position of the vehicle
+    if (iy > up_line_position) and (iy < middle_line_position):
+
+        if id not in temp_up_list:
+            temp_up_list.append(id)
+
+    elif iy < down_line_position and iy > middle_line_position:
+        if id not in temp_down_list:
+            temp_down_list.append(id)
+
+    elif iy < up_line_position:
+        if id in temp_down_list:
+            temp_down_list.remove(id)
+            up_list[index] = up_list[index]+1
+
+    elif iy > down_line_position:
+        if id in temp_up_list:
+            temp_up_list.remove(id)
+            down_list[index] = down_list[index] + 1
+
+    # Draw a circle in the middle of the rectangle
+    cv2.circle(img, center, 2, (0, 0, 255), -1)  # end here
+    # print(up_list, down_list)
+
+# Function for finding the detected objects from the network output
+def postProcess(outputs, img):
+    global detected_classNames
+    height, width = img.shape[:2]
+    boxes = []
+    classIds = []
+    confidence_scores = []
+    detection = []
+    for output in outputs:
+        for det in output:
+            scores = det[5:]
+            classId = np.argmax(scores)
+            confidence = scores[classId]
+            if classId in required_class_index:
+                if confidence > confThreshold:
+                    # print(classId)
+                    w, h = int(det[2]*width), int(det[3]*height)
+                    x, y = int((det[0]*width)-w/2), int((det[1]*height)-h/2)
+                    boxes.append([x, y, w, h])
+                    classIds.append(classId)
+                    confidence_scores.append(float(confidence))
+
+    # Apply Non-Max Suppression
+    indices = cv2.dnn.NMSBoxes(boxes, confidence_scores, confThreshold, nmsThreshold)
+    # print(classIds)
+    for i in indices.flatten():
+        x, y, w, h = boxes[i][0], boxes[i][1], boxes[i][2], boxes[i][3]
+        # print(x,y,w,h)
+
+        color = [int(c) for c in colors[classIds[i]]]
+        name = classNames[classIds[i]]
+        detected_classNames.append(name)
+        # Draw class name and confidence score
+        cv2.putText(img, f'{name.upper()} {int(confidence_scores[i]*100)}%',
+                    (x, y-10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1)
+
+        # Draw bounding rectangle
+        cv2.rectangle(img, (x, y), (x + w, y + h), color, 1)
+        detection.append([x, y, w, h, required_class_index.index(classIds[i])])
+
+    # Update the tracker for each object
+    boxes_ids = tracker.update(detection)
+    for box_id in boxes_ids:
+        count_vehicle(box_id, img)
+
+
+def realTime():
+    while True:
+        success, img = cap.read()
+        if not success:  # stop when the video ends or a frame cannot be read
+            break
+        img = cv2.resize(img, (0, 0), None, 0.5, 0.5)
+        ih, iw, channels = img.shape
+
+        blob = cv2.dnn.blobFromImage(img, 1 / 255, (input_size, input_size), [0, 0, 0], 1, crop=False)
+
+        # Set the input of the network
+        net.setInput(blob)
+        layersNames = net.getLayerNames()
+        outputNames = [(layersNames[i - 1]) for i in net.getUnconnectedOutLayers()]
+        # Feed data to the network
+        outputs = net.forward(outputNames)
+
+        # Find the objects from the network output
+        postProcess(outputs, img)
+
+        # Draw the crossing lines
+        cv2.line(img, (0, middle_line_position), (iw, middle_line_position), (255, 0, 255), 2)
+        cv2.line(img, (0, up_line_position), (iw, up_line_position), (0, 0, 255), 2)
+        cv2.line(img, (0, down_line_position), (iw, down_line_position), (0, 0, 255), 2)
+
+        # Draw counting texts in the frame
+        cv2.putText(img, "Up", (110, 20), cv2.FONT_HERSHEY_SIMPLEX, font_size, font_color, font_thickness)
+        cv2.putText(img, "Down", (160, 20), cv2.FONT_HERSHEY_SIMPLEX, font_size, font_color, font_thickness)
+        cv2.putText(img, "Car: "+str(up_list[0])+" "+str(down_list[0]), (20, 40), cv2.FONT_HERSHEY_SIMPLEX, font_size, font_color, font_thickness)
+        cv2.putText(img, "Motorbike: "+str(up_list[1])+" "+str(down_list[1]), (20, 60), cv2.FONT_HERSHEY_SIMPLEX, font_size, font_color, font_thickness)
+        cv2.putText(img, "Bus: "+str(up_list[2])+" "+str(down_list[2]), (20, 80), cv2.FONT_HERSHEY_SIMPLEX, font_size, font_color, font_thickness)
+        cv2.putText(img, "Truck: "+str(up_list[3])+" "+str(down_list[3]), (20, 100), cv2.FONT_HERSHEY_SIMPLEX, font_size, font_color, font_thickness)
+
+        # Show the frames
+        cv2.imshow('Output', img)
+
+        if cv2.waitKey(1) == ord('q'):
+            break
+
+    # Write the vehicle counting information to a file and save it
+    with open("data.csv", 'w') as f1:
+        cwriter = csv.writer(f1)
+        cwriter.writerow(['Direction', 'car', 'motorbike', 'bus', 'truck'])
+        up_list.insert(0, "Up")
+        down_list.insert(0, "Down")
+        cwriter.writerow(up_list)
+        cwriter.writerow(down_list)
+    # print("Data saved at 'data.csv'")
+    # Finally release the capture object and destroy all active windows
+    cap.release()
+    cv2.destroyAllWindows()
+
+if __name__ == '__main__':
+    realTime()
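The two-line counting rule in `count_vehicle` above can be restated on its own: a vehicle is counted as moving up once its centre, previously between the middle and down lines, crosses above the up line, and symmetrically for down. The line positions and the zone labels below are illustrative assumptions for the sketch:

```python
# Assumed line positions, matching the script's 225 +/- 15 layout
up_line, middle_line, down_line = 210, 225, 240

def classify_crossing(prev_zone, cy):
    """Return (zone for centre-y cy, 'up'/'down' if a count fires, else None)."""
    if up_line < cy < middle_line:
        return 'pre_up', None          # analogous to joining temp_up_list
    if middle_line < cy < down_line:
        return 'pre_down', None        # analogous to joining temp_down_list
    if cy < up_line:
        # Counted "up" only if it arrived from below the middle line
        return 'above', 'up' if prev_zone == 'pre_down' else None
    # cy > down_line: counted "down" only if it arrived from above the middle line
    return 'below', 'down' if prev_zone == 'pre_up' else None
```

Requiring the earlier `pre_*` zone before firing a count is what stops a vehicle idling near one line from being counted repeatedly.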
traffic-light-management-using-AI-main/traffic-light-management-using-AI-main/yolov3-320.cfg ADDED
@@ -0,0 +1,788 @@
+[net]
+# Testing
+# batch=1
+# subdivisions=1
+# Training
+batch=64
+subdivisions=16
+width=608
+height=608
+channels=3
+momentum=0.9
+decay=0.0005
+angle=0
+saturation = 1.5
+exposure = 1.5
+hue=.1
+
+learning_rate=0.001
+burn_in=1000
+max_batches = 500200
+policy=steps
+steps=400000,450000
+scales=.1,.1
+
+[convolutional]
+batch_normalize=1
+filters=32
+size=3
+stride=1
+pad=1
+activation=leaky
+
+# Downsample
+
+[convolutional]
+batch_normalize=1
+filters=64
+size=3
+stride=2
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=32
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=64
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+# Downsample
+
+[convolutional]
+batch_normalize=1
+filters=128
+size=3
+stride=2
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=64
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=128
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+[convolutional]
+batch_normalize=1
+filters=64
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=128
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+# Downsample
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=3
+stride=2
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=128
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+[convolutional]
+batch_normalize=1
+filters=128
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+[convolutional]
+batch_normalize=1
+filters=128
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+[convolutional]
+batch_normalize=1
+filters=128
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+
+[convolutional]
+batch_normalize=1
+filters=128
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+[convolutional]
+batch_normalize=1
+filters=128
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+[convolutional]
+batch_normalize=1
+filters=128
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+[convolutional]
+batch_normalize=1
+filters=128
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+# Downsample
+
+[convolutional]
+batch_normalize=1
+filters=512
+size=3
+stride=2
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=512
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=512
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=512
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=512
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=512
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=512
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=512
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=512
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+# Downsample
+
+[convolutional]
+batch_normalize=1
+filters=1024
+size=3
+stride=2
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=512
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=1024
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+[convolutional]
+batch_normalize=1
+filters=512
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=1024
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+[convolutional]
+batch_normalize=1
+filters=512
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=1024
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+[convolutional]
+batch_normalize=1
+filters=512
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=1024
+size=3
+stride=1
+pad=1
+activation=leaky
+
+[shortcut]
+from=-3
+activation=linear
+
+######################
+
+[convolutional]
+batch_normalize=1
+filters=512
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+size=3
+stride=1
+pad=1
+filters=1024
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=512
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+size=3
+stride=1
+pad=1
+filters=1024
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=512
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+size=3
+stride=1
+pad=1
+filters=1024
+activation=leaky
+
+[convolutional]
+size=1
+stride=1
+pad=1
+filters=255
+activation=linear
+
+
+[yolo]
+mask = 6,7,8
+anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
+classes=80
+num=9
+jitter=.3
+ignore_thresh = .7
+truth_thresh = 1
+random=1
+
+
+[route]
+layers = -4
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[upsample]
+stride=2
+
+[route]
+layers = -1, 61
+
+
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+size=3
+stride=1
+pad=1
+filters=512
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+size=3
+stride=1
+pad=1
+filters=512
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+filters=256
+size=1
+stride=1
+pad=1
+activation=leaky
+
+[convolutional]
+batch_normalize=1
+size=3
+stride=1
+pad=1
+filters=512
+activation=leaky
+
+[convolutional]
+size=1
+stride=1
+pad=1
+filters=255
+activation=linear
+
+
+[yolo]
+mask = 3,4,5
+anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
+classes=80
+num=9
+jitter=.3
+ignore_thresh = .7
+truth_thresh = 1
+random=1
+
+
+
+[route]
+layers = -4
+
+[convolutional]
+batch_normalize=1
+filters=128
+size=1
+stride=1
+pad=1
714
+ activation=leaky
715
+
716
+ [upsample]
717
+ stride=2
718
+
719
+ [route]
720
+ layers = -1, 36
721
+
722
+
723
+
724
+ [convolutional]
725
+ batch_normalize=1
726
+ filters=128
727
+ size=1
728
+ stride=1
729
+ pad=1
730
+ activation=leaky
731
+
732
+ [convolutional]
733
+ batch_normalize=1
734
+ size=3
735
+ stride=1
736
+ pad=1
737
+ filters=256
738
+ activation=leaky
739
+
740
+ [convolutional]
741
+ batch_normalize=1
742
+ filters=128
743
+ size=1
744
+ stride=1
745
+ pad=1
746
+ activation=leaky
747
+
748
+ [convolutional]
749
+ batch_normalize=1
750
+ size=3
751
+ stride=1
752
+ pad=1
753
+ filters=256
754
+ activation=leaky
755
+
756
+ [convolutional]
757
+ batch_normalize=1
758
+ filters=128
759
+ size=1
760
+ stride=1
761
+ pad=1
762
+ activation=leaky
763
+
764
+ [convolutional]
765
+ batch_normalize=1
766
+ size=3
767
+ stride=1
768
+ pad=1
769
+ filters=256
770
+ activation=leaky
771
+
772
+ [convolutional]
773
+ size=1
774
+ stride=1
775
+ pad=1
776
+ filters=255
777
+ activation=linear
778
+
779
+
780
+ [yolo]
781
+ mask = 0,1,2
782
+ anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
783
+ classes=80
784
+ num=9
785
+ jitter=.3
786
+ ignore_thresh = .7
787
+ truth_thresh = 1
788
+ random=1
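Each `[yolo]` head above selects three of the nine shared anchor boxes via its `mask` (6,7,8 for the coarsest grid, 3,4,5 and 0,1,2 for the upsampled ones). As a quick sanity check on that bookkeeping, here is a minimal sketch of a Darknet-style cfg parser; `parse_cfg` is a hypothetical helper written for illustration, not part of this repo:

```python
# Minimal sketch: parse a Darknet-style cfg fragment into (section, options)
# pairs and check the [yolo] anchor/mask consistency. CFG is an excerpt of
# the yolov3.cfg added in this commit.

CFG = """\
[yolo]
mask = 6,7,8
anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
classes=80
num=9
"""

def parse_cfg(text):
    """Collect cfg sections as (name, {key: value}) pairs."""
    sections = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blanks and comment rules like '#####'
        if line.startswith('[') and line.endswith(']'):
            sections.append((line[1:-1], {}))
        else:
            key, _, value = line.partition('=')
            sections[-1][1][key.strip()] = value.strip()
    return sections

name, opts = parse_cfg(CFG)[0]
anchors = [int(v) for v in opts['anchors'].replace(' ', '').split(',')]
mask = [int(v) for v in opts['mask'].replace(' ', '').split(',')]
# num=9 anchor pairs are shared by all three heads; each head uses the
# three pairs selected by its mask indices.
assert len(anchors) == 2 * int(opts['num'])
first = anchors[2 * mask[0]:2 * mask[0] + 2]
print(name, mask, first)  # → yolo [6, 7, 8] [116, 90]
```

The same parser applied to the full file would recover all 106 layers, which is how lightweight Darknet loaders typically build the network graph before reading the binary weights.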
yolov3-320.weights ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:523e4e69e1d015393a1b0a441cef1d9c7659e3eb2d7e15f793f060a21b32f297
+ size 248007048
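The weights file itself is stored via Git LFS, so the commit only contains this three-line pointer. A minimal sketch of how such a pointer is parsed and how a downloaded payload would be verified against it (`parse_lfs_pointer` and `verify_payload` are hypothetical helper names for illustration):

```python
import hashlib

# The pointer text is copied verbatim from the yolov3-320.weights file
# added in this commit.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:523e4e69e1d015393a1b0a441cef1d9c7659e3eb2d7e15f793f060a21b32f297
size 248007048
"""

def parse_lfs_pointer(text):
    """Split 'key value' lines; the oid field is '<algo>:<hex digest>'."""
    fields = dict(line.split(' ', 1) for line in text.splitlines())
    algo, digest = fields['oid'].split(':', 1)
    return {'version': fields['version'], 'algo': algo,
            'digest': digest, 'size': int(fields['size'])}

def verify_payload(payload: bytes, pointer: dict) -> bool:
    # The real weights blob must match both the byte size and the
    # sha256 digest recorded in the pointer.
    return (len(payload) == pointer['size'] and
            hashlib.sha256(payload).hexdigest() == pointer['digest'])

ptr = parse_lfs_pointer(POINTER)
print(ptr['algo'], ptr['size'])  # → sha256 248007048
```

This check is what an LFS client effectively performs after fetching the 248 MB object, so it is also a handy standalone way to confirm a manually downloaded `yolov3-320.weights` is intact.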