GunaKoppula committed on
Commit 2ba7da1 · 1 Parent(s): f23c452

Update README.md

Files changed (1): README.md (+43 -11)

README.md CHANGED
@@ -12,7 +12,7 @@ license: mit
 
 # ERA-SESSION13 YoloV3 with Pytorch Lightning & Gradio
 
-HF Link: https://huggingface.co/spaces/RaviNaik/ERA-SESSION13
 
 ### Achieved:
 1. **Training Loss: 3.680**
@@ -21,13 +21,42 @@ HF Link: https://huggingface.co/spaces/RaviNaik/ERA-SESSION13
 4. **No obj accuracy: 97.991463%**
 5. **Obj accuracy: 75.976616%**
 6. **MAP: 0.4366795**
 
 ### Results
-![image](https://github.com/RaviNaik/ERA-SESSION13/blob/main/yolo_results.png)
 
 ### Gradio App
-![image](https://github.com/RaviNaik/ERA-SESSION13/assets/23289802/95335687-e717-4467-bcb1-227a79dd5c3f)
-![image](https://github.com/RaviNaik/ERA-SESSION13/assets/23289802/3ab67d32-38e6-436a-86d4-b76b5bd52a77)
 
 ### Model Summary
 ```python
@@ -397,11 +426,11 @@ HF Link: https://huggingface.co/spaces/RaviNaik/ERA-SESSION13
 ```
 
 ### LR Finder
-![image](https://github.com/RaviNaik/ERA-SESSION13/assets/23289802/a6d64f13-a7b7-4e17-abfc-3ec86e84b710)
 
 ### Loss & Accuracy
 **Training & Validation Loss:**
-![image](https://github.com/RaviNaik/ERA-SESSION13/assets/23289802/9391157e-a889-480d-b233-b72e86745245)
 
 **Testing Accuracy:**
 ```python
@@ -494,12 +523,15 @@ Obj accuracy is: 75.976616%
 MAP: 0.43667954206466675
 ```
 ### Tensorboard Plots
-**Training Loss vs Steps:** ![image](https://github.com/RaviNaik/ERA-SESSION13/assets/23289802/5cb753e0-377b-4d9f-a240-871270ed50db)
 
 **Validation Loss vs Steps:**
-(Info: Validation loss calculated every 10 epochs to save time, thats why the straight line)
-![image](https://github.com/RaviNaik/ERA-SESSION13/assets/23289802/7401c0aa-f7ff-4a5b-bab2-dbb5ebe0b400)
 
 ### GradCAM Representations
-EigenCAM is used to generate CAM representation, since usal gradient based method wont work with detection models like Yolo, FRCNN etc.
-![image](https://github.com/RaviNaik/ERA-SESSION13/assets/23289802/3e3917f1-c8d1-4c3f-a028-de1292575e0b)
 
 
 # ERA-SESSION13 YoloV3 with Pytorch Lightning & Gradio
 
+HF Link: https://huggingface.co/spaces/GunaKoppula/Session13
 
 ### Achieved:
 1. **Training Loss: 3.680**
 
 4. **No obj accuracy: 97.991463%**
 5. **Obj accuracy: 75.976616%**
 6. **MAP: 0.4366795**
+
+### Tasks:
+1. :heavy_check_mark: Move the code to PyTorch Lightning
+2. :heavy_check_mark: Train the model such that all of these are true:
+   - Class accuracy is more than 75%
+   - No Obj accuracy of more than 95%
+   - Object Accuracy of more than 70% (assuming you had to reduce the kernel numbers, else 80/98/78)
+   - Ideally trained till 40 epochs
+3. :heavy_check_mark: Add these training features:
+   - Add multi-resolution training - the shared code trains only on one resolution (416)
+   - Implement Mosaic Augmentation, applied only 75% of the time
+   - Train on float16
+   - GradCAM must be implemented
+4. :heavy_check_mark: Things that are allowed due to HW constraints:
+   - Change of batch size
+   - Change of resolution
+   - Change of OCP parameters
+5. :heavy_check_mark: Once done:
+   - Move the app to HuggingFace Spaces
+   - Allow custom upload of images
+   - Share some samples from the existing dataset
+   - Show the GradCAM output for the image that the user uploads as well as for the samples
+6. :heavy_check_mark: Mention things like:
+   - classes that your model supports
+   - link to the actual model
+7. :heavy_check_mark: Assignment:
+   - Share the HuggingFace App link
+   - Share the Lightning code link on GitHub
+   - Share the notebook link (with logs) on GitHub
 
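For the "Change of OCP parameters" item above: OCP is the One Cycle Policy, where the learning rate ramps up early in training and then anneals far below its starting value. A minimal plain-PyTorch sketch of that schedule (the stand-in model, `max_lr`, and step counts are illustrative, not this repo's actual values):

```python
import torch

# Tiny stand-in model; the real project trains YOLOv3.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# One Cycle Policy: LR rises for the first pct_start of the run,
# then anneals. max_lr and total_steps are illustrative only.
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-3, total_steps=100, pct_start=0.3
)

lrs = []
for _ in range(100):
    optimizer.step()   # normally preceded by loss.backward()
    scheduler.step()   # advance the schedule once per batch
    lrs.append(scheduler.get_last_lr()[0])
```

In Lightning, such a scheduler would typically be returned from `configure_optimizers` with `"interval": "step"` so it advances per batch rather than per epoch.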
 ### Results
+![image](https://github.com/GunaKoppula/ERAV1-Session-13/blob/main/yolo_results.png)
 
 ### Gradio App
+![image](https://github.com/GunaKoppula/ERAV1-Session-13/assets/61241928/5304d1e4-a545-4b8c-951c-10cc5da09e00)
+![image](https://github.com/GunaKoppula/ERAV1-Session-13/assets/61241928/8d558059-4477-4d54-a0c6-9f6bb424c77c)
 
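The screenshots above come from the Gradio app. A minimal sketch of the likely wiring (the `detect` function body and file names are assumptions for illustration; the real app runs YOLOv3, draws predicted boxes, and also renders GradCAM output):

```python
import numpy as np

# Stand-in for the real inference function: the actual app runs the
# trained YOLOv3 model and returns the image annotated with boxes.
def detect(image: np.ndarray) -> np.ndarray:
    annotated = image.copy()
    # ... run the model and draw boxes/labels here ...
    return annotated

# In the Space, a function like this is typically wrapped with Gradio:
#   import gradio as gr
#   gr.Interface(
#       fn=detect,
#       inputs=gr.Image(label="Upload an image"),   # custom uploads
#       outputs=gr.Image(label="Detections"),
#       examples=["examples/sample1.jpg"],          # dataset samples
#   ).launch()
```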
 ### Model Summary
 ```python
 
 ```
 
 ### LR Finder
+![image](https://github.com/GunaKoppula/ERAV1-Session-13/assets/61241928/7ffabc81-f1d3-4379-bbfb-6ba7da277a02)
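The LR finder curve above is produced by sweeping the learning rate upward while tracking the loss. A toy pure-PyTorch sketch of the underlying idea (the tiny linear model and sweep range are illustrative; with Lightning, `Tuner(trainer).lr_find(model)` normally does this for you):

```python
import torch

torch.manual_seed(0)

# Toy regression batch standing in for YOLO training data.
X, y = torch.randn(256, 10), torch.randn(256, 1)
model = torch.nn.Linear(10, 1)
loss_fn = torch.nn.MSELoss()

start_lr, end_lr, steps = 1e-6, 1.0, 100
optimizer = torch.optim.SGD(model.parameters(), lr=start_lr)
gamma = (end_lr / start_lr) ** (1 / steps)  # exponential LR ramp

lrs, losses = [], []
for _ in range(steps):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    for g in optimizer.param_groups:
        g["lr"] *= gamma                    # raise LR each step
    lrs.append(optimizer.param_groups[0]["lr"])
    losses.append(loss.item())

# A good starting LR is usually a bit below where the loss bottoms out.
best_lr = lrs[losses.index(min(losses))]
```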
 
 ### Loss & Accuracy
 **Training & Validation Loss:**
+![image](https://github.com/GunaKoppula/ERAV1-Session-13/assets/61241928/332fda1e-acfb-4aec-979f-93984bc43e2d)
 
 **Testing Accuracy:**
 ```python
 
 MAP: 0.43667954206466675
 ```
 ### Tensorboard Plots
+**Training Loss vs Steps:** ![image](https://github.com/GunaKoppula/ERAV1-Session-13/assets/61241928/3b4fb334-5b2a-45b0-a892-5222c147160a)
 
 **Validation Loss vs Steps:**
+(Info: validation loss is calculated only every 10 epochs to save time, hence the flat straight-line segments)
+![image](https://github.com/GunaKoppula/ERAV1-Session-13/assets/61241928/8b5f2b66-41d9-40bc-80f9-94589d5ffb59)
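In PyTorch Lightning, that every-10-epochs validation cadence, along with the float16 training from the tasks list, is usually just a pair of `Trainer` flags. A configuration sketch, assuming the repo uses Lightning 2.x naming (the exact flags in this repo are not shown here):

```python
import lightning.pytorch as pl  # Lightning 2.x; older code: import pytorch_lightning as pl

trainer = pl.Trainer(
    max_epochs=40,               # "ideally trained till 40 epochs"
    precision="16-mixed",        # float16 mixed-precision training
    check_val_every_n_epoch=10,  # validate every 10th epoch -> flat val curve between
)
```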
 
 ### GradCAM Representations
+EigenCAM is used to generate the CAM representation, since the usual gradient-based methods won't work with detection models like YOLO, Faster R-CNN, etc.
+![image](https://github.com/GunaKoppula/ERAV1-Session-13/assets/61241928/0f679781-fe30-41f8-9625-8fa312ae7f38)
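The EigenCAM approach above needs no gradients: it projects a layer's activation maps onto their first principal component. A minimal NumPy sketch of that projection (in practice the `pytorch-grad-cam` package's `EigenCAM` class would be applied to a chosen YOLOv3 layer; the activation shape here is illustrative):

```python
import numpy as np

def eigen_cam(activations: np.ndarray) -> np.ndarray:
    """Project (C, H, W) activations onto their first principal component."""
    c, h, w = activations.shape
    flat = activations.reshape(c, h * w).T           # (H*W, C) feature matrix
    flat = flat - flat.mean(axis=0)                  # center the features
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    cam = (flat @ vt[0]).reshape(h, w)               # first principal component
    cam = np.maximum(cam, 0)                         # keep positive evidence
    return cam / (cam.max() + 1e-8)                  # normalize to [0, 1]

# Fake activations standing in for a YOLOv3 feature map.
cam = eigen_cam(np.random.default_rng(0).random((16, 13, 13)))
```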