HichTala committed · verified · Commit 8f7a462 · Parent(s): 21de365

Update README.md

Files changed (1): README.md (+174 −1)
pipeline_tag: object-detection
library_name: transformers
---
<div align="center">
<p>
<img src="https://raw.githubusercontent.com/HichTala/draw2/refs/heads/main/figures/banner-draw.png">
</p>

<div>

[![Licence](https://img.shields.io/pypi/l/ultralytics)](LICENSE)
[![Twitter](https://badgen.net/badge/icon/twitter?icon=twitter&label)](https://twitter.com/tiazden)
[![Github](https://img.shields.io/badge/-github-181717?logo=github&labelColor=555&color=%23181717)](https://github.com/HichTala/draw2)
[![HuggingFace Downloads](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fhuggingface.co%2Fapi%2Fmodels%2FHichTala%2Fdraw2&query=%24.downloads&logo=huggingface&label=downloads&color=%23FFD21E)](https://huggingface.co/HichTala/draw2)
[![OBS Plugin](https://img.shields.io/badge/-obs_plugin-302E31?logo=obsstudio&labelColor=555&color=%23302E31)](https://github.com/HichTala/draw2-obsplugin)
[![WandB](https://img.shields.io/badge/visualize_in-W%26B-yellow?logo=weightsandbiases&color=%23FFBE00)](https://wandb.ai/hich_/draw)
[![Medium](https://img.shields.io/badge/Medium-12100E?style=flat&logo=medium&logoColor=white)](https://medium.com/@hich.tala.phd/how-i-trained-a-model-to-detect-and-recognise-a-wide-range-of-yu-gi-oh-cards-6ea71da007fd)

</div>

</div>
DRAW 2 (which stands for **D**etect and **R**ecognize **A** **W**ide range of cards, version 2) is an object detector
trained to detect _Yu-Gi-Oh!_ cards in all types of images, and in particular in dueling images.

With this new version, **DRAW 2** goes beyond its predecessor: it is more accurate, more robust, and far easier to use.
It now includes an [OBS plugin](https://github.com/HichTala/draw2-obsplugin) that lets users seamlessly integrate the
detector directly into their live streams or recorded videos, even users **without any particular technical skills**.
The plugin can display detected cards in real time for an enhanced viewing experience.

Other works exist (see [Related Works](#div-aligncenterrelated-worksdiv)), but none of them is capable of recognizing
cards during a duel.

This project is licensed under the [GNU Affero General Public License v3.0](LICENCE); all contributions are welcome.
---

## <div align="center">📄Documentation</div>

If you just want to use the plugin, please refer to the [OBS plugin page](https://github.com/HichTala/draw2-obsplugin);
you don't need to install anything from this repository.
The documentation below is for people who want to use the detector outside of OBS, which requires some coding skills.
### Installation

You need Python installed. Installing Python isn't detailed here; please refer to
the [documentation](https://www.python.org/).

First, install PyTorch. It is recommended to use a package manager such
as [miniconda](https://docs.conda.io/projects/miniconda/en/latest/);
please refer to its [documentation](https://docs.conda.io/projects/miniconda/en/latest/).

When everything is set up, run the following command to install PyTorch:

```shell
python -m pip install torch torchvision
```

If you want to use your GPUs to make everything run faster, please refer to
the [documentation](https://pytorch.org/get-started/locally/).

Then you just have to clone the repository and install the `requirements`:

```shell
git clone https://github.com/HichTala/draw2
cd draw2
python -m pip install -r requirements.txt
```

If you don't want to clone the repository and already have all the requirements installed, you can instead run:

```shell
python -m pip install git+https://github.com/HichTala/draw2.git
```

Your installation is now complete.

### 🚀 Usage

Once the installation is done, you can use the detector by executing the following command:

```shell
python -m draw
```

You can use the `--help` flag to see all available options:

```shell
python -m draw --help
```

Here are the most important options:

- `--source`: path to your image or video, or a webcam index (default is `0`, for the webcam).
- `--save`: save path for the output.
- `--show`: display the output in a window.
- `--display-card`: display detected cards on the output.
- `--deck-list`: path to a `.ydk` file containing the list of cards in your deck, for better recognition.
- `--fps`: FPS of the saved video (default is 60).

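The deck list passed to `--deck-list` uses the standard `.ydk` format: plain text in which `#main`, `#extra`, and `!side` mark the deck sections and every other non-comment line is a numeric card passcode. A minimal reader for that format could look like this (a hypothetical sketch for illustration, not part of `draw`'s API):

```python
# Hypothetical .ydk reader (illustrates the format; not draw's code).
# Lines starting with '#' or '!' are section markers or comments
# ("#main", "#extra", "!side"); all other lines are numeric card passcodes.
def parse_ydk(path):
    sections = {"main": [], "extra": [], "side": []}
    current = "main"
    with open(path, encoding="utf-8") as f:
        for raw in f:
            line = raw.strip()
            if not line:
                continue
            if line.startswith(("#", "!")):
                name = line.lstrip("#!").lower()
                if name in sections:  # section marker, e.g. "#extra"
                    current = name
                continue  # any other '#' line is a comment
            if line.isdigit():
                sections[current].append(int(line))
    return sections
```

Supplying the deck list narrows the set of candidate cards to those actually in your deck, which is why the option improves recognition.
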
---

## <div align="center">💡Inspiration</div>

This project is inspired by content creator [SuperZouloux](https://www.youtube.com/watch?v=64-LfbggqKI)'s idea of a
hologram bringing _Yu-Gi-Oh!_ cards to life.
His project uses chips inserted under the sleeves of each card,
which are read by the play mat, enabling the cards to be recognized.

Inserting the chips into the sleeves is not only laborious, but also poses another problem:
face-down cards are read in exactly the same way as face-up ones.
An automatic detector is therefore a much better-suited solution.

Although this project was discouraged by _KONAMI_<sup>®</sup>, the game's publisher (which is quite understandable),
one can nevertheless imagine such a system being used to display the cards played during a live duel,
so that viewers can read them.

---

## <div align="center">🔗Related Works</div>

Although, to my knowledge, `draw` is the first detector capable of locating and recognizing _Yu-Gi-Oh!_ cards in a
dueling environment, other works exist and were a source of inspiration for this project. They are worth mentioning here.

[Yu-Gi-Oh! NEURON](https://www.konami.com/games/eu/fr/products/yugioh_neuron/) is an official application developed by
_KONAMI_<sup>®</sup>.
It's packed with features, including card recognition. The application can recognize up to 20 cards at
a time, which is very decent.
The drawback is that the cards must be of good quality to be recognized, which is not necessarily the case in a duel
context.
What's more, it can't be integrated into other tools, so the only way to use it is through the application itself.

[yugioh one shot learning](https://github.com/vanstorm9/yugioh-one-shot-learning), made by `vanstorm9`, is a
_Yu-Gi-Oh!_ card classification program that lets you recognize cards. It uses a Siamese network to train its
classification model. It gives very impressive results on good-quality images, but less convincing ones on low-quality
images, and it can't localize cards.

[Yolov11](https://github.com/ultralytics/ultralytics) is the latest version of the very famous `yolo` family of object
detection models, and it supports oriented bounding boxes.
It hardly needs an introduction today: it represents the state of the art in real-time object detection.

[ViT](https://arxiv.org/pdf/2010.11929.pdf) is a pre-trained model for image classification based on the Vision
Transformer architecture.
It relies entirely on attention mechanisms over image patches instead of using convolutional layers.
It fits our task well since versions pre-trained on large-scale datasets such as ImageNet-21k are available.
This is particularly relevant for our use case, as it enables handling a large number of visual categories, on the
order of the 13k+ unique cards found in _Yu-Gi-Oh!_.

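To give a sense of scale for the patch-based approach, here is a quick back-of-the-envelope sketch (illustrative only; a real ViT also prepends a class token and adds position embeddings before the transformer layers):

```python
# A ViT splits an H x W image into (H // p) * (W // p) non-overlapping
# p x p patches; each patch is flattened (p * p * 3 values for RGB)
# and linearly embedded before entering the transformer.
def num_patches(height, width, patch_size=16):
    assert height % patch_size == 0 and width % patch_size == 0
    return (height // patch_size) * (width // patch_size)

print(num_patches(224, 224))  # ViT-Base/16 at 224x224: 196 patches
```

Attention is computed over this sequence of patch tokens, which is what lets the model relate distant regions of a card image without any convolutions.
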

[SpellTable](https://spelltable.wizards.com/) is a free application designed and built by `Jonathan Rowny` and his team
for playing paper _Magic: The Gathering_ from a distance.
It allows players to click on a card in any player's feed to quickly identify it.
It has some similarity with `draw`, since it can localize and recognize any card from a built-in database of 17,000
cards.
The idea is close to this project's, but it is not where this project originated.

---

## <div align="center">💬Contact</div>

You can reach me on Twitter [@tiazden](https://twitter.com/tiazden) or by email
at [hich.tala.phd@gmail.com](mailto:hich.tala.phd@gmail.com).

---

## <div align="center">⭐Star History</div>

<a href="https://www.star-history.com/#HichTala/draw2&type=date&legend=top-left">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=HichTala/draw2&type=date&theme=dark&legend=top-left" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=HichTala/draw2&type=date&legend=top-left" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=HichTala/draw2&type=date&legend=top-left" />
</picture>
</a>