marlonsousa committed · verified
Commit 2d16da1 · Parent(s): d2c0ee0

Upload README.md

Files changed (1): README.md (+46 −1)
README.md CHANGED
@@ -11,4 +11,49 @@ license: mit
  short_description: 'This project implements a Generative Adversarial Network '
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ ## **GenDigit**
+
+ This project implements a **Generative Adversarial Network (GAN)** that generates images of handwritten digits using the **MNIST dataset**.
+ The network consists of two primary components:
+
+ - **Generator (G):** a neural network that generates fake images from random noise and the desired label (digit).
+ - **Discriminator (D):** a neural network that evaluates whether an image is real (from the dataset) or fake (produced by the Generator).
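To illustrate how the two components fit together, here is a minimal NumPy sketch of a conditional Generator and a Discriminator as untrained single-layer maps. The dimensions, the one-hot label conditioning, and all names here are illustrative assumptions, not the project's actual architecture:

```python
import numpy as np

# Illustrative dimensions: MNIST images are 28x28 = 784 pixels, labels are
# one of 10 digits, and the noise size is a free choice (100 here).
NOISE_DIM, NUM_CLASSES, IMG_DIM = 100, 10, 28 * 28

rng = np.random.default_rng(0)

def one_hot(label: int) -> np.ndarray:
    """Encode a digit label as a one-hot vector for conditioning G."""
    v = np.zeros(NUM_CLASSES)
    v[label] = 1.0
    return v

# Untrained single-layer "networks" with random weights, standing in for G and D.
W_g = rng.normal(0.0, 0.02, (NOISE_DIM + NUM_CLASSES, IMG_DIM))
W_d = rng.normal(0.0, 0.02, (IMG_DIM, 1))

def generator(z: np.ndarray, label: int) -> np.ndarray:
    """G: map (noise, label) to a fake image with pixels in [-1, 1] via tanh."""
    return np.tanh(np.concatenate([z, one_hot(label)]) @ W_g)

def discriminator(img: np.ndarray) -> float:
    """D: map an image to the probability that it is real (sigmoid output)."""
    return float(1.0 / (1.0 + np.exp(-(img @ W_d))))

z = rng.normal(size=NOISE_DIM)
fake = generator(z, label=7)   # a (still meaningless) fake "7", shape (784,)
p_real = discriminator(fake)   # D's belief that the image is real, in (0, 1)
```

Training then alternates between updating D on real/fake batches and updating G through D's feedback, using the losses defined below.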
+
+ ### **Generator (G) Loss Function**
+
+ The Generator tries to minimize the **binary cross-entropy loss** by generating images that fool the Discriminator. The Generator's loss is:
+
+ $$
+ L_G = -\mathbb{E}_{z}[ \log D(G(z)) ]
+ $$
+
+ Where:
+ - $z$ is the random noise input to the Generator.
+ - $G(z)$ is the generated image.
+ - $D(G(z))$ is the Discriminator's probability that the generated image is real.
+
+ The Generator is trained to minimize this loss by improving its ability to generate realistic images.
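Approximating the expectation with a batch mean, the Generator loss can be computed as below; the Discriminator outputs are made-up example values, not outputs of a real model:

```python
import numpy as np

# Hypothetical Discriminator outputs D(G(z)) for a small batch of generated
# images; in practice these come from the Discriminator network.
d_fake = np.array([0.1, 0.2, 0.4, 0.8])

# L_G = -E_z[ log D(G(z)) ], approximated by the batch mean.
loss_g = -np.mean(np.log(d_fake))  # ≈ 1.2629 for this batch
```

The loss falls toward 0 as $D(G(z))$ approaches 1, i.e. as the Generator gets better at fooling the Discriminator.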
+
+ ### **Discriminator (D) Loss Function**
+
+ The Discriminator tries to distinguish between real and fake images. Its loss consists of two parts:
+
+ 1. **Loss for real images:**
+ $$
+ L_{D_{\text{real}}} = -\mathbb{E}_{x_{\text{real}}}[ \log D(x_{\text{real}}) ]
+ $$
+ 2. **Loss for fake images:**
+ $$
+ L_{D_{\text{fake}}} = -\mathbb{E}_{z}[ \log (1 - D(G(z))) ]
+ $$
+
+ Where:
+ - $x_{\text{real}}$ is a real image from the dataset.
+ - $D(x_{\text{real}})$ is the Discriminator's prediction for a real image.
+ - $G(z)$ is the fake image generated by the Generator.
+
+ The total loss for the Discriminator is:
+
+ $$
+ L_D = L_{D_{\text{real}}} + L_{D_{\text{fake}}}
+ $$
+
+ The Discriminator is trained to minimize this loss by improving its ability to classify real and fake images correctly.
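The two terms and their sum can be sketched the same way, again with made-up Discriminator outputs standing in for a real batch:

```python
import numpy as np

# Hypothetical Discriminator outputs for a batch of real images and a batch
# of generated (fake) images.
d_real = np.array([0.9, 0.8, 0.95])  # D(x_real): ideally near 1
d_fake = np.array([0.2, 0.1, 0.3])   # D(G(z)):   ideally near 0

loss_real = -np.mean(np.log(d_real))        # penalizes real images scored low
loss_fake = -np.mean(np.log(1.0 - d_fake))  # penalizes fake images scored high
loss_d = loss_real + loss_fake              # L_D = L_D_real + L_D_fake
```

Both terms are minimized when D scores real images near 1 and fake images near 0; note that `loss_fake` is exactly the mirror image of the Generator's objective, which is what makes the training adversarial.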