Update pages/Vanishing Gradient (Sigmoid).py
pages/Vanishing Gradient (Sigmoid).py
CHANGED
@@ -74,4 +74,42 @@ st.markdown("""
 **Highlighted Issue:**
 - Repeated multiplication of small derivatives ($\\sigma'(z)$)
 - Gradients shrink exponentially → **Vanishing Gradient Problem**
-""")
+""")
+
+import numpy as np
+import pandas as pd
+
+st.write("""
+The **vanishing gradient problem** occurs when gradients become very small during backpropagation,
+causing weights to update very slowly (or stop updating).
+This mostly happens with activation functions like **Sigmoid** or **tanh** in deep networks.
+""")
+
+# --- Define sigmoid & its derivative ---
+def sigmoid(x):
+    return 1 / (1 + np.exp(-x))
+
+def sigmoid_derivative(x):
+    s = sigmoid(x)
+    return s * (1 - s)
+
+# --- Generate values ---
+x = np.linspace(-10, 10, 200)
+y_sigmoid = sigmoid(x)
+y_grad = sigmoid_derivative(x)
+
+# --- Put into DataFrame for st.line_chart ---
+df = pd.DataFrame({
+    "x": x,
+    "Sigmoid": y_sigmoid,
+    "Gradient": y_grad
+})
+
+st.subheader("📊 Sigmoid Function vs Gradient")
+st.line_chart(df.set_index("x"))
+
+st.write("""
+- Notice how the **Sigmoid** squashes input into the range (0,1).
+- The **gradient (derivative)** is maximum at `x = 0`, but becomes **very small** for large |x|.
+- In deep networks, multiplying many small gradients leads to the **vanishing gradient problem**.
+""")
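The hunk's central claim, that repeatedly multiplying small σ'(z) factors shrinks the gradient exponentially, is easy to check numerically. The following standalone sketch is separate from the committed file: it chains the same sigmoid_derivative through a stack of layers, assuming the best case z = 0 where the derivative peaks at 0.25, and ignoring the weight factors a real backward pass would also multiply in.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1 - s)

z = 0.0     # pre-activation assumed at every layer (best case: sigma'(0) = 0.25)
grad = 1.0  # gradient arriving from the loss

for layer in range(1, 21):
    grad *= sigmoid_derivative(z)  # chain rule: one sigma'(z) factor per layer
    if layer % 5 == 0:
        print(f"after {layer:2d} layers: gradient scaled by ~{grad:.2e}")

Even at the derivative's maximum this prints about 9.8e-04 after 5 layers and about 9.5e-07 after 10, which is the vanishing gradient effect the new page describes.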