import streamlit as st

st.set_page_config(page_title="Support Vector Machines", page_icon="🧠", layout="wide")

st.sidebar.title("Support Vector Machines")
st.sidebar.markdown("Learn how SVM works for classification and regression tasks.")
st.sidebar.markdown("---")
section = st.sidebar.radio(
    "Select a section to explore:",
    [
        "What is SVM?",
        "Types of SVM",
        "Working of SVC",
        "Hard Margin vs Soft Margin",
        "Mathematical Formulation",
        "Pros & Cons of SVM",
        "Dual Form & Kernel Trick",
        "Hyperparameter Tuning",
    ],
)
st.markdown("<h1 style='text-align: center;'>Support Vector Machines (SVM)</h1>", unsafe_allow_html=True)
if section == "What is SVM?":
    st.write("""
    Support Vector Machines (SVM) is a **supervised learning algorithm** used for both **classification** and **regression** problems.
    In practice, it is most often used for **classification** tasks.

    SVM finds the **optimal decision boundary (hyperplane)** that maximizes the **margin** between classes.
    """)
elif section == "Types of SVM":
    st.write("""
    1. **Support Vector Classifier (SVC)**: used for classification
    2. **Support Vector Regression (SVR)**: used for predicting continuous values
    """)
elif section == "Working of SVC":
    st.write("""
    Steps:
    1. Start with a candidate separating hyperplane.
    2. Identify the **support vectors** (the closest points from each class).
    3. Adjust the hyperplane to **maximize the margin** between the support vectors.

    Goal: maximize the distance between the hyperplane and the nearest data points.
    """)
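# Illustrative addition (not part of the original app): the steps above, sketched with
# scikit-learn's SVC on a synthetic two-class dataset. Assumes scikit-learn is installed.
if section == "Working of SVC":
    from sklearn.datasets import make_blobs
    from sklearn.svm import SVC

    # Two well-separated clusters; a linear SVC places the max-margin hyperplane
    # using only the closest points (the support vectors).
    X_demo, y_demo = make_blobs(n_samples=100, centers=2, random_state=0)
    clf = SVC(kernel="linear", C=1.0).fit(X_demo, y_demo)
    st.write(f"Support vectors per class: {clf.n_support_.tolist()}")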
elif section == "Hard Margin vs Soft Margin":
    st.write("""
    - **Hard Margin**:
        - Assumes **perfectly separable** data.
        - No misclassifications allowed.
    - **Soft Margin**:
        - Allows **some misclassification**.
        - More flexible; better for real-world, noisy data.
    """)
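# Illustrative addition (not part of the original app): on overlapping data, a softer
# margin (smaller C) typically admits more support vectors. Assumes scikit-learn is installed.
if section == "Hard Margin vs Soft Margin":
    from sklearn.datasets import make_blobs
    from sklearn.svm import SVC

    X_demo, y_demo = make_blobs(n_samples=100, centers=2, cluster_std=2.5, random_state=0)
    sv_counts = {
        C: int(SVC(kernel="linear", C=C).fit(X_demo, y_demo).n_support_.sum())
        for C in (0.01, 1.0, 100.0)
    }
    st.write(f"Support vector counts by C: {sv_counts}")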
elif section == "Mathematical Formulation":
    st.markdown("### Hard Margin Condition:")
    st.latex(r"y_i (w^T x_i + b) \geq 1")
    st.markdown("### Soft Margin Condition:")
    st.latex(r"y_i (w^T x_i + b) \geq 1 - \xi_i")
    st.markdown(r"### Slack Variable $\xi_i$ Interpretation:")
    st.markdown(r"""
    - $\xi_i = 0$: correctly classified and outside the margin
    - $0 < \xi_i \leq 1$: inside the margin, but correctly classified
    - $\xi_i > 1$: misclassified
    """)
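# Illustrative addition (not part of the original app): slack values can be read off a
# fitted linear SVC as xi_i = max(0, 1 - y_i * f(x_i)), with labels y_i in {-1, +1}.
# Assumes scikit-learn is installed.
if section == "Mathematical Formulation":
    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.svm import SVC

    X_demo, y_raw = make_blobs(n_samples=100, centers=2, cluster_std=2.0, random_state=0)
    y_signed = np.where(y_raw == 1, 1, -1)
    clf = SVC(kernel="linear", C=1.0).fit(X_demo, y_signed)
    xi = np.maximum(0.0, 1.0 - y_signed * clf.decision_function(X_demo))
    st.write(f"Points outside the margin (xi = 0): {int((xi == 0).sum())}")
    st.write(f"Misclassified points (xi > 1): {int((xi > 1).sum())}")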
elif section == "Pros & Cons of SVM":
    st.markdown("### Advantages:")
    st.write("""
    - Works well in **high-dimensional** spaces
    - Effective with both linear and **non-linear** data (using kernels)
    - Relatively resistant to **overfitting**, since only the support vectors determine the boundary
    """)
    st.markdown("### Disadvantages:")
    st.write("""
    - Computationally **slow** to train on large datasets
    - Requires careful tuning of hyperparameters (`C`, `gamma`)
    """)
elif section == "Dual Form & Kernel Trick":
    st.markdown(r"""
    When data is not linearly separable in its original space, we use the **kernel trick** to implicitly map it into a higher-dimensional feature space.

    ### Common Kernels:
    - **Linear Kernel**: $K(x, x') = x^T x'$
    - **Polynomial Kernel**: $K(x, x') = (x^T x' + c)^d$
    - **RBF (Gaussian)**: $K(x, x') = \exp(-\gamma \|x - x'\|^2)$
    - **Sigmoid Kernel**: $K(x, x') = \tanh(\gamma x^T x' + c)$, mimicking a neural-network activation

    The kernel trick allows working in higher dimensions **without explicitly transforming** the data: the dual form of the SVM optimization involves the training points only through inner products, which the kernel computes directly.
    """)
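# Illustrative addition (not part of the original app): the RBF formula above agrees
# with scikit-learn's pairwise rbf_kernel. Assumes scikit-learn and NumPy are installed.
if section == "Dual Form & Kernel Trick":
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(0)
    A = rng.normal(size=(5, 3))
    gamma = 0.5
    # Explicit K(x, x') = exp(-gamma * ||x - x'||^2) over all pairs of rows.
    K_manual = np.exp(-gamma * ((A[:, None, :] - A[None, :, :]) ** 2).sum(axis=-1))
    st.write(f"Manual RBF matches sklearn: {np.allclose(K_manual, rbf_kernel(A, A, gamma=gamma))}")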
elif section == "Hyperparameter Tuning":
    st.write("""
    - **C (Regularization)**:
        - Controls the trade-off between maximizing the margin and minimizing misclassification.
        - High C → strict on misclassification (may overfit)
        - Low C → allows more slack (better generalization)
    - **Gamma** (for RBF/Polynomial kernels):
        - Defines how far the influence of a single data point reaches.
        - High gamma → close points matter more → can overfit
        - Low gamma → wider influence → can underfit
    """)
st.markdown("---")
st.success("SVMs are powerful and flexible. Mastering margins, kernels, and regularization is key to using them effectively!")