import streamlit as st

st.set_page_config(page_title="Support Vector Machines", page_icon="🧭", layout="wide")

st.sidebar.title("πŸ” Support Vector Machines")
st.sidebar.markdown("Learn how SVM works for classification and regression tasks.")
st.sidebar.markdown("---")

section = st.sidebar.radio(
    "πŸ“š Select a section to explore:",
    [
        "πŸ“˜ What is SVM?",
        "🧠 Types of SVM",
        "πŸ› οΈ Working of SVC",
        "πŸ“ Hard Margin vs Soft Margin",
        "πŸ“ Mathematical Formulation",
        "βœ… Pros & Cons of SVM",
        "πŸ”„ Dual Form & Kernel Trick",
        "βš™οΈ Hyperparameter Tuning"
    ]
)

st.markdown("<h1 style='text-align: center;'>πŸ“Œ Support Vector Machines (SVM)</h1>", unsafe_allow_html=True)

if section == "πŸ“˜ What is SVM?":
    st.write("""
    The Support Vector Machine (SVM) is a **supervised learning algorithm** used for both **classification** and **regression** problems.  
    In practice, it is most often used for **classification** tasks.
    
    🧠 SVM finds the **optimal decision boundary (hyperplane)** that maximizes the **margin** between classes.
    """)
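    # Illustrative sketch (assumes scikit-learn is available); the snippet below is
    # displayed to the reader with st.code, not executed by this app.
    st.markdown("### Minimal sklearn Sketch:")
    st.code("""
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=2,
                           n_informative=2, n_redundant=0, random_state=0)
clf = SVC(kernel="linear", C=1.0)  # linear SVC with default regularization
clf.fit(X, y)
print(clf.support_vectors_[:3])    # a few of the points that define the margin
""", language="python")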

elif section == "🧠 Types of SVM":
    st.write("""
    1. **Support Vector Classifier (SVC)**: Used for classification  
    2. **Support Vector Regression (SVR)**: Used for predicting continuous values
    """)

elif section == "πŸ› οΈ Working of SVC":
    st.write("""
    Steps:
    1. Start from a candidate separating hyperplane between the classes.
    2. Identify **support vectors** (closest points from each class).
    3. Adjust the hyperplane to **maximize the margin** between the support vectors.
    
    πŸ”‘ Goal: Maximize the distance between the hyperplane and the nearest data points.
    """)

elif section == "πŸ“ Hard Margin vs Soft Margin":
    st.write("""
    - **Hard Margin**:
        - Assumes **perfectly separable** data.
        - No misclassifications allowed.
    - **Soft Margin**:
        - Allows **some misclassification**.
        - More flexible, better for real-world noisy data.
    """)
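    # Illustrative contrast (assumes scikit-learn): a very large C approximates a
    # hard margin, while a small C yields a soft margin with more support vectors.
    st.code("""
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, random_state=0)
hard = SVC(kernel="linear", C=1e6).fit(X, y)   # near-hard margin
soft = SVC(kernel="linear", C=0.01).fit(X, y)  # soft margin
print(len(hard.support_), len(soft.support_))  # soft margin keeps more support vectors
""", language="python")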

elif section == "πŸ“ Mathematical Formulation":
    st.markdown("### Hard Margin Condition:")
    st.latex(r"y_i (w^T x_i + b) \geq 1")

    st.markdown("### Soft Margin Condition:")
    st.latex(r"y_i (w^T x_i + b) \geq 1 - \xi_i")

    st.markdown(r"### Slack Variable $\xi_i$ Interpretation:")
    st.markdown(r"""
    - $\xi_i = 0$: Correctly classified and outside the margin  
    - $0 < \xi_i \leq 1$: Inside the margin, but correctly classified  
    - $\xi_i > 1$: Misclassified
    """)
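    # Illustrative link between theory and code (assumes scikit-learn): for labels
    # y_i in {-1, +1}, the slack can be recovered as xi_i = max(0, 1 - y_i * f(x_i)).
    st.code("""
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, random_state=0)
y_pm = np.where(y == 1, 1, -1)       # map labels to {-1, +1}
clf = SVC(kernel="linear", C=1.0).fit(X, y_pm)
f = clf.decision_function(X)         # f(x_i) = w^T x_i + b
xi = np.maximum(0, 1 - y_pm * f)     # slack variables
print(int((xi > 1).sum()))           # count of misclassified points
""", language="python")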

elif section == "βœ… Pros & Cons of SVM":
    st.markdown("### Advantages:")
    st.write("""
    - Works well in **high-dimensional** spaces  
    - Effective with both linear and **non-linear** data (using kernels)  
    - Relatively resistant to **overfitting** thanks to margin maximization
    """)

    st.markdown("### Disadvantages:")
    st.write("""
    - Computationally **slow** to train on large datasets  
    - Requires careful tuning of hyperparameters (`C`, `gamma`, kernel choice)
    """)

elif section == "πŸ”„ Dual Form & Kernel Trick":
    st.markdown(r"""
    In the **dual form** of the SVM optimization problem, the data appear only through inner products $x_i^T x_j$.
    This is what makes the **kernel trick** possible: when data is not linearly separable in its original space,
    we replace the inner product with a kernel function that implicitly maps the data to a higher-dimensional space.
    
    ### Common Kernels:
    - **Linear Kernel**: $K(x, x') = x^T x'$  
    - **Polynomial Kernel**: $K(x, x') = (x^T x' + c)^d$  
    - **RBF (Gaussian)**: $K(x, x') = \exp(-\gamma \|x - x'\|^2)$  
    - **Sigmoid Kernel**: $K(x, x') = \tanh(\gamma \, x^T x' + c)$, mimicking a neural-network activation
    
    βœ… The kernel trick allows working in higher dimensions **without explicitly transforming** the data.
    """)
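    # Minimal numeric sketch (assumes NumPy): the RBF kernel computed directly
    # from its formula.
    st.code("""
import numpy as np

def rbf_kernel(x, x_prime, gamma=0.5):
    # K(x, x') = exp(-gamma * ||x - x'||^2)
    return np.exp(-gamma * np.sum((x - x_prime) ** 2))

x = np.array([1.0, 2.0])
print(rbf_kernel(x, x))              # identical points -> kernel value 1.0
""", language="python")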

elif section == "βš™οΈ Hyperparameter Tuning":
    st.write("""
    - **C (Regularization)**:
        - Controls the trade-off between maximizing the margin and minimizing misclassification.
        - High C β†’ penalizes misclassification heavily β†’ narrower margin (may overfit)  
        - Low C β†’ allows more slack β†’ wider margin (may underfit)
        
    - **Gamma** (for RBF, polynomial, and sigmoid kernels):
        - Defines how far the influence of a single training point reaches.  
        - High Gamma β†’ only nearby points matter β†’ wiggly boundary (can overfit)  
        - Low Gamma β†’ wider influence β†’ smoother boundary (can underfit)
    """)
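    # Hedged example (assumes scikit-learn): a small grid search over C and gamma
    # with 5-fold cross-validation.
    st.code("""
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)           # best C/gamma found by cross-validation
""", language="python")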

st.markdown("---")
st.success("SVMs are powerful and flexible. Mastering margins, kernels, and regularization is key to using them effectively!")