import streamlit as st

st.set_page_config(page_title="Support Vector Machines", layout="wide")

st.sidebar.title("Support Vector Machines")
st.sidebar.markdown("Learn how SVM works for classification and regression tasks.")
st.sidebar.markdown("---")
section = st.sidebar.radio(
    "Select a section to explore:",
    [
        "What is SVM?",
        "Types of SVM",
        "Working of SVC",
        "Hard Margin vs Soft Margin",
        "Mathematical Formulation",
        "Pros & Cons of SVM",
        "Dual Form & Kernel Trick",
        "Hyperparameter Tuning",
    ],
)
st.markdown("<h1 style='text-align: center;'>Support Vector Machines (SVM)</h1>", unsafe_allow_html=True)
if section == "What is SVM?":
    st.write("""
Support Vector Machines (SVM) is a **supervised learning algorithm** used for both **classification** and **regression** problems.
In practice, it is most often used for **classification** tasks.

SVM finds the **optimal decision boundary (hyperplane)** that maximizes the **margin** between classes.
""")
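    # Illustrative example (not in the original app; assumes scikit-learn is
    # installed): a minimal linear SVC fit on a toy 2-D dataset.
    st.code('''
from sklearn.svm import SVC

X = [[0, 0], [1, 1], [2, 2], [3, 3]]  # toy 2-D points
y = [0, 0, 1, 1]                      # two classes

clf = SVC(kernel="linear")
clf.fit(X, y)
print(clf.support_vectors_)  # the points that define the margin
''', language="python")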
elif section == "Types of SVM":
    st.write("""
1. **Support Vector Classifier (SVC)**: used for classification
2. **Support Vector Regression (SVR)**: used for predicting continuous values
""")
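    # Illustrative (not in the original app; assumes scikit-learn): the two
    # estimator classes that correspond to these SVM types.
    st.code('''
from sklearn.svm import SVC, SVR

clf = SVC()  # classification: predicts discrete class labels
reg = SVR()  # regression: predicts continuous values
''', language="python")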
elif section == "Working of SVC":
    st.write("""
Steps:
1. Start with a candidate separating hyperplane.
2. Identify the **support vectors** (the points from each class closest to the hyperplane).
3. Adjust the hyperplane to **maximize the margin** between the support vectors.

Goal: maximize the distance between the hyperplane and the nearest data points.
""")
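    # Illustrative sketch (not in the original app): the quantity being
    # maximized is the point-to-hyperplane distance |w.x + b| / ||w||.
    st.code('''
import math

def distance_to_hyperplane(w, b, x):
    """Distance from point x to the hyperplane w.x + b = 0."""
    wx = sum(wi * xi for wi, xi in zip(w, x))
    return abs(wx + b) / math.sqrt(sum(wi * wi for wi in w))

distance_to_hyperplane([3, 4], -5, [3, 3])  # |21 - 5| / 5 = 3.2
''', language="python")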
elif section == "Hard Margin vs Soft Margin":
    st.write("""
- **Hard Margin**:
  - Assumes the data is **perfectly linearly separable**.
  - No misclassifications are allowed.
- **Soft Margin**:
  - Allows **some misclassification** via slack variables.
  - More flexible, and better suited to real-world, noisy data.
""")
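    # Illustrative (not in the original app; assumes scikit-learn): in practice
    # the hard/soft trade-off is set through C -- a very large C approximates a
    # hard margin, a small C gives a softer one.
    st.code('''
from sklearn.svm import SVC

near_hard_margin = SVC(kernel="linear", C=1e6)  # almost no slack tolerated
soft_margin = SVC(kernel="linear", C=0.1)       # wider margin, some errors allowed
''', language="python")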
elif section == "Mathematical Formulation":
    st.markdown("### Hard Margin Condition:")
    st.latex(r"y_i (w^T x_i + b) \geq 1")
    st.markdown("### Soft Margin Condition:")
    st.latex(r"y_i (w^T x_i + b) \geq 1 - \xi_i, \quad \xi_i \geq 0")
    st.markdown(r"### Slack Variable $\xi_i$ Interpretation:")
    st.write(r"""
- $\xi_i = 0$: correctly classified and outside the margin
- $0 < \xi_i \leq 1$: inside the margin, but correctly classified
- $\xi_i > 1$: misclassified
""")
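    # Illustrative sketch (not in the original app): the slack value follows
    # directly from the soft-margin condition above.
    st.code('''
def slack(y, f):
    """xi = max(0, 1 - y * f), where f = w.x + b is the decision value."""
    return max(0.0, 1.0 - y * f)

slack(1, 2.0)   # 0.0 -> correct, outside the margin
slack(1, 0.5)   # 0.5 -> correct, but inside the margin
slack(1, -0.5)  # 1.5 -> misclassified (xi > 1)
''', language="python")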
elif section == "Pros & Cons of SVM":
    st.markdown("### Advantages:")
    st.write("""
- Works well in **high-dimensional** spaces
- Effective on both linear and **non-linear** data (via kernels)
- Relatively resistant to **overfitting**, since maximizing the margin acts as regularization
""")
    st.markdown("### Disadvantages:")
    st.write("""
- Training is computationally **slow** on large datasets
- Requires careful tuning of hyperparameters (`C`, `gamma`)
""")
elif section == "Dual Form & Kernel Trick":
    st.markdown(r"""
When data is not linearly separable in its original space, the **kernel trick** implicitly maps it to a higher-dimensional space where it may become separable.

### Common Kernels:
- **Linear kernel**: $K(x, x') = x^T x'$
- **Polynomial kernel**: $K(x, x') = (x^T x' + c)^d$
- **RBF (Gaussian) kernel**: $K(x, x') = \exp(-\gamma \|x - x'\|^2)$
- **Sigmoid kernel**: $K(x, x') = \tanh(\kappa \, x^T x' + c)$, which mimics a neural-network activation

The kernel trick allows working in the higher-dimensional space **without explicitly transforming** the data, because the dual form of the SVM problem depends on the data only through inner products.
""")
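    # Illustrative sketch (not in the original app): the RBF kernel computed
    # directly from its formula.
    st.code('''
import math

def rbf_kernel(x, z, gamma=0.5):
    """K(x, z) = exp(-gamma * ||x - z||^2)"""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

rbf_kernel([1, 2], [1, 2])  # 1.0 -- identical points are maximally similar
rbf_kernel([1, 2], [4, 6])  # exp(-0.5 * 25), near zero for distant points
''', language="python")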
elif section == "Hyperparameter Tuning":
    st.write("""
- **C (regularization)**:
  - Controls the trade-off between maximizing the margin and minimizing misclassification.
  - High C → little slack tolerated (may overfit)
  - Low C → more slack allowed (often generalizes better)
- **Gamma** (RBF, polynomial, and sigmoid kernels):
  - Controls how far the influence of a single training point reaches.
  - High gamma → only nearby points matter → can overfit
  - Low gamma → wider influence → can underfit
""")
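    # Illustrative (not in the original app; assumes scikit-learn): C and gamma
    # are typically tuned together with a cross-validated grid search.
    st.code('''
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
# search.fit(X, y); search.best_params_ then holds the best combination
''', language="python")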
st.markdown("---")
st.success("SVMs are powerful and flexible. Mastering margins, kernels, and regularization is key to using them effectively!")