import streamlit as st
st.set_page_config(page_title="Support Vector Machines", page_icon="🧭", layout="wide")
st.sidebar.title("🔍 Support Vector Machines")
st.sidebar.markdown("Learn how SVM works for classification and regression tasks.")
st.sidebar.markdown("---")
section = st.radio(
"📚 Select a section to explore:",
[
"📘 What is SVM?",
"🧠 Types of SVM",
"🛠️ Working of SVC",
"📏 Hard Margin vs Soft Margin",
"📐 Mathematical Formulation",
"✅ Pros & Cons of SVM",
"🔄 Dual Form & Kernel Trick",
"⚙️ Hyperparameter Tuning"
]
)
st.markdown("## 📌 Support Vector Machines (SVM)")
if section == "📘 What is SVM?":
    st.write("""
Support Vector Machines (SVM) is a **supervised learning algorithm** used for both **classification** and **regression** problems.
In practice, it's most often used for **classification** tasks.

🧠 SVM finds the **optimal decision boundary (hyperplane)** that maximizes the **margin** between classes.
""")
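    # A minimal sketch of SVM classification, shown as a static snippet
    # (assumes scikit-learn is installed; toy data chosen for illustration).
    st.code('''from sklearn.svm import SVC

X = [[0, 0], [1, 1], [2, 2], [3, 3]]   # toy 2-D points
y = [0, 0, 1, 1]                       # class labels

clf = SVC(kernel="linear")             # linear decision boundary
clf.fit(X, y)
print(clf.predict([[0.5, 0.5]]))       # predicts class 0
''', language="python")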
elif section == "🧠 Types of SVM":
    st.write("""
1. **Support Vector Classifier (SVC)**: Used for classification
2. **Support Vector Regression (SVR)**: Used for predicting continuous values
""")
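    # Both variants live in scikit-learn's svm module; a hedged sketch of
    # how they are instantiated (default hyperparameters shown):
    st.code('''from sklearn.svm import SVC, SVR

classifier = SVC()   # predicts discrete class labels
regressor = SVR()    # predicts continuous values
''', language="python")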
elif section == "🛠️ Working of SVC":
    st.write("""
Steps:
1. Consider candidate hyperplanes that separate the two classes.
2. Identify the **support vectors** (the points from each class closest to the boundary).
3. Choose the hyperplane that **maximizes the margin** defined by the support vectors.

🔑 Goal: Maximize the distance between the hyperplane and the nearest data points.
""")
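    # After fitting, scikit-learn exposes the support vectors from step 2
    # directly; a hedged sketch (assumes scikit-learn is installed):
    st.code('''from sklearn.svm import SVC

X = [[0, 0], [1, 1], [2, 2], [3, 3]]
y = [0, 0, 1, 1]

clf = SVC(kernel="linear").fit(X, y)
print(clf.support_vectors_)   # the boundary-defining points
print(clf.n_support_)         # support-vector count per class
''', language="python")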
elif section == "📏 Hard Margin vs Soft Margin":
    st.write("""
- **Hard Margin**:
  - Assumes **perfectly separable** data.
  - No misclassifications allowed.
- **Soft Margin**:
  - Allows **some misclassification** via slack variables.
  - More flexible; better for real-world, noisy data.
""")
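    # In scikit-learn, margin "hardness" is controlled by C: a very large C
    # approximates a hard margin, a small C yields a softer one. A sketch:
    st.code('''from sklearn.svm import SVC

hard_ish = SVC(kernel="linear", C=1e6)   # near-hard margin: few violations tolerated
soft = SVC(kernel="linear", C=0.1)       # soft margin: more slack allowed
''', language="python")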
elif section == "📐 Mathematical Formulation":
    st.markdown("### Hard Margin Condition:")
    st.latex(r"y_i (w^T x_i + b) \geq 1")
    st.markdown("### Soft Margin Condition:")
    st.latex(r"y_i (w^T x_i + b) \geq 1 - \xi_i, \quad \xi_i \geq 0")
    st.markdown(r"### Slack Variable $\xi_i$ Interpretation:")
    st.write(r"""
- $\xi_i = 0$: Correctly classified and outside the margin
- $0 < \xi_i \leq 1$: Inside the margin, but correctly classified
- $\xi_i > 1$: Misclassified
""")
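    # The margin condition can be checked numerically; a hedged, pure-Python
    # sketch that evaluates y_i (w^T x_i + b) for a hand-picked w and b:
    st.code('''w, b = [1.0, 1.0], -3.0   # hypothetical hyperplane x1 + x2 = 3
points = [([0, 0], -1), ([1, 1], -1), ([2, 2], 1), ([4, 4], 1)]

for x, y_i in points:
    margin = y_i * (sum(wi * xi for wi, xi in zip(w, x)) + b)
    print(x, margin)   # every value >= 1, so the hard-margin condition holds
''', language="python")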
elif section == "✅ Pros & Cons of SVM":
    st.markdown("### Advantages:")
    st.write("""
- Works well in **high-dimensional** spaces
- Effective with both linear and **non-linear** data (using kernels)
- Relatively resistant to **overfitting**, since the boundary depends only on the support vectors
""")
    st.markdown("### Disadvantages:")
    st.write("""
- Computationally **slow** to train on large datasets
- Requires careful tuning of hyperparameters (`C`, `gamma`, kernel choice)
""")
elif section == "🔄 Dual Form & Kernel Trick":
    st.markdown(r"""
When data is not linearly separable in its original space, we use the **kernel trick** to transform it implicitly.

### Common Kernels:
- **Linear Kernel**: $K(x, x') = x^T x'$
- **Polynomial Kernel**: $K(x, x') = (x^T x' + c)^d$
- **RBF (Gaussian)**: $K(x, x') = \exp(-\gamma \|x - x'\|^2)$
- **Sigmoid Kernel**: $K(x, x') = \tanh(\kappa \, x^T x' + c)$, mimicking a neural-network activation

✅ The kernel trick allows working in higher dimensions **without explicitly transforming** the data.
""")
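    # The RBF kernel above is easy to compute directly; a hedged, pure-Python
    # sketch (the gamma value is chosen arbitrarily for illustration):
    st.code('''import math

def rbf_kernel(x, z, gamma=0.5):
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

print(rbf_kernel([0, 0], [0, 0]))   # 1.0 for identical points
print(rbf_kernel([0, 0], [3, 4]))   # decays toward 0 as points move apart
''', language="python")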
elif section == "⚙️ Hyperparameter Tuning":
    st.write("""
- **C (Regularization)**:
  - Controls the trade-off between maximizing the margin and minimizing misclassification.
  - High C → strict on misclassification (may overfit)
  - Low C → allows more slack (often generalizes better)
- **Gamma** (only for RBF/Polynomial kernels):
  - Defines how far the influence of a single training point reaches.
  - High gamma → only close points matter → can overfit
  - Low gamma → wider influence → can underfit
""")
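    # A common way to tune C and gamma together is a grid search with
    # cross-validation; a hedged sketch assuming scikit-learn is installed
    # (X_train / y_train stand in for your own data):
    st.code('''from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X_train, y_train)   # X_train / y_train: your dataset
print(search.best_params_)     # best C / gamma combination found
''', language="python")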
st.markdown("---")
st.success("SVMs are powerful and flexible. Mastering margins, kernels, and regularization is key to using them effectively!")