# pages/Logistic Regression.py
import streamlit as st
st.set_page_config(page_title="Logistic Regression", page_icon="🔍", layout="wide")
# Title
st.markdown("<h1 style='text-align: center;'>🔍 Logistic Regression - Made Simple</h1>", unsafe_allow_html=True)
# What is Logistic Regression?
st.header("📚 What is Logistic Regression?")
st.markdown("""
Logistic Regression is a **classification algorithm** used to predict **discrete outcomes** (like Yes/No, 0/1, Spam/Not Spam).
Think of it like:
> "Based on your features (like age, income, health), should you get insurance? Yes or No?"
### Key Facts:
- It's a **supervised learning** algorithm.
- Despite the name, it's used for **classification**, not regression.
- It outputs a **probability** between 0 and 1 using the **sigmoid function**.
""")
# Use Cases
st.header("🎯 Real-World Use Cases")
st.markdown("""
- 📧 **Spam Detection**: Is this email spam or not?
- 🏥 **Medical Diagnosis**: Does a patient have diabetes or not?
- 👥 **Customer Churn**: Will a customer leave the company?
- 💳 **Fraud Detection**: Is this transaction fraudulent?
""")
# How it works
st.header("⚙️ How Does Logistic Regression Work?")
st.markdown("Let's break it down step-by-step:")
with st.expander("🔢 Step 1: Make a Linear Equation (just like Linear Regression)"):
    st.latex(r"z = w_0 + w_1x_1 + w_2x_2 + \ldots + w_nx_n")
    st.markdown("""
    - We calculate a **weighted sum** of the input features.
    - It's just a straight-line equation, where the weights decide how important each feature is.
    """)
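The weighted sum above is just a dot product plus a bias. A minimal standalone sketch (the weights and feature values below are made up for illustration; this snippet is not part of the app itself):

```python
import numpy as np

# Hypothetical weights and one sample's features (illustrative values only)
w0 = -1.0                        # bias term w_0
w = np.array([0.8, -0.5, 0.3])   # feature weights w_1..w_3
x = np.array([2.0, 1.0, 4.0])    # feature values x_1..x_3

z = w0 + np.dot(w, x)            # z = w_0 + w_1*x_1 + w_2*x_2 + w_3*x_3
print(round(z, 2))               # 1.3
```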
with st.expander("🧪 Step 2: Pass it into a Sigmoid Function"):
    st.latex(r"\sigma(z) = \frac{1}{1 + e^{-z}}")
    st.markdown("""
    - The sigmoid squashes the output into a **probability** between 0 and 1.
    - If the output is close to 1 → **Class 1**; if it is close to 0 → **Class 0**.
    """)
    st.image("https://upload.wikimedia.org/wikipedia/commons/8/88/Logistic-curve.svg",
             caption="S-Shaped Sigmoid Curve", use_container_width=True)
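The sigmoid itself is a one-liner. A standalone sketch of the formula above, evaluated at a few sample points:

```python
import numpy as np

def sigmoid(z):
    """Map any real-valued z to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))   # 0.5 -- right on the decision boundary
print(sigmoid(4.0))   # ~0.982 -- strongly Class 1
print(sigmoid(-4.0))  # ~0.018 -- strongly Class 0
```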
with st.expander("🧠 Step 3: Make the Final Prediction"):
    st.markdown(r"""
    - If $\sigma(z) > 0.5$, we predict **Class 1**
    - Otherwise, we predict **Class 0**
    """)
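The thresholding step can be sketched as (0.5 is the default cutoff, but it is tunable):

```python
def predict_class(probability, threshold=0.5):
    # Convert a sigmoid probability into a hard 0/1 label
    return 1 if probability > threshold else 0

print(predict_class(0.73))  # 1
print(predict_class(0.21))  # 0
```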
# Visualize the Flow
st.header("🔄 Visualization of the Process")
st.markdown("Feature inputs β†’ Weighted sum β†’ Sigmoid function β†’ Probability β†’ Class label")
# Loss Function
st.header("❌ Loss Function - Binary Cross-Entropy")
st.markdown("To improve the model, we need to measure **how wrong it is**. That's where the loss function comes in.")
st.latex(r"\text{Loss} = -\left[\, y \cdot \log(\hat{y}) + (1 - y) \cdot \log(1 - \hat{y}) \,\right]")
st.markdown(r"""
- If the predicted probability ($\hat{y}$) is far from the actual label ($y$), the loss is high.
- Minimizing this loss (typically via gradient descent) adjusts the weights to improve predictions.
""")
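A standalone sketch of the binary cross-entropy formula for a single example (the clipping constant `eps` is a common numerical safeguard, not part of the formula itself):

```python
import math

def bce_loss(y_true, y_pred, eps=1e-12):
    # Clip the prediction so log() never sees exactly 0 or 1
    y_pred = min(max(y_pred, eps), 1.0 - eps)
    return -(y_true * math.log(y_pred) + (1 - y_true) * math.log(1.0 - y_pred))

print(round(bce_loss(1, 0.9), 3))  # 0.105 -- confident and correct: low loss
print(round(bce_loss(1, 0.1), 3))  # 2.303 -- confident and wrong: high loss
```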
# Evaluation Metrics
st.header("📏 Evaluation Metrics Explained Simply")
with st.expander("1️⃣ Accuracy"):
    st.latex(r"\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}")
    st.markdown("**How often** did we get it right?")
with st.expander("2️⃣ Precision (When you care about False Positives)"):
    st.latex(r"\text{Precision} = \frac{TP}{TP + FP}")
    st.markdown("Out of all predicted **Positives**, how many were **correct**?")
with st.expander("3️⃣ Recall (When you care about False Negatives)"):
    st.latex(r"\text{Recall} = \frac{TP}{TP + FN}")
    st.markdown("Out of all actual **Positives**, how many did we catch?")
with st.expander("4️⃣ F1 Score (Balance between Precision and Recall)"):
    st.latex(r"F_1 = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}")
    st.markdown("It balances both **precision** and **recall**.")
with st.expander("5️⃣ ROC-AUC"):
    st.markdown("""
    - 📈 ROC: Curve that plots **True Positive Rate** vs **False Positive Rate** across classification thresholds
    - 🧠 AUC: Area Under the Curve; closer to 1 = better performance
    """)
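To make the metric formulas concrete, here is a standalone sketch using made-up confusion-matrix counts (TP, TN, FP, FN are hypothetical values, not results from a real model):

```python
# Hypothetical confusion-matrix counts for a binary classifier
TP, TN, FP, FN = 40, 45, 5, 10

accuracy = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)
recall = TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)

print(accuracy)             # 0.85
print(round(precision, 3))  # 0.889
print(recall)               # 0.8
print(round(f1, 3))         # 0.842
```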
# Summary
st.header("✅ Quick Summary")
st.markdown("""
- Logistic Regression is great for **binary classification** (and extends to multi-class via softmax or one-vs-rest)
- Uses the **sigmoid function** to map outputs to probabilities
- Optimizes weights using **log loss** (binary cross-entropy)
- Simple, fast, and interpretable model, best suited for **linearly separable** data
- Evaluate using metrics like Accuracy, Precision, Recall, F1, and ROC-AUC
""")
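Everything in this summary comes together in a short scikit-learn sketch. This uses a synthetic dataset from `make_classification` (all parameter choices below are illustrative, not a recommendation):

```python
# End-to-end sketch: train, predict probabilities, and evaluate
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=4, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LogisticRegression()  # sigmoid + log-loss under the hood
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]  # probabilities for Class 1
preds = model.predict(X_test)              # labels thresholded at 0.5

print("Accuracy:", accuracy_score(y_test, preds))
print("ROC-AUC :", roc_auc_score(y_test, probs))
```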
st.success("🎉 Great job! You've just mastered the basics of Logistic Regression.")