Spaces:
Sleeping
Sleeping
| import streamlit as st | |
| st.set_page_config(page_title="Logistic Regression", page_icon="π", layout="wide") | |
| # Title | |
| st.markdown("<h1 style='text-align: center;'>π Logistic Regression - Made Simple</h1>", unsafe_allow_html=True) | |
| # What is Logistic Regression? | |
| st.header("π What is Logistic Regression?") | |
| st.markdown(""" | |
| Logistic Regression is a **classification algorithm** used to predict **discrete outcomes** (like Yes/No, 0/1, Spam/Not Spam). | |
| Think of it like: | |
| > βBased on your features (like age, income, health), should you get insurance? Yes or No?β | |
| ### Key Facts: | |
| - It's a **supervised learning** algorithm. | |
| - Despite the name, itβs used for **classification**, not regression. | |
| - It outputs a **probability** between 0 and 1 using the **sigmoid function**. | |
| """) | |
| # Use Cases | |
| st.header("π― Real-World Use Cases") | |
| st.markdown(""" | |
| - π§ **Spam Detection**: Is this email spam or not? | |
| - π₯ **Medical Diagnosis**: Does a patient have diabetes or not? | |
| - π₯ **Customer Churn**: Will a customer leave the company? | |
| - π³ **Fraud Detection**: Is this transaction fraudulent? | |
| """) | |
| # How it works | |
| st.header("βοΈ How Does Logistic Regression Work?") | |
| st.markdown("Let's break it down step-by-step:") | |
| with st.expander("π’ Step 1: Make a Linear Equation (just like Linear Regression)"): | |
| st.latex(r"z = w_0 + w_1x_1 + w_2x_2 + \ldots + w_nx_n") | |
| st.markdown(""" | |
| - We calculate a **weighted sum** of the input features. | |
| - It's just a straight line equation where weights decide how important each feature is. | |
| """) | |
| with st.expander("π§ͺ Step 2: Pass it into a Sigmoid Function"): | |
| st.latex(r"\sigma(z) = \frac{1}{1 + e^{-z}}") | |
| st.markdown(""" | |
| - The sigmoid turns the output into a **probability** between 0 and 1. | |
| - If the output is close to 1 β Class 1 | |
| If close to 0 β Class 0 | |
| """) | |
| st.image("https://upload.wikimedia.org/wikipedia/commons/8/88/Logistic-curve.svg", | |
| caption="S-Shaped Sigmoid Curve", use_column_width=True) | |
| with st.expander("π§ Step 3: Make the Final Prediction"): | |
| st.markdown(""" | |
| - If \\( \sigma(z) > 0.5 \\), we predict **Class 1** | |
| - Else, we predict **Class 0** | |
| """) | |
| # Visualize the Flow | |
| st.header("π Visualization of the Process") | |
| st.markdown("Feature inputs β Weighted sum β Sigmoid function β Probability β Class label") | |
| # Loss Function | |
| st.header("β Loss Function - Binary Cross Entropy") | |
| st.markdown("To improve the model, we need to know **how wrong it is**. Thatβs where the loss function comes in.") | |
| st.latex(r"Loss = - [y \cdot \log(\hat{y}) + (1 - y) \cdot \log(1 - \hat{y})]") | |
| st.markdown(""" | |
| - If predicted probability (\\( \hat{y} \\)) is far from actual label (\\( y \\)), the loss is high. | |
| - This helps adjust weights to improve predictions. | |
| """) | |
| # Evaluation Metrics | |
| st.header("π Evaluation Metrics Explained Simply") | |
| with st.expander("1οΈβ£ Accuracy"): | |
| st.latex(r"Accuracy = \frac{TP + TN}{TP + TN + FP + FN}") | |
| st.markdown("**How often** did we get it right?") | |
| with st.expander("2οΈβ£ Precision (When you care about False Positives)"): | |
| st.latex(r"Precision = \frac{TP}{TP + FP}") | |
| st.markdown("Out of all predicted **Positives**, how many were **correct**?") | |
| with st.expander("3οΈβ£ Recall (When you care about False Negatives)"): | |
| st.latex(r"Recall = \frac{TP}{TP + FN}") | |
| st.markdown("Out of all actual **Positives**, how many did we catch?") | |
| with st.expander("4οΈβ£ F1 Score (Balance between Precision and Recall)"): | |
| st.latex(r"F1 = 2 \cdot \frac{Precision \cdot Recall}{Precision + Recall}") | |
| st.markdown("It balances both **precision** and **recall**.") | |
| with st.expander("5οΈβ£ ROC-AUC"): | |
| st.markdown(""" | |
| - π ROC: Curve that plots **True Positive Rate** vs **False Positive Rate** | |
| - π§ AUC: Area Under Curve β closer to 1 = better performance | |
| """) | |
| # Summary | |
| st.header("β Quick Summary") | |
| st.markdown(""" | |
| - Logistic Regression is great for **binary and multi-class classification** | |
| - Uses **sigmoid function** to map outputs to probability | |
| - Optimizes using **log loss** | |
| - Simple, fast, and interpretable model β best for **linearly separable** data | |
| - Evaluate using metrics like Accuracy, Precision, Recall, F1, and ROC-AUC | |
| """) | |
| st.success("π Great job! You've just mastered the basics of Logistic Regression.") | |