Harika22 committed on
Commit
71188bb
· verified ·
1 Parent(s): de1f57e

Update pages/15_Metrics.py

Files changed (1)
  1. pages/15_Metrics.py +125 -0
pages/15_Metrics.py CHANGED
@@ -0,0 +1,125 @@
+ import streamlit as st
+
+ st.set_page_config(page_title="Model Evaluation Metrics", page_icon="📊", layout="wide")
+
+ st.sidebar.title("📊 Model Evaluation Metrics")
+ st.sidebar.markdown("Select a metric category from below.")
+
+ st.markdown("<h1 style='text-align: center;'>📝 Model Evaluation Metrics</h1>", unsafe_allow_html=True)
+
+ metric_type = st.radio(
+     "Select the type of model evaluation:",
+     ["🎯 Classification Metrics", "📈 Regression Metrics"]
+ )
+
+ if metric_type == "🎯 Classification Metrics":
+     st.markdown("## 🎯 Classification Metrics")
+     st.write("Used when the target variable is **categorical**.")
+
+     st.markdown("### 1. Accuracy")
+     st.write("**Definition**: the share of correct predictions out of all predictions.")
+     st.latex(r"Accuracy = \frac{TP + TN}{TP + FP + FN + TN}")
+     st.write("⚠️ Avoid relying on accuracy when classes are imbalanced.")
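+
+     # A minimal sketch of computing accuracy with scikit-learn's
+     # accuracy_score (assumes scikit-learn is installed alongside Streamlit):
+     st.code("""from sklearn.metrics import accuracy_score
+
+ y_true = [1, 0, 1, 1, 0, 1]
+ y_pred = [1, 0, 0, 1, 0, 1]
+ accuracy_score(y_true, y_pred)  # 5 correct out of 6 -> 0.833...
+ """, language="python")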
+
+     st.markdown("### 2. Confusion Matrix")
+     st.write("""
+ A matrix that compares actual and predicted labels.
+ Useful for understanding **true positives**, **false positives**, **true negatives**, and **false negatives**.
+
+ | | Predicted Positive | Predicted Negative |
+ |-----------------|--------------------|--------------------|
+ | Actual Positive | True Positive (TP) | False Negative (FN) |
+ | Actual Negative | False Positive (FP) | True Negative (TN) |
+
+ - Use for binary and multiclass classification.
+ """)
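+
+     # A sketch with scikit-learn's confusion_matrix; for binary labels
+     # {0, 1} it returns [[TN, FP], [FN, TP]]:
+     st.code("""from sklearn.metrics import confusion_matrix
+
+ y_true = [1, 0, 1, 1, 0, 1]
+ y_pred = [1, 0, 0, 1, 0, 1]
+ tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
+ # tn=2, fp=0, fn=1, tp=3
+ """, language="python")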
+
+     st.markdown("### 3. Precision")
+     st.latex(r"Precision = \frac{TP}{TP + FP}")
+     st.write("Of all predicted positives, how many were correct.")
+
+     st.markdown("### 4. Recall (Sensitivity)")
+     st.latex(r"Recall = \frac{TP}{TP + FN}")
+     st.write("Of all actual positives, how many were correctly identified.")
+
+     st.markdown("### 5. F1 Score")
+     st.latex(r"F1 = 2 \cdot \frac{Precision \cdot Recall}{Precision + Recall}")
+     st.write("Harmonic mean of precision and recall. Good for imbalanced classes.")
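+
+     # A sketch covering precision, recall, and F1 together, using
+     # scikit-learn's built-in scorers on the same toy labels as above:
+     st.code("""from sklearn.metrics import precision_score, recall_score, f1_score
+
+ y_true = [1, 0, 1, 1, 0, 1]
+ y_pred = [1, 0, 0, 1, 0, 1]
+ precision_score(y_true, y_pred)  # TP/(TP+FP) = 3/3 = 1.0
+ recall_score(y_true, y_pred)     # TP/(TP+FN) = 3/4 = 0.75
+ f1_score(y_true, y_pred)         # harmonic mean of the two ~= 0.857
+ """, language="python")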
+
+     st.markdown("### 6. Specificity (True Negative Rate)")
+     st.latex(r"Specificity = \frac{TN}{TN + FP}")
+     st.write("Measures how well the model identifies negatives.")
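+
+     # scikit-learn has no dedicated specificity scorer, but it falls out
+     # of the confusion matrix (a sketch, reusing the toy labels above):
+     st.code("""from sklearn.metrics import confusion_matrix
+
+ tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
+ specificity = tn / (tn + fp)  # 2/(2+0) = 1.0
+ """, language="python")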
+
+     st.markdown("### 7. ROC Curve and AUC")
+     st.write("""
+ - **ROC Curve**: plot of the True Positive Rate (Recall) against the False Positive Rate across classification thresholds
+ - **AUC** (Area Under the Curve): measures the model's ability to distinguish classes
+     - AUC = 1: perfect separation
+     - AUC = 0.5: no better than random guessing
+ """)
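+
+     # A minimal sketch of AUC with roc_auc_score; note it takes predicted
+     # scores/probabilities, not hard labels:
+     st.code("""from sklearn.metrics import roc_auc_score
+
+ y_true = [0, 0, 1, 1]
+ y_score = [0.1, 0.4, 0.35, 0.8]
+ roc_auc_score(y_true, y_score)  # 0.75
+ """, language="python")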
+
+     st.markdown("### 8. Log Loss (Logarithmic Loss)")
+     st.latex(r"LogLoss = -\frac{1}{n} \sum \left[ y \log(\hat{y}) + (1 - y) \log(1 - \hat{y}) \right]")
+     st.write("""
+ - Evaluates predicted probabilities instead of just labels
+ - Lower log loss indicates better performance
+ - Especially useful for probabilistic models
+ """)
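+
+     # A sketch of log loss on predicted probabilities of the positive
+     # class (binary case):
+     st.code("""from sklearn.metrics import log_loss
+
+ y_true = [1, 0, 1]
+ y_prob = [0.9, 0.2, 0.7]
+ log_loss(y_true, y_prob)  # ~0.228; a confident wrong prediction would inflate this
+ """, language="python")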
+
+ elif metric_type == "📈 Regression Metrics":
+     st.markdown("## 📈 Regression Metrics")
+     st.write("Used when the target variable is **continuous**.")
+
+     st.markdown("### 1. Mean Absolute Error (MAE)")
+     st.latex(r"MAE = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i|")
+     st.write("Measures the average absolute difference between actual and predicted values. More robust to outliers than MSE.")
+
+     st.markdown("### 2. Mean Squared Error (MSE)")
+     st.latex(r"MSE = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2")
+     st.write("Penalizes large errors more heavily than MAE. Sensitive to outliers.")
+
+     st.markdown("### 3. Root Mean Squared Error (RMSE)")
+     st.latex(r"RMSE = \sqrt{MSE}")
+     st.write("Square root of MSE. Easy to interpret since it has the same units as the target. See the sketch below for all three.")
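+
+     # A sketch of all three error metrics on one toy regression;
+     # RMSE is taken as the square root of MSE:
+     st.code("""import numpy as np
+ from sklearn.metrics import mean_absolute_error, mean_squared_error
+
+ y_true = [3.0, -0.5, 2.0, 7.0]
+ y_pred = [2.5, 0.0, 2.0, 8.0]
+ mean_absolute_error(y_true, y_pred)       # 0.5
+ mse = mean_squared_error(y_true, y_pred)  # 0.375
+ rmse = np.sqrt(mse)                       # ~0.612
+ """, language="python")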
+
+     st.markdown("### 4. R² Score (Coefficient of Determination)")
+     st.latex(r"R^2 = 1 - \frac{SS_{res}}{SS_{tot}}")
+     st.write("""
+ Indicates how well the model explains the variation in the data:
+ - **1.0** → perfect fit
+ - **0.0** → no better than predicting the mean
+ - **< 0** → worse than predicting the mean
+ """)
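+
+     # A minimal sketch of R² on the same toy regression as above:
+     st.code("""from sklearn.metrics import r2_score
+
+ y_true = [3.0, -0.5, 2.0, 7.0]
+ y_pred = [2.5, 0.0, 2.0, 8.0]
+ r2_score(y_true, y_pred)  # ~0.949
+ """, language="python")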
+
+     st.markdown("### 5. Adjusted R² Score")
+     st.latex(r"\text{Adjusted } R^2 = 1 - \left( \frac{(1 - R^2)(n - 1)}{n - k - 1} \right)")
+     st.write("""
+ - Adjusts R² for the number of predictors (k)
+ - Prevents overestimating performance when irrelevant features are added
+ """)
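+
+     # scikit-learn has no adjusted-R² helper; a sketch computing it from
+     # r2_score, with k (the number of predictors) assumed to be 1 here:
+     st.code("""r2 = r2_score(y_true, y_pred)  # ~0.949
+ n, k = len(y_true), 1
+ adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)  # ~0.923
+ """, language="python")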
+
+     st.markdown("### 6. Mean Absolute Percentage Error (MAPE)")
+     st.latex(r"MAPE = \frac{100\%}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|")
+     st.write("Expresses error as a percentage. Avoid it when actual values can be zero, since the division is undefined.")
+
+     st.markdown("### 7. Median Absolute Error")
+     st.write("A robust metric that is far less influenced by outliers: it takes the median of all absolute differences between actual and predicted values.")
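+
+     # A sketch of both metrics; note that scikit-learn's
+     # mean_absolute_percentage_error returns a fraction (0.23 = 23%),
+     # not a value already scaled by 100:
+     st.code("""from sklearn.metrics import mean_absolute_percentage_error, median_absolute_error
+
+ y_true = [3.0, 5.0, 2.5, 7.0]
+ y_pred = [2.5, 5.0, 4.0, 8.0]
+ mean_absolute_percentage_error(y_true, y_pred)  # ~0.227 (i.e. ~22.7%)
+ median_absolute_error(y_true, y_pred)           # 0.75
+ """, language="python")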
+
+ st.markdown("---")
+ st.markdown("### ✅ Choosing the Right Metric")
+ st.write("""
+ - **Classification**:
+     - Use **F1-score** for imbalanced data.
+     - Use **AUC-ROC** for probabilistic classifiers.
+     - Use **Log Loss** if working with predicted probabilities.
+ - **Regression**:
+     - Use **RMSE** when large errors are more serious.
+     - Use **MAE** when all errors matter equally.
+     - Use **R²** to evaluate explained variance.
+ - Always compare with a **baseline model**.
+ """)
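+
+ # A sketch of the baseline idea with scikit-learn's dummy estimators;
+ # X_train/X_test/y_train/y_test are hypothetical splits, not defined above:
+ st.code("""from sklearn.dummy import DummyClassifier, DummyRegressor
+
+ # Classification floor: always predict the most frequent class
+ DummyClassifier(strategy="most_frequent").fit(X_train, y_train).score(X_test, y_test)
+
+ # Regression floor: always predict the training mean (score here is R², ~0)
+ DummyRegressor(strategy="mean").fit(X_train, y_train).score(X_test, y_test)
+ """, language="python")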
+
+ st.success("Choosing the right metric helps you evaluate and improve your model with confidence!")