Update stages/model_evaluation.py
stages/model_evaluation.py  +30 -4
@@ -1,5 +1,31 @@
-import streamlit as st
-
-def main():
-    st.title("Model Evaluation")
+import streamlit as st
+
+def main():
+    st.title("Step 8: Model Evaluation")
+
+    st.markdown("""
+### **Model Evaluation** :clipboard:
+
+After training your model, it's time to **evaluate** how well it performs. This step is crucial to determine if your model can generalize well to unseen data.
+
+**:scales: Why Evaluate the Model?**
+- **Measure Accuracy**: Check how accurately your model makes predictions.
+- **Avoid Overfitting**: Ensure your model performs well not only on training data but also on new, unseen data.
+
+**Key Evaluation Metrics**:
+- **Accuracy**: The percentage of correct predictions.
+- **Precision**: How many of the predicted positive cases are actually positive.
+- **Recall**: How many of the actual positive cases are correctly predicted.
+- **F1 Score**: The balance between precision and recall.
+- **Confusion Matrix**: A visual representation of true vs. predicted values.
+
+**:key: Evaluation Flow**:
+- If your **evaluation score is less than 90%**, it's time to go **back to Step 6 (Feature Engineering)**. This means the features might need improvement.
+- If the score is still **below 90% after revisiting Step 6**, consider **changing the model**. Sometimes, a different algorithm or model might perform better.
+- If your **score is greater than 90%**, congratulations! You can move forward to **model deployment**.
+
+**:rocket: In Short**: Model evaluation helps you assess how well your model performs. Based on the evaluation score, you'll either refine your model or proceed to deployment.
+""")
+
+    st.divider()
 main()
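The metrics listed in the new markdown (accuracy, precision, recall, F1, confusion matrix) can be sketched in plain Python from their definitions. `evaluate` is a hypothetical helper for illustration, not part of the Space's code, and the labels below are made-up:

```python
def evaluate(y_true, y_pred):
    """Compute the evaluation metrics described above for binary labels (0/1)."""
    # Count the four confusion-matrix cells.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

    accuracy = (tp + tn) / len(y_true)                  # fraction of correct predictions
    precision = tp / (tp + fp) if tp + fp else 0.0      # of predicted positives, how many are real
    recall = tp / (tp + fn) if tp + fn else 0.0         # of real positives, how many were found
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)               # harmonic mean of precision and recall

    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "f1": f1, "confusion_matrix": [[tn, fp], [fn, tp]]}

metrics = evaluate([1, 0, 1, 1, 0, 1, 0, 0], [1, 0, 1, 0, 0, 1, 1, 0])
print(metrics)
```

In practice a library such as scikit-learn (`sklearn.metrics`) provides these functions directly; the hand-rolled version just makes the definitions explicit.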
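The 90% decision flow described in the new markdown can be sketched as a small function. The step names come from the text; the function name and signature are hypothetical, and the text leaves the exact-90% case unspecified (treated here as "go back"):

```python
def next_step(score: float, revisited_feature_engineering: bool) -> str:
    """Decide the next pipeline step after evaluation, per the 90% threshold."""
    if score > 0.90:
        return "model deployment"        # score above 90%: move forward
    if not revisited_feature_engineering:
        return "feature engineering"     # first failure: back to Step 6
    return "change the model"            # still below 90%: try another algorithm

print(next_step(0.95, False))
print(next_step(0.85, False))
print(next_step(0.85, True))
```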