import streamlit as st


def main():
    st.title("Step 8: Model Evaluation")
    st.markdown("""
### **Model Evaluation** :clipboard:

After training your model, it's time to **evaluate** how well it performs. This step is crucial for determining whether your model generalizes well to unseen data.

**:scales: Why Evaluate the Model?**

- **Measure Accuracy**: Check how accurately your model makes predictions.
- **Avoid Overfitting**: Ensure your model performs well not only on the training data but also on new, unseen data (see the sketch below).
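
For illustration, here is a minimal sketch of that idea (assuming scikit-learn, which your own pipeline may or may not use): hold out part of the data so the evaluation reflects performance on examples the model has never seen.

```python
# Minimal sketch (assumes scikit-learn): evaluate on held-out data so the
# score reflects generalization rather than memorization.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy placeholder data; substitute your own features and labels.
X = [[0.1], [0.2], [0.35], [0.4], [0.6], [0.7], [0.8], [0.9]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LogisticRegression().fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))  # accuracy on unseen data
```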

**Key Evaluation Metrics** (each computed in the sketch below):

- **Accuracy**: The percentage of predictions that are correct.
- **Precision**: The fraction of predicted positive cases that are actually positive.
- **Recall**: The fraction of actual positive cases that are correctly predicted.
- **F1 Score**: The harmonic mean of precision and recall, balancing the two.
- **Confusion Matrix**: A table comparing true labels against predicted labels.
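
As a minimal sketch (again assuming scikit-learn), each metric can be computed from placeholder `y_true` and `y_pred` label arrays:

```python
# Minimal sketch (assumes scikit-learn): computing the metrics listed above.
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

# Placeholder labels; substitute your model's actual and predicted labels.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("Accuracy: ", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("F1 Score: ", f1_score(y_true, y_pred))
print("Confusion matrix:")
print(confusion_matrix(y_true, y_pred))
```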

**:key: Evaluation Flow** (sketched in code after this list):

- If your **evaluation score is less than 90%**, go **back to Step 6 (Feature Engineering)**: the features may need improvement.
- If the score is still **below 90% after revisiting Step 6**, consider **changing the model**; sometimes a different algorithm performs better.
- If your **score is 90% or higher**, congratulations! You can move forward to **model deployment**.
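
The branching above fits in a few lines of Python; this sketch uses a hypothetical `next_step` helper and a placeholder `score` between 0.0 and 1.0:

```python
# Minimal sketch of the evaluation flow above. `next_step` is a
# hypothetical helper; `score` is a placeholder evaluation score.
def next_step(score: float) -> str:
    if score >= 0.90:
        return "Proceed to model deployment"
    return "Return to Step 6 (Feature Engineering), or try a different model"

print(next_step(0.93))  # Proceed to model deployment
print(next_step(0.84))  # Return to Step 6 (Feature Engineering), ...
```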

**:rocket: In Short**: Model evaluation tells you how well your model performs. Based on the evaluation score, you either refine the model or proceed to deployment.
""")
st.divider()
main()