Update app.py
app.py CHANGED
@@ -400,11 +400,11 @@ elif st.session_state.current_page == "EDA":
 # Model Building
 elif st.session_state.current_page == "Model Building":
     st.markdown("""
-        <h2 style='text-align: center; color: #333;'
+        <h2 style='text-align: center; color: #333;'>Model Building</h2>
     """, unsafe_allow_html=True)
 
     st.markdown("""
-        <
+        <h2>Introduction</h2>
         <p>In this section, we explore different <b>Ensemble Learning</b> techniques to improve model performance.</p>
         <p>We implemented three ensemble models:
         <span style='font-size:16px;'>🥇 <b>Voting Regressor</b> - 🎯 <b>Bagging Regressor</b> - 🌲 <b>Random Forest Regressor</b></span></p>
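For readers following along, the three ensembles named in this hunk can be sketched with scikit-learn roughly as below; the synthetic dataset and every hyperparameter here are placeholders, not the app's actual configuration.

```python
# Illustrative sketch of the three ensembles named in the new "Introduction"
# text: Voting, Bagging, and Random Forest regressors. Dataset and
# hyperparameters are assumptions, not the app's real setup.
from sklearn.datasets import make_regression
from sklearn.ensemble import VotingRegressor, BaggingRegressor, RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=300, n_features=5, noise=10.0, random_state=0)

models = {
    "voting": VotingRegressor([
        ("knn", KNeighborsRegressor(n_neighbors=5)),
        ("tree", DecisionTreeRegressor(max_depth=5, random_state=0)),
    ]),
    # BaggingRegressor defaults to decision-tree base estimators
    "bagging": BaggingRegressor(n_estimators=50, random_state=0),
    "forest": RandomForestRegressor(n_estimators=100, random_state=0),
}

# Training R-squared for each ensemble, the kind of side-by-side
# comparison the page builds up to.
scores = {name: m.fit(X, y).score(X, y) for name, m in models.items()}
```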
@@ -443,7 +443,7 @@ elif st.session_state.current_page == "Model Building":
     st.markdown("<hr style='border:1px solid #ddd;'>", unsafe_allow_html=True)
 
     st.markdown("""
-        <
+        <h2>Combining High & Low Variance Models</h2>
         <p>A crucial step to improve ensemble performance is <b>choosing models with different variance levels</b>:</p>
         <ul>
         <li><b>Voting Regressor:</b> Uses a combination of <b>high-variance</b> (Decision Tree, KNN with small K) and <b>low-variance</b> (KNN with large K, Decision Tree with depth constraint) models.</li>
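The variance-mixing idea in the bullet above can be made concrete with a small sketch. The estimator choices follow the bullet's own description (small-K vs. large-K KNN, unconstrained vs. depth-capped tree); the exact models in the app may differ.

```python
# Mixing high- and low-variance members in one VotingRegressor, per the
# bullet above. Data and member settings are illustrative assumptions.
from sklearn.datasets import make_regression
from sklearn.ensemble import VotingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=400, n_features=6, noise=15.0, random_state=1)

voter = VotingRegressor([
    ("deep_tree", DecisionTreeRegressor(random_state=1)),                 # high variance
    ("knn_small_k", KNeighborsRegressor(n_neighbors=2)),                  # high variance
    ("capped_tree", DecisionTreeRegressor(max_depth=3, random_state=1)),  # low variance
    ("knn_large_k", KNeighborsRegressor(n_neighbors=25)),                 # low variance
])
voter.fit(X, y)
pred = voter.predict(X[:5])  # prediction averaged across all four members
```

Averaging members whose errors vary in different ways is what lets the ensemble cancel out individual instability.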
@@ -456,7 +456,7 @@ elif st.session_state.current_page == "Model Building":
 
     # Hyperparameter Tuning
     st.markdown("""
-        <
+        <h2>⚡ Hyperparameter Tuning using Optuna</h2>
         <p>We optimized hyperparameters for <b>KNN, Decision Tree, Bagging Regressor, and Random Forest</b> using <b>Optuna</b>.</p>
         <p>Below are the <b>optimized parameters</b> for each model:</p>
 
@@ -494,7 +494,7 @@ elif st.session_state.current_page == "Model Building":
 
     # Model Performance Insights
     st.markdown("""
-        <
+        <h2>📊 Model Performance Insights</h2>
         <p>Here's how our ensemble models performed on training and test datasets:</p>
     """, unsafe_allow_html=True)
 
@@ -549,7 +549,7 @@ elif st.session_state.current_page == "Model Building":
 
     # Choosing the Best Model
     st.markdown("""
-        <
+        <h2>Choosing the Best Model</h2>
         <ul>
         <li>🔍 We checked for <b>overfitting</b> (high training accuracy, low test accuracy).</li>
         <li>🔍 We avoided <b>underfitting</b> (low training and test accuracy).</li>
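The overfitting and underfitting checks listed in this hunk reduce to a train-vs-test comparison. A minimal sketch, assuming an illustrative 0.1 score gap and 0.5 "low score" floor rather than values from the app:

```python
# Sketch of the over/underfitting check described in the list above:
# compare train vs. test R-squared. The gap threshold (0.1) and the
# low-score floor (0.5) are assumed values for illustration.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=6, noise=10.0, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=2)

def fit_report(model):
    model.fit(X_tr, y_tr)
    train_r2 = model.score(X_tr, y_tr)
    test_r2 = model.score(X_te, y_te)
    overfit = train_r2 - test_r2 > 0.1            # high train, much lower test
    underfit = train_r2 < 0.5 and test_r2 < 0.5   # low on both splits
    return train_r2, test_r2, overfit, underfit

tree = fit_report(DecisionTreeRegressor(random_state=2))  # fully grown tree
forest = fit_report(RandomForestRegressor(n_estimators=100, random_state=2))
```

A fully grown tree memorizes the training split (train R-squared of 1.0), so its train/test gap is the classic overfitting signature the page describes.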