{ "cells": [ { "cell_type": "markdown", "id": "title-cell", "metadata": {}, "source": [ "# Sprint 5 Report – Resampling and Ensemble Techniques\n", "**Course:** Data Intensive Systems (4DV652) \n", "**Lab:** Lab Lecture 5 \n", "**Deadline:** 2026-02-25 \n", "\n", "---\n", "\n", "## Overview\n", "\n", "This sprint focused on applying **heterogeneous ensemble methods** and **stacking** to challenge the current regression and classification champions established in Sprint 4. The team worked across two parallel tracks:\n", "\n", "- **Regression Ensemble** – Predict `AimoScore` (continuous target) using diverse base models with bootstrap sampling and aggregation strategies.\n", "- **Classification Ensemble** – Predict `WeakestLink` (14-class target) using voting classifiers, bagging, and stacking.\n", "\n", "A custom `CorrelationFilter` transformer was also implemented as a reusable sklearn-compatible preprocessing component shared across both tracks.\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "ml-process-cell", "metadata": {}, "source": [ "## 1. ML Process Iteration\n", "\n", "### 1.1 Problem Framing Recap\n", "\n", "The dataset originates from movement quality assessments (NASM Overhead Squat Assessment). Two supervised learning tasks are defined:\n", "\n", "| Task | Target | Type | Sprint 4 Champion Score |\n", "|------|--------|------|-------------------------|\n", "| Regression | `AimoScore` (continuous) | Regression | R² = 0.6356, RMSE = 0.1303 |\n", "| Classification | `WeakestLink` (14 classes) | Multiclass | Weighted F1 = 0.6110 |\n", "\n", "### 1.2 Sprint 5 Goals\n", "\n", "Per the lab assignment:\n", "1. Define ensembles of independent models using bootstrap samples, different feature engineering, and diverse AI approaches.\n", "2. Challenge the Sprint 4 champions using simple aggregation (averaging / majority vote) or stacking.\n", "3. Deploy a new champion if the ensemble outperforms the previous one.\n", "4. 
Validate improvement using the **corrected resampled t-test** (Nadeau & Bengio, 2003).\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "software-cell", "metadata": {}, "source": [ "## 2. Software Development: CorrelationFilter\n", "\n", "A custom `sklearn`-compatible transformer was implemented and used in both the regression and classification pipelines. It removes highly correlated features before model fitting, reducing redundancy while preserving sklearn `Pipeline` compatibility." ] }, { "cell_type": "code", "execution_count": null, "id": "correlation-filter-code", "metadata": {}, "outputs": [], "source": [ "from sklearn.base import BaseEstimator, TransformerMixin\n", "import pandas as pd\n", "import numpy as np\n", "\n", "class CorrelationFilter(BaseEstimator, TransformerMixin):\n", " \"\"\"\n", " Removes features that are highly correlated with another feature.\n", " Threshold defaults to 0.99 (absolute Pearson correlation).\n", " Sklearn-compatible: can be used in Pipeline objects.\n", " \"\"\"\n", " def __init__(self, threshold=0.99):\n", " # Store hyperparameters only; the fitted attribute keep_cols_ is\n", " # set in fit(), per sklearn estimator conventions (supports clone()).\n", " self.threshold = threshold\n", "\n", " def fit(self, X, y=None):\n", " Xdf = pd.DataFrame(X) if not isinstance(X, pd.DataFrame) else X\n", " # Absolute correlation matrix, upper triangle only (avoids double-counting pairs)\n", " corr = Xdf.corr(numeric_only=True).abs()\n", " upper = corr.where(np.triu(np.ones(corr.shape), k=1).astype(bool))\n", " to_drop = [col for col in upper.columns if any(upper[col] >= self.threshold)]\n", " self.keep_cols_ = [c for c in Xdf.columns if c not in to_drop]\n", " return self\n", "\n", " def transform(self, X):\n", " Xdf = pd.DataFrame(X) if not isinstance(X, pd.DataFrame) else X\n", " return Xdf[self.keep_cols_].copy()\n", "\n", "# Example usage\n", "print(\"CorrelationFilter: removes columns with |corr| >= threshold.\")\n", "print(\"Used in regression pipeline with threshold=0.95 for RandomForest.\")" ] }, { "cell_type": "markdown", "id": "regression-header", "metadata": 
{}, "source": [ "---\n", "## 3. Regression Ensemble (A5_Regression_Ensemble)\n", "\n", "### 3.1 Approach\n", "\n", "A heterogeneous ensemble was designed following the sprint lecture pattern:\n", "\n", "- **Bootstrap diversity**: Four different bootstrap-augmented training sets (`dataset2_train_augmented_1..4.csv`).\n", "- **Model diversity**: Four distinct regressors (Lasso, Ridge, RandomForest, GradientBoosting), each trained on a different bootstrap sample.\n", "- **Feature diversity**: Feature subsets were defined (full, angle-only, NASM-only, angle+NASM) to allow further differentiation.\n", "- **Aggregation**: Two strategies — simple averaging and CV-R²-weighted averaging.\n", "\n", "### 3.2 Ensemble Configuration\n", "\n", "| Model | Bootstrap Sample | Feature Set | Algorithm |\n", "|-------|-----------------|-------------|----------|\n", "| Lasso | 1 | Full | `LassoCV` (cv=5) |\n", "| Ridge | 2 | Full | `RidgeCV` (cv=5) |\n", "| RandomForest | 3 | Full | `RandomForestRegressor` (200 trees, depth=15) + CorrelationFilter(0.95) |\n", "| GradientBoosting | 4 | Full | `GradientBoostingRegressor` (150 trees, depth=5, lr=0.1) |" ] }, { "cell_type": "code", "execution_count": null, "id": "regression-config-code", "metadata": {}, "outputs": [], "source": [ "# Ensemble configuration (from A5_Regression_Ensemble.ipynb)\n", "ENSEMBLE_CONFIG = [\n", " {\"name\": \"lasso\", \"bootstrap\": 1, \"features\": \"full\"},\n", " {\"name\": \"ridge\", \"bootstrap\": 2, \"features\": \"full\"},\n", " {\"name\": \"rf\", \"bootstrap\": 3, \"features\": \"full\"},\n", " {\"name\": \"gb\", \"bootstrap\": 4, \"features\": \"full\"},\n", "]\n", "\n", "# Feature subsets available (used for diversity)\n", "FEATURE_SUBSETS = {\n", " \"full\": \"all features\",\n", " \"angle_only\": \"features with 'Angle' in name\",\n", " \"nasm_only\": \"features with 'NASM' in name\",\n", " \"angle_nasm\": \"angle + NASM features (excludes time)\",\n", "}\n", "\n", "print(\"Ensemble configuration 
loaded.\")\n", "print(f\"Number of base models: {len(ENSEMBLE_CONFIG)}\")" ] }, { "cell_type": "markdown", "id": "regression-results", "metadata": {}, "source": [ "### 3.3 Base Model Results (Test Set)\n", "\n", "Each base model was trained on its assigned bootstrap sample and evaluated on the held-out test set. Cross-validation R² scores were used to compute ensemble weights.\n", "\n", "| Model | Bootstrap | CV R² | Test R² | Test RMSE | Test MAE |\n", "|-------|-----------|-------|---------|-----------|----------|\n", "| Lasso | 1 | — | — | — | — |\n", "| Ridge | 2 | — | — | — | — |\n", "| RandomForest | 3 | — | — | — | — |\n", "| GradientBoosting | 4 | — | — | — | — |\n", "\n", "> *Note: Exact numeric outputs are produced at runtime. The table above is populated when executing the notebook against the dataset.*\n", "\n", "### 3.4 Ensemble Aggregation Results\n", "\n", "| Method | R² | RMSE | MAE | Pearson r |\n", "|--------|----|------|-----|------|\n", "| Simple Average | runtime | runtime | runtime | runtime |\n", "| Weighted Average (CV R²) | runtime | runtime | runtime | runtime |\n", "| **A4 Champion (baseline)** | **0.6356** | **0.1303** | **0.0972** | **0.8089** |\n", "\n", "Weighted averaging assigns higher influence to models with better cross-validation R² scores:\n", "$$\\hat{y}_{\\text{ensemble}} = \\sum_{i=1}^{4} w_i \\hat{y}_i, \\quad w_i = \\frac{\\text{CV-R}^2_i}{\\sum_j \\text{CV-R}^2_j}$$\n", "\n", "The weights assume all CV-R² values are positive; a base model with a negative CV-R² would need its weight clipped to zero before normalisation." ] }, { "cell_type": "markdown", "id": "regression-ttest", "metadata": {}, "source": [ "### 3.5 Statistical Significance – Corrected Resampled t-test\n", "\n", "Standard paired t-tests overstate confidence when models are compared via cross-validation because training folds overlap. 
The **Nadeau & Bengio (2003) correction** accounts for this by inflating the variance:\n", "\n", "$$\\text{Var}_{\\text{corrected}} = \\left(\\frac{1}{k} + \\frac{n_{\\text{test}}}{n_{\\text{train}}}\\right) \\cdot \\hat{\\sigma}^2_{\\Delta}$$\n", "\n", "For this dataset, $n_{\\text{test}}/n_{\\text{train}} \\approx 0.25$, so the additive overlap term dominates the naive $1/k$ variance term, substantially inflating the variance estimate and making it harder to claim a statistically significant improvement.\n", "\n", "Hypotheses tested (α = 0.05):\n", "- H₀: Ensemble MSE = Champion MSE \n", "- H₁: Ensemble MSE ≠ Champion MSE (two-tailed)\n", "\n", "| Comparison | t-stat | p-value | Significant? |\n", "|------------|--------|---------|-------------|\n", "| Simple Avg vs Champion | runtime | runtime | runtime |\n", "| Weighted Avg vs Champion | runtime | runtime | runtime |" ] }, { "cell_type": "code", "execution_count": null, "id": "regression-ttest-code", "metadata": {}, "outputs": [], "source": [ "# Corrected resampled t-test implementation (Nadeau & Bengio, 2003)\n", "import numpy as np\n", "from scipy import stats\n", "\n", "def corrected_resampled_ttest(errors_1, errors_2, n_train, n_test):\n", " \"\"\"\n", " Corrected resampled t-test for comparing two models.\n", " Accounts for variance inflation due to overlapping cross-validation folds.\n", " \n", " Args:\n", " errors_1, errors_2: arrays of per-sample squared errors for model 1 and 2\n", " n_train: number of training samples\n", " n_test: number of test samples\n", " Returns:\n", " t_stat, p_value, mean_diff\n", " \n", " Note:\n", " Nadeau & Bengio (2003) define the test over k resampling runs;\n", " here it is applied to per-sample error differences, so k = len(diff).\n", " \"\"\"\n", " diff = errors_1 - errors_2\n", " mean_diff = np.mean(diff)\n", " var_diff = np.var(diff, ddof=1)\n", "\n", " # Corrected variance: (1/k + n_test/n_train) * var(diff), per Nadeau & Bengio\n", " correction = (1 / len(diff)) + (n_test / n_train)\n", " corrected_var = correction * var_diff\n", "\n", " t_stat = mean_diff / np.sqrt(corrected_var)\n", " p_value = 2 * stats.t.sf(np.abs(t_stat), df=len(diff) - 1)\n", " return t_stat, p_value, mean_diff\n", "\n", "print(\"Corrected resampled t-test 
function defined.\")\n", "print(\"Positive mean_diff => champion has higher MSE (ensemble is better).\")" ] }, { "cell_type": "markdown", "id": "regression-champion", "metadata": {}, "source": [ "### 3.6 Champion Decision (Regression)\n", "\n", "The best-performing ensemble method (Simple Average or Weighted Average) is compared against the A4 champion. If the ensemble exceeds R² = 0.6356, the model is saved as `aimoscores_ensemble_A5.pkl`.\n", "\n", "The best ensemble selection logic:\n", "- Best aggregation method: `weighted` if weighted R² > average R², else `average`\n", "- Saved only if it outperforms the A4 champion\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "classification-header", "metadata": {}, "source": [ "## 4. Classification Ensemble (A5_Classification_Ensemble)\n", "\n", "### 4.1 Problem Setup\n", "\n", "- **Target**: `WeakestLink` — the movement category with the highest deviation score across 14 NASM categories.\n", "- **Features**: Merged movement features from `aimoscores.csv` + weakest link labels from `scores_and_weaklink.csv`.\n", "- **Class imbalance**: Addressed using `class_weight='balanced'` and `class_weight='balanced_subsample'`.\n", "- **A4 Champion baseline**: Random Forest with weighted F1 = **0.6110**.\n", "\n", "### 4.2 Data Preparation" ] }, { "cell_type": "code", "execution_count": null, "id": "classification-setup", "metadata": {}, "outputs": [], "source": [ "# Classification setup (from A5_Classification_Ensemble.ipynb)\n", "RANDOM_STATE = 42\n", "N_SPLITS = 5\n", "CHAMPION_F1 = 0.6110 # Sprint 4 benchmark\n", "\n", "# 14 WeakestLink categories\n", "weaklink_categories = [\n", " 'ExcessiveForwardLean', 'ForwardHead', 'LeftArmFallForward',\n", " 'LeftAsymmetricalWeightShift', 'LeftHeelRises', 'LeftKneeMovesInward',\n", " 'LeftKneeMovesOutward', 'LeftShoulderElevation', 'RightArmFallForward',\n", " 'RightAsymmetricalWeightShift', 'RightHeelRises', 'RightKneeMovesInward',\n", " 'RightKneeMovesOutward', 
'RightShoulderElevation',\n", "]\n", "\n", "# WeakestLink = the category with the highest deviation score\n", "# weaklink_scores_df['WeakestLink'] = weaklink_scores_df[weaklink_categories].idxmax(axis=1)\n", "\n", "print(f\"Number of classes: {len(weaklink_categories)}\")\n", "print(f\"Sprint 4 champion F1: {CHAMPION_F1}\")" ] }, { "cell_type": "markdown", "id": "classification-ensembles", "metadata": {}, "source": [ "### 4.3 Ensemble Strategies\n", "\n", "Four ensemble strategies were designed and evaluated using **5-fold stratified cross-validation**:\n", "\n", "#### Ensemble 1 – Hard Voting\n", "Each base classifier casts a vote; the class with the most votes wins. Base classifiers: Random Forest, Logistic Regression, XGBoost, LightGBM, KNN (k=7), LDA.\n", "\n", "#### Ensemble 2 – Soft Voting \n", "Same base classifiers, but predictions are combined via averaged class probabilities. Generally more accurate than hard voting when calibrated probability estimates are available.\n", "\n", "#### Ensemble 3 – Bootstrap Bagging on LDA \n", "`BaggingClassifier` wrapping `LinearDiscriminantAnalysis` (50 estimators, 80% sample size, 90% feature subset). Demonstrates how bagging can stabilise a weak linear model.\n", "\n", "#### Ensemble 4 – Stacking (LR meta-learner) \n", "Base classifiers: Random Forest, Logistic Regression, KNN, LDA. Meta-learner: `LogisticRegression` trained on out-of-fold predictions (5-fold CV). The meta-learner learns *how to combine* base model outputs, replacing simple voting with a learned aggregation function." 
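, "\n\n#### Illustrative Sketch – Voting Classifiers\n\nAs a hedged sketch (not code from the sprint notebooks), Ensembles 1 and 2 map directly onto scikit-learn's `VotingClassifier` with `voting='hard'` and `voting='soft'`. The snippet below runs on synthetic stand-in data and includes only the scikit-learn base learners (XGBoost/LightGBM omitted):\n\n```python\nfrom sklearn.datasets import make_classification\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\nfrom sklearn.ensemble import RandomForestClassifier, VotingClassifier\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.neighbors import KNeighborsClassifier\n\n# Synthetic stand-in for the real feature matrix (shapes are hypothetical)\nX, y = make_classification(n_samples=400, n_classes=4, n_informative=8, random_state=42)\n\nbase = [\n    ('rf', RandomForestClassifier(n_estimators=100, class_weight='balanced_subsample', random_state=42)),\n    ('lr', LogisticRegression(max_iter=1000, class_weight='balanced', random_state=42)),\n    ('knn', KNeighborsClassifier(n_neighbors=7)),\n    ('lda', LinearDiscriminantAnalysis()),\n]\n\nhard = VotingClassifier(estimators=base, voting='hard')             # majority vote\nsoft = VotingClassifier(estimators=base, voting='soft', n_jobs=-1)  # averaged class probabilities\n\nfor name, clf in [('hard', hard), ('soft', soft)]:\n    scores = cross_val_score(clf, X, y, cv=5, scoring='f1_weighted')\n    print(f'{name} voting: weighted F1 = {scores.mean():.3f} +/- {scores.std():.3f}')\n```"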
] }, { "cell_type": "code", "execution_count": null, "id": "classification-ensembles-code", "metadata": {}, "outputs": [], "source": [ "# Ensemble model definitions (from A5_Classification_Ensemble.ipynb)\n", "from sklearn.ensemble import (\n", " RandomForestClassifier, VotingClassifier, BaggingClassifier, StackingClassifier\n", ")\n", "from sklearn.linear_model import LogisticRegression\n", "from sklearn.discriminant_analysis import LinearDiscriminantAnalysis\n", "from sklearn.neighbors import KNeighborsClassifier\n", "# import xgboost as xgb\n", "# import lightgbm as lgb\n", "\n", "# ---- Ensemble 4: Stacking ----\n", "stacking = StackingClassifier(\n", " estimators=[\n", " ('rf', RandomForestClassifier(n_estimators=100, max_depth=15,\n", " min_samples_split=5, min_samples_leaf=2,\n", " class_weight='balanced_subsample',\n", " random_state=RANDOM_STATE, n_jobs=-1)),\n", " ('lr', LogisticRegression(max_iter=1000, class_weight='balanced',\n", " random_state=RANDOM_STATE)),\n", " ('knn', KNeighborsClassifier(n_neighbors=7)),\n", " ('lda', LinearDiscriminantAnalysis()),\n", " ],\n", " final_estimator=LogisticRegression(\n", " C=1.0, max_iter=1000, class_weight='balanced', random_state=RANDOM_STATE\n", " ),\n", " cv=5,\n", " passthrough=False,\n", " n_jobs=-1,\n", ")\n", "\n", "print(\"Stacking classifier defined with LR meta-learner.\")" ] }, { "cell_type": "markdown", "id": "classification-results", "metadata": {}, "source": [ "### 4.4 Cross-Validation Results\n", "\n", "All models were evaluated with 5-fold **StratifiedKFold** cross-validation on the training set, using weighted F1-score as the primary metric.\n", "\n", "| Model | CV F1 (mean) | CV F1 (std) | CV Accuracy | CV Precision | CV Recall |\n", "|-------|-------------|------------|------------|-------------|----------|\n", "| A4 Champion – Random Forest | ~0.6110 | — | — | — | — |\n", "| Hard Voting | runtime | runtime | runtime | runtime | runtime |\n", "| Soft Voting | runtime | runtime | runtime | 
runtime | runtime |\n", "| Bootstrap Bagging (LDA) | runtime | runtime | runtime | runtime | runtime |\n", "| Stacking (LR meta) | runtime | runtime | runtime | runtime | runtime |\n", "\n", "> *The bar chart below (produced at runtime) visually compares all approaches, with the red dashed line marking the Sprint 4 champion.*\n", "\n", "### 4.5 Statistical Significance Tests\n", "\n", "The corrected resampled t-test was applied for each ensemble vs the A4 champion (same implementation as the regression track).\n", "\n", "| Ensemble | t-stat | p-value | Better than Champion? |\n", "|----------|--------|---------|----------------------|\n", "| Hard Voting | runtime | runtime | runtime |\n", "| Soft Voting | runtime | runtime | runtime |\n", "| Bootstrap Bagging (LDA) | runtime | runtime | runtime |\n", "| Stacking (LR meta) | runtime | runtime | runtime |\n", "\n", "### 4.6 Final Test Set Results\n", "\n", "All models were retrained on the full training set and evaluated on the held-out 20% test split.\n", "\n", "| Model | Test F1 | Test Accuracy | Test Precision | Test Recall |\n", "|-------|---------|--------------|---------------|------------|\n", "| Best Ensemble (champion) | runtime | runtime | runtime | runtime |\n", "| A4 Champion – Random Forest | ~0.6110 | — | — | — |\n", "\n", "### 4.7 Champion Decision (Classification)\n", "\n", "The top-ranked ensemble by CV F1 is selected as the new champion and saved to `models/ensemble_classification_champion.pkl`. The artifact includes the model, scaler, feature columns, CV metrics, test metrics, and improvement percentage vs Sprint 4." ] }, { "cell_type": "markdown", "id": "summary-cell", "metadata": {}, "source": [ "---\n", "## 5. 
Sprint Summary\n", "\n", "### 5.1 What Was Done\n", "\n", "| Component | Owner Track | Description |\n", "|-----------|------------|-------------|\n", "| `CorrelationFilter.py` | Shared | Custom sklearn transformer removing highly correlated features |\n", "| Regression Ensemble | Regression track | 4 base models (Lasso, Ridge, RF, GB) × 4 bootstrap samples, simple + weighted averaging |\n", "| Classification Ensemble | Classification track | Hard Voting, Soft Voting, Bagging (LDA), Stacking (LR meta) |\n", "| Statistical Testing | Both tracks | Corrected resampled t-test (Nadeau & Bengio 2003) |\n", "| Champion Deployment | Both tracks | Pickle artifacts saved if ensemble outperforms A4 champion |\n", "\n", "### 5.2 Key Design Decisions\n", "\n", "**Regression:** Bootstrap-based diversity was the primary source of independence between base models. Weighted averaging was used as the aggregation method with weights derived from CV-R² scores, giving better-performing models proportionally more influence.\n", "\n", "**Classification:** A broader range of diversity strategies was explored — algorithm diversity (RF, LR, XGB, LGB, KNN, LDA), voting schemes (hard vs soft), and a stacking approach where a meta-learner replaces manual aggregation. Class imbalance was consistently addressed with `class_weight='balanced'`.\n", "\n", "**Statistical rigor:** The Nadeau & Bengio correction was applied in both tracks rather than a naive t-test, accounting for the overlap between cross-validation folds (additive overlap term $n_{\\text{test}}/n_{\\text{train}} \\approx 0.25$ for this dataset).\n", "\n", "### 5.3 Limitations and Next Steps\n", "\n", "- Feature subsets for diversity (angle-only, NASM-only) were defined, but the final configuration ultimately used the full feature set for all base models in the regression track. 
Future iterations could test whether feature-diverse ensembles further reduce error.\n", "- The classification stacking approach used `passthrough=False`, meaning the meta-learner only sees predicted class probabilities, not the original features. Including raw features (`passthrough=True`) could be explored.\n", "- More ensemble members (e.g., 8–10 base models) could be evaluated to assess the accuracy-variance tradeoff more thoroughly." ] }, { "cell_type": "code", "execution_count": null, "id": "cebc7f8e-92b4-4938-abeb-53c6e294a2cf", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.8" } }, "nbformat": 4, "nbformat_minor": 5 }