Introduction to Feature Engineering

Feature Engineering is the process of transforming raw data into meaningful inputs that boost machine-learning model performance. A well-crafted feature set can improve accuracy by 10-30% without changing the underlying algorithm.

Key Idea: 💡 Thoughtful features provide the model with clearer patterns, like lenses sharpening a blurry picture.

setup.py - Pandas Basics
import pandas as pd
import numpy as np

# Load the dataset
df = pd.read_csv('housing_data.csv')

# Inspect raw data types and missing values
df.info()

# View summary statistics (display() works in Jupyter; use print() in plain scripts)
display(df.describe())

Handling Missing Data

Missing values come in three flavors: MCAR (Missing Completely At Random), MAR (Missing At Random), and MNAR (Missing Not At Random). Each demands different treatment to avoid bias.

Real Example: A hospital's patient records often have absent cholesterol values because certain tests were not ordered for healthy young adults.
💡 Mean/median imputation is best suited to MCAR data; MAR usually calls for model-based methods like KNN imputation, which exploit correlations between features.
⚠️ Using mean imputation on skewed data can distort distributions.
✅ Fit imputers on the training set only, after the train/test split, to avoid leakage.

🧠 Under the Hood: Imputation Math

KNN Imputation predicts missing values by finding the $k$ closest neighbors using a distance metric like Euclidean distance. For two samples $x$ and $y$ with $n$ features, ignoring missing dimensions:

$$ d(x, y) = \sqrt{\sum_{i=1}^{n} w_i (x_i - y_i)^2} $$

Once the $k$ neighbors are found, their values are averaged (or weighted by distance) to fill the missing slot. This preserves local cluster distributions better than global mean imputation.

missing_data.py - Scikit-Learn Imputers
from sklearn.impute import SimpleImputer, KNNImputer

# 1. Simple Imputation (Mean/Median/Most Frequent)
# Good for MCAR (Missing Completely At Random)
mean_imputer = SimpleImputer(strategy='mean')
df['age_imputed'] = mean_imputer.fit_transform(df[['age']])

# 2. KNN Imputation (Distance-based)
# Good for MAR (Missing At Random) when variables are correlated
knn_imputer = KNNImputer(n_neighbors=5, weights='distance')
df_imputed = knn_imputer.fit_transform(df)  # expects an all-numeric DataFrame

# Note: Tree-based models like XGBoost can handle NaNs natively!

Handling Outliers

Outliers are data points that deviate markedly from others. Detecting and treating them prevents skewed models.

💡 The IQR method is robust to non-normal data.
⚠️ Removing legitimate extreme values can erase important signals.

🧠 Under the Hood: Outlier Math

Z-Score measures how many standard deviations $\sigma$ a point is from the mean $\mu$. It assumes the data is normally distributed:

$$ z = \frac{x - \mu}{\sigma} \quad \text{(Threshold: } |z| > 3 \text{)} $$

IQR (Interquartile Range) is non-parametric. It defines fences based on the 25th ($Q1$) and 75th ($Q3$) percentiles: $[Q1 - 1.5 \times \text{IQR},\ Q3 + 1.5 \times \text{IQR}]$. Winsorization instead caps values at chosen tail percentiles (e.g., the 5th and 95th) rather than dropping them.

outliers.py - Z-Score and Winsorization
import numpy as np
from scipy import stats

# 1. Z-Score Method (Dropping Outliers)
z_scores = np.abs(stats.zscore(df['income']))
# Keep only rows where z-score is less than 3
df_clean = df[z_scores < 3]

# 2. Winsorization (Percentile Capping)
# Caps at the 5th and 95th percentiles instead of dropping rows
lower_limit = df['income'].quantile(0.05)
upper_limit = df['income'].quantile(0.95)

df['income_capped'] = np.clip(df['income'], lower_limit, upper_limit)

Feature Scaling

Distance-based algorithms like KNN, SVM, and K-Means demand comparable feature magnitudes; otherwise the feature with the largest range dominates the distance computation.

🧠 Under the Hood: Scaling Math

Min-Max Scaling (Normalization) scales data to a fixed range, usually $[0, 1]$:

$$ X_{norm} = \frac{X - X_{min}}{X_{max} - X_{min}} $$

Standardization (Z-Score Scaling) centers the data around a mean of 0 with a standard deviation of 1. It does not bound data to a specific range, handling outliers better than Min-Max:

$$ X_{std} = \frac{X - \mu}{\sigma} $$

Robust Scaling uses statistics that are robust to outliers, like the median and Interquartile Range (IQR): $X_{robust} = \frac{X - \text{median}}{Q3 - Q1}$.

scaling.py - Scikit-Learn Scalers
from sklearn.preprocessing import MinMaxScaler, StandardScaler, RobustScaler

# 1. Min-Max Scaler (Best for Neural Networks/Images)
minmax = MinMaxScaler()
df[['age_minmax', 'income_minmax']] = minmax.fit_transform(df[['age', 'income']])

# 2. Standard Scaler (Best for PCA, SVM, Logistic Regression)
standard = StandardScaler()
df_scaled = standard.fit_transform(df)

# 3. Robust Scaler (Best when dataset has many outliers)
robust = RobustScaler()
df_robust = robust.fit_transform(df)

Data Encoding

Transform categorical variables into numbers so models can interpret them.

🧠 Under the Hood: Target Encoding Math

One-Hot Encoding creates $N$ sparse binary columns for $N$ categories, which can cause the "Curse of Dimensionality" for high-cardinality features.

Target Encoding replaces a categorical value with the average target value for that category. To prevent overfitting (especially on rare categories), a Bayesian Smoothing average is applied:

$$ S = \lambda \cdot \bar{y}_{cat} + (1 - \lambda) \cdot \bar{y}_{global} $$

Where $\bar{y}_{cat}$ is the mean of the target for the specific category, $\bar{y}_{global}$ is the global target mean, and $\lambda$ is a weight between 0 and 1 determined by the category's frequency.

encoding.py - Category Encoders
import pandas as pd
from sklearn.preprocessing import OneHotEncoder
from category_encoders import TargetEncoder

# 1. One-Hot Encoding (Best for nominal variables with few categories)
ohe = OneHotEncoder(sparse_output=False, drop='first') # drop='first' avoids multicollinearity
color_encoded = ohe.fit_transform(df[['color']])

# Pandas alternative (easy but not ideal for pipelines):
# pd.get_dummies(df, columns=['color'], drop_first=True)

# 2. Target Encoding (Best for high-cardinality nominal variables like zipcodes)
# Requires 'category_encoders' library
te = TargetEncoder(smoothing=10) # Higher smoothing pulls estimates closer to global mean
df['zipcode_encoded'] = te.fit_transform(df['zipcode'], df['target'])

Feature Selection

Pick features that matter, drop those that don't.

🧠 Under the Hood: Selection Math

Feature selection can be filter-based, wrapper-based, or intrinsic.

Filter Method (ANOVA F-Value): Scikit-Learn's `f_classif` computes the ANOVA F-value between numerical features and a categorical target. The F-statistic measures the ratio of variance between groups to the variance within groups:

$$ F = \frac{\text{Between-group variability}}{\text{Within-group variability}} $$

Wrapper Method (RFE): Recursive Feature Elimination fits a model (e.g., Logistic Regression or Random Forest), ranks features by importance coefficients, drops the weakest feature, and repeats until the desired $N$ features remain.

selection.py - Feature Selection
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.linear_model import LogisticRegression

X = df.drop('target', axis=1)
y = df['target']

# 1. Filter Method: SelectKBest (ANOVA F-value)
# Keeps the 5 features with the highest ANOVA F-scores
selector = SelectKBest(score_func=f_classif, k=5)
X_top_5 = selector.fit_transform(X, y)
selected_columns = X.columns[selector.get_support()]

# 2. Wrapper Method: Recursive Feature Elimination (RFE)
# Repeatedly fits the model and prunes the weakest feature based on its coefficients
estimator = LogisticRegression(max_iter=1000)  # raise max_iter to help convergence on unscaled data
rfe = RFE(estimator, n_features_to_select=5, step=1)
X_rfe = rfe.fit_transform(X, y)
rfe_columns = X.columns[rfe.support_]
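
The third family named above, intrinsic (embedded) methods, lets the model do the selecting itself. Below is a minimal sketch using Scikit-Learn's SelectFromModel with a Random Forest; it assumes the same X and y as above.

selection_intrinsic.py - Embedded Feature Selection
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# 3. Intrinsic Method: SelectFromModel
# The forest computes an importance for every feature while training;
# features whose importance falls below the threshold are dropped
rf = RandomForestClassifier(n_estimators=100, random_state=42)
sfm = SelectFromModel(rf, threshold='median') # keep features above the median importance
X_intrinsic = sfm.fit_transform(X, y)
intrinsic_columns = X.columns[sfm.get_support()]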

Handling Imbalanced Data

Class imbalance leads to biased predictions. Balancing techniques can fix this.

🧠 Under the Hood: SMOTE Math

SMOTE (Synthetic Minority Over-sampling Technique) doesn't just duplicate data (like Random Over-Sampling). It creates novel synthetic examples by interpolating between existing minority instances.

For a minority class point $x_i$, SMOTE finds its $k$-nearest minority neighbors. It picks one neighbor $x_{zi}$ and generates a synthetic point $x_{new}$ along the line segment joining them:

$$ x_{new} = x_i + \lambda \times (x_{zi} - x_i) $$

Where $\lambda$ is a random number between 0 and 1. This creates a denser, more generalized decision region for the minority class.

imbalanced.py - Imblearn Resampling
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline

# 1. SMOTE (Over-sampling the minority class)
smote = SMOTE(sampling_strategy='auto', k_neighbors=5, random_state=42)
X_smote, y_smote = smote.fit_resample(X, y)

# 2. Random Under-Sampling (Reducing the majority class)
rus = RandomUnderSampler(sampling_strategy='auto', random_state=42)
X_rus, y_rus = rus.fit_resample(X, y)

# 3. Best Practice Pipeline: Under-sample majority THEN SMOTE minority
# Prevents creating too many synthetic points if the imbalance is extreme
resample_pipe = Pipeline([
    ('rus', RandomUnderSampler(sampling_strategy=0.1)), # Shrink majority until minority:majority = 0.1
    ('smote', SMOTE(sampling_strategy=0.5))             # Over-sample minority until minority:majority = 0.5
])
X_resampled, y_resampled = resample_pipe.fit_resample(X, y)

Exploratory Data Analysis (EDA)

Exploratory Data Analysis (EDA) is a critical step in the machine learning pipeline that comes BEFORE feature engineering. EDA helps you understand your data, discover patterns, identify anomalies, detect outliers, test hypotheses, and check assumptions through summary statistics and graphical representations.

Key Questions EDA Answers:
  • How many columns are numerical vs. categorical?
  • What does the data distribution look like?
  • Are there missing values?
  • Are there outliers?
  • Is the data imbalanced (for classification problems)?
  • What are the correlations between features?
  • Are there any trends or patterns?

Real-World Example: Imagine you're analyzing customer data for a bank to predict loan defaults. EDA helps you understand:
  • Age distribution of customers (histogram)
  • Income levels (box plot for outliers)
  • Correlation between income and loan amount (scatter plot)
  • Missing values in employment history
  • Class imbalance (5% defaults vs 95% non-defaults)

Two Main Types of EDA

1. Descriptive Statistics

Purpose: Summarize and visualize what the data looks like

A. Central Tendency:
Mean (Average): μ = Σxᵢ / n
  Example: Average income = $50,000 (Sensitive to outliers)
Median: Middle value when sorted
  Example: Median income = $45,000 (Robust to outliers)
Mode: Most frequent value
  Example: Most common age = 35 years

B. Variability (Spread):
Variance: σ² = Σ(xᵢ - μ)² / n (measures spread; divide by n − 1 for a sample estimate)
Standard Deviation: σ = √variance
  68% of data within 1σ, 95% within 2σ, 99.7% within 3σ (for normal distribution)
Interquartile Range (IQR): Q3 - Q1
  Middle 50% of data, robust to outliers

C. Correlation & Associations:
Pearson Correlation: r = Cov(X,Y) / (σₓ × σᵧ)
  Range: -1 to +1
  r = +1: Perfect positive correlation
  r = 0: No linear correlation
  r = -1: Perfect negative correlation
Thresholds: |r| > 0.7: Strong, 0.3 ≤ |r| ≤ 0.7: Moderate, |r| < 0.3: Weak
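
The statistics above map directly onto Pandas one-liners. Below is a minimal sketch; the 'income' and 'age' column names are hypothetical placeholders.

descriptive_stats.py - Central Tendency, Spread & Correlation
import pandas as pd

income, age = df['income'], df['age']  # hypothetical columns

# A. Central tendency
print("Mean:", income.mean(), "| Median:", income.median(), "| Mode:", income.mode()[0])

# B. Variability (note: pandas .var()/.std() divide by n-1 by default)
print("Variance:", income.var(), "| Std:", income.std())
q1, q3 = income.quantile(0.25), income.quantile(0.75)
print("IQR:", q3 - q1)

# C. Pearson correlation between two features
print("Pearson r:", income.corr(age))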

2. Inferential Statistics

Purpose: Make inferences or generalizations about the population from the sample

Key Question: Can we claim this effect exists in the larger population, or is it just by chance?

A. Hypothesis Testing:
Null Hypothesis (H₀): No effect exists (e.g., "Mean of Group A = Mean of Group B")
Alternative Hypothesis (H₁): Effect exists (e.g., "Mean of Group A ≠ Mean of Group B")
P-value: Probability of observing data if H₀ is true
  p < 0.05: Reject H₀ (effect is statistically significant)
  p ≥ 0.05: Fail to reject H₀ (not enough evidence)

Example:
• H₀: "There is no difference between positive and negative movie review lengths"
• H₁: "Negative reviews are longer than positive reviews"
• After t-test: p = 0.003 (< 0.05)
• Conclusion: Reject H₀ → Negative reviews ARE significantly longer

B. Confidence Intervals:
• Range where true population parameter likely lies
• 95% CI: We're 95% confident the true value is within this range
• Example: "Average customer age is 35 ± 2 years (95% CI: [33, 37])"

C. Effect Size:
• Cohen's d = (mean₁ - mean₂) / pooled_std
• Small effect: d = 0.2, Medium: d = 0.5, Large: d = 0.8
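
The pieces above (t-test, confidence interval, effect size) can all be computed with SciPy. This is a minimal sketch reusing the movie-review example; the 'label' and 'length' columns are hypothetical placeholders.

inferential_stats.py - t-Test, Confidence Interval & Cohen's d
import numpy as np
from scipy import stats

neg = df.loc[df['label'] == 'negative', 'length']  # hypothetical columns
pos = df.loc[df['label'] == 'positive', 'length']

# A. Two-sample t-test (H₀: equal mean lengths)
t_stat, p_value = stats.ttest_ind(neg, pos, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # p < 0.05 → reject H₀

# B. 95% confidence interval for the mean review length of one group
ci = stats.t.interval(0.95, len(neg) - 1, loc=neg.mean(), scale=stats.sem(neg))
print("95% CI:", ci)

# C. Cohen's d with a pooled standard deviation
pooled_std = np.sqrt((neg.var(ddof=1) + pos.var(ddof=1)) / 2)
print("Cohen's d:", (neg.mean() - pos.mean()) / pooled_std)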

Algorithm Steps for EDA

1. Load and Inspect Data: df.head(), df.info(), df.describe()
2. Handle Missing Values: Identify (df.isnull().sum()), Visualize, Decide
3. Analyze Distributions: Histograms, count plots, box plots
4. Check for Imbalance: Count target classes, plot distribution
5. Correlation Analysis: Correlation matrix, heatmap, identify multicollinearity
6. Statistical Testing: Compare groups (t-test, ANOVA), test assumptions, calculate effect sizes

💡 EDA typically takes 30-40% of total project time. Good EDA reveals which features to engineer.
⚠️ Common Mistakes: Skipping EDA, not checking outliers before scaling, ignoring missing value patterns, overlooking class imbalance, ignoring multicollinearity.
✅ Best Practices: ALWAYS start with EDA, visualize EVERY feature, check correlations with target, document insights, use both descriptive and inferential statistics.

🧠 Under the Hood: Skewness & Kurtosis

Beyond mean and variance, we examine the geometric shape of our distributions using the 3rd and 4th statistical moments.

Skewness ($s$) measures asymmetry. Positive means right-tailed, negative means left-tailed:

$$ s = \frac{\frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^3}{\sigma^3} $$

Kurtosis ($k$) measures "tailedness" (presence of outliers). A normal distribution has a kurtosis of 3. High kurtosis means heavy tails:

$$ k = \frac{\frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^4}{\sigma^4} $$

eda.py - Automated & Visual EDA
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# 1. Deep Descriptive Stats (includes skewness)
display(df.describe().T)
print("Skewness:\n", df.skew())
print("\nMissing Values:\n", df.isnull().sum())

# 2. Visual Distributions (Pairplot)
# Plots histograms on the diagonal and scatter plots for every relationship
sns.pairplot(df, hue='target_class', diag_kind='kde', corner=True)
plt.show()

# 3. Correlation Heatmap
plt.figure(figsize=(10, 8))
corr_matrix = df.corr(method='spearman', numeric_only=True) # Spearman captures monotonic, not just linear, relationships
sns.heatmap(corr_matrix, annot=True, cmap='coolwarm', vmin=-1, vmax=1)
plt.title("Spearman Correlation Heatmap")
plt.show()

Use Cases and Applications

  • Healthcare: Analyzing patient data before building disease prediction models
  • Finance: Understanding customer demographics before credit scoring
  • E-commerce: Analyzing purchase patterns before recommendation systems
  • Marketing: Understanding customer segments before targeted campaigns
  • Time Series: Checking for seasonality and trends in sales data

Summary & Key Takeaways

Exploratory Data Analysis is the foundation of any successful machine learning project. It combines descriptive statistics (mean, median, variance, correlation) with inferential statistics (hypothesis testing, confidence intervals) to understand data deeply.

Descriptive EDA answers: "What is happening in the dataset?"
Inferential EDA answers: "Can we claim this effect exists in the larger population?"

Remember: Data → EDA → Feature Engineering → ML → Deployment

Feature Transformation

Feature transformation creates new representations of data to capture non-linear patterns. Techniques like polynomial features, binning, and mathematical transformations unlock hidden relationships.

Real Example: Predicting house prices with polynomial features (adding x² terms) improves model fit for non-linear relationships between square footage and price.

Mathematical Foundations

Polynomial Features: Transform (x₁, x₂) → (1, x₁, x₂, x₁², x₁x₂, x₂²)
• Degree 2 example: For features (x, y) → (1, x, y, x², xy, y²)
• 2 features with degree=2 creates 6 features total

Binning: Convert continuous → categorical
• Equal-width: Divide range into equal intervals
• Quantile: Each bin has equal number of samples
• Example: Age (0-100) → [0-18], [19-35], [36-60], [61+]

Mathematical Transformations:
• Square Root: √x (reduces right skew)
• Log Transform: log(1 + x)
• Box-Cox: λ = 0: log(x), λ ≠ 0: (x^λ - 1)/λ
💡 Polynomial features let linear models fit curves, but they explode combinatorially: degree=3 on 10 features already creates 286 features (bias included)!
⚠️ Always scale features after polynomial transformation to prevent magnitude issues.
✅ Start with degree=2 and visualize distributions before/after transformation.

🧠 Under the Hood: Power Transforms

When log transformations $\ln(1+x)$ aren't enough to fix severe skewness, we use parametric Power Transformations like Box-Cox (requires $x > 0$) or Yeo-Johnson (supports negative values). They automatically find the optimal $\lambda$ parameter using Maximum Likelihood Estimation.

Box-Cox Transformation Formula:

$$ x^{(\lambda)} = \begin{cases} \frac{x^\lambda - 1}{\lambda} & \text{if } \lambda \neq 0 \\ \ln(x) & \text{if } \lambda = 0 \end{cases} $$

These transforms stretch and compress the variable to map it as closely to a Gaussian (Normal) distribution as mathematically possible.

transformation.py - Power Transforms & Binning
import numpy as np
from sklearn.preprocessing import PowerTransformer, KBinsDiscretizer

# 1. Power Transformation (Yeo-Johnson)
# Attempts to map skewed feature to a Gaussian distribution
pt = PowerTransformer(method='yeo-johnson', standardize=True)
df['income_gaussian'] = pt.fit_transform(df[['income']])

# 2. Log Transformation (np.log1p handles zeros safely by doing log(1+x))
df['revenue_log'] = np.log1p(df['revenue'])

# 3. Discretization / Binning
# Converts continuous age into 5 ordinal bins; strategy='quantile' gives each bin roughly equal frequency
binner = KBinsDiscretizer(n_bins=5, encode='ordinal', strategy='quantile')
df['age_group'] = binner.fit_transform(df[['age']])

Use Cases

  • Polynomial features for non-linear house price prediction
  • Binning age into groups for marketing segmentation
  • Log transformation for right-skewed income data

Feature Creation

Creating new features from existing ones based on domain knowledge. Interaction terms, ratios, and domain-specific calculations enhance model performance.

Real Example: E-commerce revenue = price × quantity. Profit margin = (selling_price - cost_price) / cost_price. These derived features often have stronger predictive power than raw features.

Mathematical Foundations

Interaction Terms: feature₁ × feature₂
• Example: advertising_budget × seasonality → total_impact
• Why: Captures how one feature's effect depends on another

Ratio Features: feature₁ / feature₂
• Example: price/sqft, income/age

Domain-Specific Features:
• BMI = weight(kg) / height²(m²)
• Speed = distance / time
• Profit margin = (revenue - cost) / cost

Time-Based Features:
• Extract: year, month, day, weekday, hour
• Create: is_weekend, is_holiday, season
💡 Interaction terms are especially powerful in linear models; neural networks learn them automatically.
⚠️ Creating features without domain knowledge leads to meaningless combinations.
✅ Always check correlation between new and existing features to avoid redundancy.
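
The interaction, ratio, domain, and time-based features listed above are usually one line of Pandas each. Below is a minimal sketch; every column name ('advertising_budget', 'order_date', etc.) is a hypothetical placeholder.

creation_manual.py - Ratios, Interactions & Time Features
import pandas as pd

# 1. Interaction term and ratio features
df['ad_x_season'] = df['advertising_budget'] * df['seasonality']
df['price_per_sqft'] = df['price'] / df['sqft']

# 2. Domain-specific feature: profit margin
df['profit_margin'] = (df['revenue'] - df['cost']) / df['cost']

# 3. Time-based features extracted from a datetime column
df['order_date'] = pd.to_datetime(df['order_date'])
df['month'] = df['order_date'].dt.month
df['weekday'] = df['order_date'].dt.weekday
df['is_weekend'] = df['order_date'].dt.weekday >= 5  # Saturday=5, Sunday=6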

🧠 Under the Hood: Polynomial Combinations

Scikit-Learn's `PolynomialFeatures` generates a new feature matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree.

For two features $X = [x_1, x_2]$ and a degree of 2, the expanded polynomial vector is:

$$ [1,\; x_1,\; x_2,\; x_1^2,\; x_1 \cdot x_2,\; x_2^2] $$

Notice the $x_1 \cdot x_2$ term. This is an interaction term, which lets a linear model learn conditional relationships (e.g., "if $x_1$ is high, the effect of $x_2$ changes").

creation.py - Automated Polynomial Features
from sklearn.preprocessing import PolynomialFeatures
import pandas as pd

# Assume df has two features: 'length' and 'width'
X = df[['length', 'width']]

# Create polynomial and interaction features up to degree 2
# include_bias=False prevents adding a column of 1s (intercept)
poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)

# Get the names of the new features (e.g., 'length^2', 'length width')
feature_names = poly.get_feature_names_out(['length', 'width'])
df_poly = pd.DataFrame(X_poly, columns=feature_names)

print(df_poly.head())

Use Cases

  • BMI from height and weight in healthcare prediction
  • Click-through rate = clicks / impressions in digital marketing
  • Revenue = price × quantity in retail analytics

Dimensionality Reduction

Reducing the number of features while preserving information. PCA (Principal Component Analysis) projects high-dimensional data onto lower dimensions by finding directions of maximum variance.

Real Example: Image compression and genome analysis with thousands of genes benefit from PCA. First 2-3 principal components often capture 80%+ of variance.

PCA Mathematical Foundations

Algorithm Steps:
1. Standardize data: $X_{scaled} = \frac{X - \mu}{\sigma}$
2. Compute covariance matrix: $\Sigma = \frac{1}{n-1} X^T X$
3. Calculate eigenvalues $\lambda$ and eigenvectors $v$
4. Sort eigenvectors by eigenvalues (descending)
5. Select top $k$ eigenvectors (principal components)
6. Transform: $X_{new} = X \times v_k$

Explained Variance: $\frac{\lambda_i}{\sum \lambda_j}$
Cumulative Variance: Shows total information preserved

Why PCA Works:
• Removes correlated features
• Captures maximum variance in fewer dimensions
• Components are orthogonal (no correlation)
💡 PCA is unsupervised - it doesn't use the target variable. First PC always captures most variance.
⚠️ Not standardizing before PCA is a critical error - features with large scales will dominate.
✅ Aim for 95% cumulative explained variance when choosing number of components.

🧠 Under the Hood: PCA Math

PCA finds the directions (Principal Components) that maximize the variance of the data. Mathematically, it works by computing the covariance matrix of the standardized dataset $X$:

$$ \Sigma = \frac{1}{n-1} X^T X $$

Then we solve the eigenvalue problem $\Sigma V = \lambda V$ for the eigenvectors $V$ and eigenvalues $\lambda$.

  • Eigenvectors ($v_i$) are the axes of the new feature space (the directions).
  • Eigenvalues ($\lambda_i$) represent the magnitude of variance captured by each vector.

pca.py - Principal Component Analysis
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
import numpy as np

# 1. ALWAYS scale data before PCA
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# 2. Fit PCA without specifying components to see all variance
pca_full = PCA()
pca_full.fit(X_scaled)

# 3. Plot Cumulative Explained Variance
cumulative_variance = np.cumsum(pca_full.explained_variance_ratio_)
plt.plot(cumulative_variance, marker='o')
plt.axhline(y=0.95, color='r', linestyle='--') # 95% threshold
plt.xlabel('Number of Components')
plt.ylabel('Cumulative Explained Variance')
plt.show()

# 4. Apply PCA retaining 95% variance
# Float between 0 and 1 selects components covering that % of variance
pca = PCA(n_components=0.95)
X_pca = pca.fit_transform(X_scaled)
print(f"Reduced from {X.shape[1]} to {X_pca.shape[1]} features.")

Use Cases

  • Image compression (reduce pixel dimensions)
  • Genomics (thousands of genes → few principal components)
  • Visualization (project high-D data to 2D for plotting)
  • Speed up training (fewer features = faster models)

Common Mistakes

  • ⚠️ Applying PCA before train-test split (data leakage)
  • ⚠️ Using PCA with categorical features (PCA is for numerical data)
  • ⚠️ Losing interpretability (PCs are linear combinations)

Text Data (NLP Basics)

Real-world tabular data often contains unstructured text (e.g., reviews, titles). Algorithms require numbers, so we must vectorize this text into numerical representations.

Real Example: Converting thousands of Amazon product reviews into numeric features allows a classification model to predict positive vs. negative sentiment.

Mathematical Foundations

Bag of Words (BoW): Represents text by counting the frequency of each word, ignoring grammar and order.

TF-IDF (Term Frequency - Inverse Document Frequency):
Penalizes frequent, uninformative words (like "the", "and") while boosting rare, meaningful words.

$$ \text{TF-IDF}(t, d, D) = \text{TF}(t, d) \times \text{IDF}(t, D) $$
• $\text{TF}$: (count of term $t$ in document $d$) / (total terms in $d$)
• $\text{IDF}$: $\log \left( \frac{\text{Total Documents } N}{\text{Documents containing term } t} \right)$

text_features.py - Scikit-Learn Vectorizers
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
import pandas as pd

# Sample text column
corpus = [
    "Machine learning is amazing",
    "Deep learning is the future of learning",
    "Data science and artificial intelligence"
]

# 1. Bag of Words (CountVectorizer)
# Creates a column for every unique word in the corpus
vectorizer = CountVectorizer(stop_words='english')
X_bow = vectorizer.fit_transform(corpus)

# 2. TF-IDF (TfidfVectorizer)
# Converts words to continuous weights between 0 and 1
tfidf = TfidfVectorizer(stop_words='english', max_features=100)
X_tfidf = tfidf.fit_transform(corpus)

# Quick way to view features as a DataFrame
tfidf_df = pd.DataFrame(X_tfidf.toarray(), columns=tfidf.get_feature_names_out())
print(tfidf_df.head())

Meta-Features

Before throwing text into a vectorizer, you can extract powerful meta-features using pure Python or Pandas:

  • Word count: df['text'].apply(lambda x: len(str(x).split()))
  • Character count: df['text'].apply(lambda x: len(str(x)))
  • Count of punctuation/capitals: (Often strongly correlated with SPAM or fake reviews).
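
Consolidated as a runnable sketch (the 'text' column name is a hypothetical placeholder):

meta_features.py - Text Meta-Features
import string

df['word_count'] = df['text'].apply(lambda x: len(str(x).split()))
df['char_count'] = df['text'].apply(lambda x: len(str(x)))
df['punct_count'] = df['text'].apply(lambda x: sum(ch in string.punctuation for ch in str(x)))
df['caps_count'] = df['text'].apply(lambda x: sum(ch.isupper() for ch in str(x)))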

Time-Series Feature Engineering

Time-series data assumes that past values influence future values. We cannot simply shuffle rows; order matters. We must engineer features that capture chronological patterns.

Mathematical Foundations

Lag Features: Shifting the target variable back by $t$ steps. "What was yesterday's sales?"
$X_{lag\_1} = Y_{t-1}$

Rolling Windows: Computing statistics over a moving window of past data. Smoothes out short-term fluctuations to reveal trends.
• Simple Moving Average (SMA) for window $w$:
$$ SMA_t = \frac{1}{w} \sum_{i=1}^{w} Y_{t-i} $$
Expanding Windows: Computes statistics from the very beginning of the dataset up to the current point $t$ (e.g., cumulative sum or cumulative max).

time_series.py - Lags and Rolling Windows
import pandas as pd

# Assuming 'df' is sorted chronologically and indexed by Date
# 1. Lag Features (Looking back in time)
# What was the value 1 day ago? 7 days ago?
df['sales_lag_1'] = df['sales'].shift(1)
df['sales_lag_7'] = df['sales'].shift(7)

# 2. Rolling Window Features
# The average and standard deviation over the last 7 days
df['sales_rolling_mean_7d'] = df['sales'].rolling(window=7).mean()
df['sales_rolling_std_7d'] = df['sales'].rolling(window=7).std()

# 3. Expanding Window Features
# Year-to-date maximum sales
df['sales_expanding_max'] = df['sales'].expanding().max()

# Drop NaNs generated by shifting/rolling
df.dropna(inplace=True)

Target Leakage (Data Leakage)

Data Leakage occurs when information from outside the training dataset is used to create the model. This guarantees amazing performance during training/validation, but total failure in the real world.

⚠️ The most common cause of leakage is performing feature engineering (Scaling, Imputing, TF-IDF) on the ENTIRE dataset before calling train_test_split.

🧠 Under the Hood: The Contamination Problem

Imagine using StandardScaler on your entire dataset. The scaler calculates the global $\mu$ (mean) and $\sigma$ (standard deviation) to scale the data.

If you split the data after scaling, your Training Data has been transformed using statistics computed partly from the Test Data. The Test Data is supposed to be completely unseen, but you just "leaked" its summary statistics into the training process.

leakage.py - The Golden Rule of Fit vs Transform
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# ❌ BAD PRACTICE (Creates Leakage)
scaler_bad = StandardScaler()
X_scaled_bad = scaler_bad.fit_transform(X) # Entire dataset sees the scaler
X_train_bad, X_test_bad = train_test_split(X_scaled_bad)

# ✅ GOOD PRACTICE (No Leakage)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

scaler = StandardScaler()
# Fit ONLY on the training data to learn parameters (mean, std)
X_train_scaled = scaler.fit_transform(X_train) 

# Transform test data using the parameters learned from the training data
X_test_scaled = scaler.transform(X_test)

✅ The easiest way to prevent leakage in production is to package all your feature engineering steps inside a Scikit-Learn Pipeline, as sketched below.
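
A minimal sketch, assuming the X and y from earlier; every step is fit on training data only, so cross-validation also stays leak-free.

pipeline.py - Leakage-Proof Preprocessing Pipeline
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

pipe = Pipeline([
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler()),
    ('model', LogisticRegression(max_iter=1000)),
])

pipe.fit(X_train, y_train)         # imputation values and scaling params are learned from train only
print(pipe.score(X_test, y_test))  # test rows are only ever transformed, never fit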

Automated Feature Engineering

In complex, multi-table relational databases, manually creating features is incredibly tedious. Automated Feature Engineering relies on algorithms to automatically synthesize hundreds of new features from relational datasets.

Deep Feature Synthesis (DFS)

DFS stacks mathematical primitives (like computing sums, counts, averages, and time-since-last-event) across entity relationships (e.g., Customers $\xrightarrow{\text{1 to M}}$ Orders $\xrightarrow{\text{1 to M}}$ Order_Items).

Real Example: From a raw database of e-commerce transactions, DFS can automatically generate complex features like: "The average value of a customer's orders over the last 30 days" or "The standard deviation of time between a user's logins."

autofe.py - Featuretools Library
import featuretools as ft

# Assume we have two Pandas DataFrames: clients and loans
# (a payments table could be added to the EntitySet the same way)
# Step 1: Create an EntitySet (a representation of your database)
es = ft.EntitySet(id="banking")

# Step 2: Add dataframes to the EntitySet with primary keys
es = es.add_dataframe(dataframe_name="clients", dataframe=clients_df, index="client_id")
es = es.add_dataframe(dataframe_name="loans", dataframe=loans_df, index="loan_id")

# Step 3: Define relational joins (Foreign Keys)
es = es.add_relationship("clients", "client_id", "loans", "client_id")

# Step 4: Run Deep Feature Synthesis!
# Automatically generates agg features for clients based on their loans history
feature_matrix, feature_defs = ft.dfs(
    entityset=es,
    target_dataframe_name="clients",
    agg_primitives=["mean", "sum", "mode", "std"],
    trans_primitives=["month", "hour"],
    max_depth=2 # Stacks primitives up to 2 layers deep
)

print(f"Automatically generated {len(feature_defs)} features!")