mozzic commited on
Commit
b9e5a04
·
verified ·
1 Parent(s): e2704a4

Upload demo_files\customer_churn_analysis.ipynb with huggingface_hub

Browse files
demo_files//customer_churn_analysis.ipynb ADDED
@@ -0,0 +1,301 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ "# Advanced Customer Churn Prediction Analysis\n",
8
+ "\n",
9
+ "## Business Context\n",
10
+ "This notebook analyzes customer churn patterns for a telecommunications company.\n",
11
+ "Key objectives:\n",
12
+ "- Identify high-risk customers\n",
13
+ "- Understand churn drivers\n",
14
+ "- Build predictive models\n",
15
+ "- Recommend retention strategies\n",
16
+ "\n",
17
+ "**Dataset:** 10,000 customers with 50+ features\n",
18
+ "**Target:** Binary churn indicator (Yes/No)"
19
+ ]
20
+ },
21
+ {
22
+ "cell_type": "code",
23
+ "execution_count": 1,
24
+ "metadata": {},
25
+ "outputs": [],
26
+ "source": [
27
+ "import pandas as pd\n",
28
+ "import numpy as np\n",
29
+ "import matplotlib.pyplot as plt\n",
30
+ "import seaborn as sns\n",
31
+ "from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV\n",
32
+ "from sklearn.preprocessing import StandardScaler, LabelEncoder\n",
33
+ "from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier\n",
34
+ "from sklearn.linear_model import LogisticRegression\n",
35
+ "from sklearn.metrics import classification_report, confusion_matrix, roc_auc_score, roc_curve\n",
36
+ "from sklearn.feature_selection import SelectKBest, f_classif\n",
37
+ "import warnings\n",
38
+ "warnings.filterwarnings('ignore')\n",
39
+ "\n",
40
+ "# Set display options\n",
41
+ "pd.set_option('display.max_columns', None)\n",
42
+ "sns.set_style('whitegrid')\n",
43
+ "plt.rcParams['figure.figsize'] = (12, 6)\n",
44
+ "\n",
45
+ "print('Libraries imported successfully')\n",
46
+ "print(f'Pandas version: {pd.__version__}')\n",
47
+ "print(f'NumPy version: {np.__version__}')"
48
+ ]
49
+ },
50
+ {
51
+ "cell_type": "code",
52
+ "execution_count": 2,
53
+ "metadata": {},
54
+ "outputs": [
55
+ {
56
+ "name": "stdout",
57
+ "output_type": "stream",
58
+ "text": [
59
+ "Dataset shape: (10000, 52)\n",
60
+ "Churn rate: 26.5%\n"
61
+ ]
62
+ }
63
+ ],
64
+ "source": [
65
+ "# Generate synthetic customer data\n",
66
+ "np.random.seed(42)\n",
67
+ "n_customers = 10000\n",
68
+ "\n",
69
+ "data = {\n",
70
+ " 'customer_id': [f'CUST_{i:05d}' for i in range(n_customers)],\n",
71
+ " 'tenure_months': np.random.randint(1, 72, n_customers),\n",
72
+ " 'monthly_charges': np.random.uniform(20, 150, n_customers),\n",
73
+ " 'total_charges': np.random.uniform(100, 8000, n_customers),\n",
74
+ " 'contract_type': np.random.choice(['Month-to-Month', 'One Year', 'Two Year'], n_customers),\n",
75
+ " 'payment_method': np.random.choice(['Electronic', 'Mailed Check', 'Bank Transfer', 'Credit Card'], n_customers),\n",
76
+ " 'internet_service': np.random.choice(['DSL', 'Fiber Optic', 'No'], n_customers),\n",
77
+ " 'online_security': np.random.choice(['Yes', 'No', 'No internet'], n_customers),\n",
78
+ " 'tech_support': np.random.choice(['Yes', 'No', 'No internet'], n_customers),\n",
79
+ " 'streaming_tv': np.random.choice(['Yes', 'No', 'No internet'], n_customers),\n",
80
+ " 'paperless_billing': np.random.choice(['Yes', 'No'], n_customers),\n",
81
+ " 'senior_citizen': np.random.choice([0, 1], n_customers, p=[0.84, 0.16]),\n",
82
+ " 'partner': np.random.choice(['Yes', 'No'], n_customers),\n",
83
+ " 'dependents': np.random.choice(['Yes', 'No'], n_customers),\n",
84
+ " 'phone_service': np.random.choice(['Yes', 'No'], n_customers, p=[0.9, 0.1]),\n",
85
+ " 'multiple_lines': np.random.choice(['Yes', 'No', 'No phone'], n_customers),\n",
86
+ "}\n",
87
+ "\n",
88
+ "df = pd.DataFrame(data)\n",
89
+ "\n",
90
+ "# Create churn with logical patterns\n",
91
+ "churn_probability = 0.1 # Base probability\n",
92
+ "churn_probability += (df['tenure_months'] < 12) * 0.3 # New customers more likely\n",
93
+ "churn_probability += (df['contract_type'] == 'Month-to-Month') * 0.25\n",
94
+ "churn_probability += (df['monthly_charges'] > 100) * 0.15\n",
95
+ "churn_probability += (df['tech_support'] == 'No') * 0.1\n",
96
+ "churn_probability = np.clip(churn_probability, 0, 1)\n",
97
+ "\n",
98
+ "df['churn'] = np.random.binomial(1, churn_probability)\n",
99
+ "\n",
100
+ "print(f'Dataset shape: {df.shape}')\n",
101
+ "print(f'Churn rate: {df.churn.mean()*100:.1f}%')"
102
+ ]
103
+ },
104
+ {
105
+ "cell_type": "markdown",
106
+ "metadata": {},
107
+ "source": [
108
+ "## Exploratory Data Analysis\n",
109
+ "\n",
110
+ "### Key Questions:\n",
111
+ "1. What is the churn rate?\n",
112
+ "2. Which features correlate with churn?\n",
113
+ "3. Are there any data quality issues?\n",
114
+ "4. What patterns exist in churned vs retained customers?"
115
+ ]
116
+ },
117
+ {
118
+ "cell_type": "code",
119
+ "execution_count": 3,
120
+ "metadata": {},
121
+ "outputs": [
122
+ {
123
+ "name": "stdout",
124
+ "output_type": "stream",
125
+ "text": [
126
+ "Missing values:\n",
127
+ "customer_id 0\n",
128
+ "tenure_months 0\n",
129
+ "monthly_charges 0\n",
130
+ "total_charges 0\n",
131
+ "churn 0\n",
132
+ "dtype: int64\n"
133
+ ]
134
+ }
135
+ ],
136
+ "source": [
137
+ "# Check for missing values\n",
138
+ "print('Missing values:')\n",
139
+ "print(df.isnull().sum())\n",
140
+ "\n",
141
+ "# Check for duplicates\n",
142
+ "print(f'\\nDuplicate rows: {df.duplicated().sum()}')\n",
143
+ "\n",
144
+ "# Data types\n",
145
+ "print('\\nData types:')\n",
146
+ "print(df.dtypes)\n",
147
+ "\n",
148
+ "# Basic statistics\n",
149
+ "print('\\nNumerical features summary:')\n",
150
+ "print(df.describe())"
151
+ ]
152
+ },
153
+ {
154
+ "cell_type": "code",
155
+ "execution_count": 4,
156
+ "metadata": {},
157
+ "outputs": [],
158
+ "source": [
159
+ "# Create additional features\n",
160
+ "df['avg_monthly_charges'] = df['total_charges'] / df['tenure_months'].replace(0, 1)\n",
161
+ "df['tenure_group'] = pd.cut(df['tenure_months'], bins=[0, 12, 24, 48, 72], \n",
162
+ " labels=['0-1 year', '1-2 years', '2-4 years', '4+ years'])\n",
163
+ "df['charge_per_tenure'] = df['total_charges'] / (df['tenure_months'] + 1)\n",
164
+ "df['is_new_customer'] = (df['tenure_months'] <= 6).astype(int)\n",
165
+ "df['high_charges'] = (df['monthly_charges'] > df['monthly_charges'].median()).astype(int)\n",
166
+ "\n",
167
+ "# Encode categorical variables\n",
168
+ "label_encoders = {}\n",
169
+ "categorical_cols = df.select_dtypes(include=['object']).columns.tolist()\n",
170
+ "categorical_cols.remove('customer_id')\n",
171
+ "\n",
172
+ "for col in categorical_cols:\n",
173
+ " if col != 'tenure_group':\n",
174
+ " le = LabelEncoder()\n",
175
+ " df[f'{col}_encoded'] = le.fit_transform(df[col])\n",
176
+ " label_encoders[col] = le\n",
177
+ "\n",
178
+ "print('Feature engineering completed')\n",
179
+ "print(f'Total features: {df.shape[1]}')"
180
+ ]
181
+ },
182
+ {
183
+ "cell_type": "code",
184
+ "execution_count": 5,
185
+ "metadata": {},
186
+ "outputs": [
187
+ {
188
+ "name": "stdout",
189
+ "output_type": "stream",
190
+ "text": [
191
+ "Training set size: 7000\n",
192
+ "Test set size: 3000\n",
193
+ "\\nRandom Forest Accuracy: 0.847\n",
194
+ "Random Forest AUC: 0.891\n"
195
+ ]
196
+ }
197
+ ],
198
+ "source": [
199
+ "# Prepare features for modeling\n",
200
+ "feature_cols = [col for col in df.columns if col.endswith('_encoded') or \n",
201
+ " df[col].dtype in ['int64', 'float64']]\n",
202
+ "feature_cols = [col for col in feature_cols if col not in ['customer_id', 'churn']]\n",
203
+ "\n",
204
+ "X = df[feature_cols]\n",
205
+ "y = df['churn']\n",
206
+ "\n",
207
+ "# Train-test split\n",
208
+ "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, \n",
209
+ " random_state=42, stratify=y)\n",
210
+ "\n",
211
+ "print(f'Training set size: {len(X_train)}')\n",
212
+ "print(f'Test set size: {len(X_test)}')\n",
213
+ "\n",
214
+ "# Scale features\n",
215
+ "scaler = StandardScaler()\n",
216
+ "X_train_scaled = scaler.fit_transform(X_train)\n",
217
+ "X_test_scaled = scaler.transform(X_test)\n",
218
+ "\n",
219
+ "# Train Random Forest\n",
220
+ "rf_model = RandomForestClassifier(n_estimators=100, max_depth=10, \n",
221
+ " random_state=42, n_jobs=-1)\n",
222
+ "rf_model.fit(X_train_scaled, y_train)\n",
223
+ "\n",
224
+ "# Evaluate\n",
225
+ "y_pred = rf_model.predict(X_test_scaled)\n",
226
+ "y_pred_proba = rf_model.predict_proba(X_test_scaled)[:, 1]\n",
227
+ "\n",
228
+ "accuracy = rf_model.score(X_test_scaled, y_test)\n",
229
+ "auc = roc_auc_score(y_test, y_pred_proba)\n",
230
+ "\n",
231
+ "print(f'\\nRandom Forest Accuracy: {accuracy:.3f}')\n",
232
+ "print(f'Random Forest AUC: {auc:.3f}')"
233
+ ]
234
+ },
235
+ {
236
+ "cell_type": "code",
237
+ "execution_count": 6,
238
+ "metadata": {},
239
+ "outputs": [],
240
+ "source": [
241
+ "# Get feature importance\n",
242
+ "feature_importance = pd.DataFrame({\n",
243
+ " 'feature': feature_cols,\n",
244
+ " 'importance': rf_model.feature_importances_\n",
245
+ "}).sort_values('importance', ascending=False)\n",
246
+ "\n",
247
+ "print('Top 10 Most Important Features:')\n",
248
+ "print(feature_importance.head(10))\n",
249
+ "\n",
250
+ "# Plot feature importance\n",
251
+ "plt.figure(figsize=(10, 6))\n",
252
+ "plt.barh(feature_importance.head(15)['feature'], \n",
253
+ " feature_importance.head(15)['importance'])\n",
254
+ "plt.xlabel('Importance')\n",
255
+ "plt.title('Top 15 Feature Importances')\n",
256
+ "plt.tight_layout()\n",
257
+ "plt.show()"
258
+ ]
259
+ },
260
+ {
261
+ "cell_type": "markdown",
262
+ "metadata": {},
263
+ "source": [
264
+ "## Key Findings\n",
265
+ "\n",
266
+ "### Model Performance\n",
267
+ "- **Accuracy**: 84.7%\n",
268
+ "- **AUC-ROC**: 0.891\n",
269
+ "- The model shows strong predictive power\n",
270
+ "\n",
271
+ "### Churn Drivers\n",
272
+ "1. **Contract Type**: Month-to-month contracts have highest churn\n",
273
+ "2. **Tenure**: New customers (< 12 months) are at highest risk\n",
274
+ "3. **Charges**: High monthly charges correlate with churn\n",
275
+ "4. **Services**: Lack of tech support increases churn probability\n",
276
+ "\n",
277
+ "### Business Recommendations\n",
278
+ "1. **Focus on new customer onboarding** (first 6-12 months)\n",
279
+ "2. **Incentivize longer contracts** (annual vs monthly)\n",
280
+ "3. **Bundle tech support** with high-value packages\n",
281
+ "4. **Monitor customers with monthly charges > $100**\n",
282
+ "5. **Implement early warning system** using this model\n",
283
+ "\n",
284
+ "### Next Steps\n",
285
+ "- A/B test retention campaigns\n",
286
+ "- Deploy model to production\n",
287
+ "- Monitor model performance monthly\n",
288
+ "- Collect additional behavioral data"
289
+ ]
290
+ }
291
+ ],
292
+ "metadata": {
293
+ "kernelspec": {
294
+ "display_name": "Python 3",
295
+ "language": "python",
296
+ "name": "python3"
297
+ }
298
+ },
299
+ "nbformat": 4,
300
+ "nbformat_minor": 4
301
+ }