markdown: stringlengths 0 to 1.02M
code: stringlengths 0 to 832k
output: stringlengths 0 to 1.02M
license: stringlengths 3 to 36
path: stringlengths 6 to 265
repo_name: stringlengths 6 to 127
Data Loading
import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # Load the json files for processing portfolio = pd.read_json('data/portfolio.json', orient='records', lines=True) profile = pd.read_json('data/profile.json', orient='records', lines=True) transcript = pd.read_json('data/transcript.json', orient='records', lines=True)
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
Data Exploration Portfolio
portfolio.head() items, attributes = portfolio.shape print("Portfolio dataset has {} records and {} attributes".format(items, attributes)) portfolio.info() portfolio.describe(include='all') plt.figure(figsize=[5,5]) fig, ax = plt.subplots() category_count = portfolio.offer_type.value_counts() category_count.plot(ki...
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
Profile
profile.head(5) items, attributes = profile.shape print("Profile dataset has {} records and {} attributes".format(items, attributes)) profile.info() profile.describe(include="all") #check for null values profile.isnull().sum() profile.duplicated().sum() # age distribution profile.age.hist(); sns.boxplot(profile['age...
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
Age 118 seems to be an outlier. Let's explore it further.
profile[profile['age']== 118].age.count() profile[profile.age == 118][['gender','income']]
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
From the above analysis we see that wherever age is 118, gender and income are null, and there are 2175 such rows. We will therefore drop all instances where age equals 118, as these are erroneous records.
## Gender-wise age distribution sns.distplot(profile[profile.gender=='M'].age,label='Male') sns.distplot(profile[profile.gender=='F'].age,label='Female') sns.distplot(profile[profile.gender=='O'].age,label='Other') plt.legend() plt.show() # distribution of income profile.income.hist(); profile['income'].mean() # Gender...
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
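The claim above (age-118 rows are exactly the rows with null gender and income) can be checked before dropping them; a minimal sketch on a toy profile frame (column names assumed from profile.json):

```python
import pandas as pd

# Toy stand-in for the real profile frame (assumed schema).
profile = pd.DataFrame({
    "age": [25, 118, 40, 118],
    "gender": ["M", None, "F", None],
    "income": [50000.0, None, 70000.0, None],
})

# Rows with age 118 should have null gender and income.
mask = profile["age"] == 118
assert profile.loc[mask, ["gender", "income"]].isnull().all().all()

# Drop the erroneous records.
clean = profile[~mask]
print(len(clean))  # 2
```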
Transcript
transcript.head() items, attributes = transcript.shape print("Transcript dataset has {} records and {} attributes".format(items, attributes)) transcript.info() #check for null values transcript.isnull().sum() transcript['event'].value_counts() keys = transcript['value'].apply(lambda x: list(x.keys())) possible_keys = s...
{'offer id', 'amount', 'offer_id', 'reward'}
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
The **value** attribute has three possible keys: 1. offer id / offer_id 2. amount 3. reward Data Cleaning & Transformation Portfolio Renaming columns for better understanding and meaningfulness
#Rename columns new_cols_name = {'difficulty':'offer_difficulty' , 'id':'offer_id', 'duration':'offer_duration', 'reward': 'offer_reward'} portfolio = portfolio.rename(columns=new_cols_name )
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
Exploding the channels attribute into four separate attributes: email, mobile, social, web
dummy = pd.get_dummies(portfolio.channels.apply(pd.Series).stack()).sum(level=0) portfolio = pd.concat([portfolio, dummy], axis=1) portfolio.drop(columns='channels', inplace=True) portfolio.head()
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
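An equivalent, sketch-level way to expand the list-valued channels column uses `Series.explode`; toy frame assumed, and the result matches the notebook's `get_dummies(...stack())` approach:

```python
import pandas as pd

# Toy portfolio frame with a list-valued 'channels' column (assumption).
portfolio = pd.DataFrame({
    "offer_id": ["a", "b"],
    "channels": [["email", "web"], ["email", "mobile", "social"]],
})

# One indicator column per channel, summed back to one row per offer.
dummy = portfolio["channels"].explode().str.get_dummies().groupby(level=0).sum()
portfolio = pd.concat([portfolio.drop(columns="channels"), dummy], axis=1)
print(sorted(portfolio.columns))  # ['email', 'mobile', 'offer_id', 'social', 'web']
```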
Profile Renaming columns for better understanding & meaningfulness
#Rename columns cols_profile = {'id':'customer_id' , 'income':'customer_income'} profile = profile.rename(columns=cols_profile)
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
Removing rows with missing values. We saw above that all nulls belong to age 118, which are outliers.
#drop all rows which has null value profile = profile.loc[profile['gender'].isnull() == False]
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
Classifying ages into groups for better understanding in Exploratory Data Analysis later: * Under 20 * 21-35 * 36-50 * 51-65 * Above 65
#Convert ages into age group profile.loc[(profile.age <= 20) , 'Age_group'] = 'Under 20' profile.loc[(profile.age >= 21) & (profile.age <= 35) , 'Age_group'] = '21-35' profile.loc[(profile.age >= 36) & (profile.age <= 50) , 'Age_group'] = '36-50' profile.loc[(profile.age >= 51) & (profile.age <= 65) , 'Age_group'] = '5...
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
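The chained `.loc` assignments can be condensed with `pd.cut`; a sketch assuming the same bin edges:

```python
import pandas as pd

profile = pd.DataFrame({"age": [18, 30, 45, 60, 80]})  # toy ages (assumption)

bins = [0, 20, 35, 50, 65, 200]  # right-closed bins: (0, 20], (20, 35], ...
labels = ["Under 20", "21-35", "36-50", "51-65", "Above 65"]
profile["Age_group"] = pd.cut(profile["age"], bins=bins, labels=labels)
print(profile["Age_group"].tolist())
# ['Under 20', '21-35', '36-50', '51-65', 'Above 65']
```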
Classifying income into income groups for better understanding in Exploratory Data Analysis later: * 30-50K * 50-80K * 80-110K * Above 110K
#Convert income into income group profile.loc[(profile.customer_income >= 30000) & (profile.customer_income <= 50000) , 'Income_group'] = '30-50K' profile.loc[(profile.customer_income >= 50001) & (profile.customer_income <= 80000) , 'Income_group'] = '50-80K' profile.loc[(profile.customer_income >= 80001) & (profile.cu...
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
Converting became_member_on to a more quantitative term, member_since_days. This depicts how long the customer has been a member of the program.
#Convert joining date to duration in days for which the customer is member profile['became_member_on'] = pd.to_datetime(profile['became_member_on'], format='%Y%m%d') baseline_date = max(profile['became_member_on']) profile['member_since_days'] = profile['became_member_on'].apply(lambda x: (baseline_date - x).days) prof...
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
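The date arithmetic can also be done vectorized, without `apply`; a sketch with toy YYYYMMDD integers (the raw profile.json format assumed):

```python
import pandas as pd

profile = pd.DataFrame({"became_member_on": [20170101, 20180101]})  # toy data
profile["became_member_on"] = pd.to_datetime(profile["became_member_on"], format="%Y%m%d")

# Days since joining, relative to the newest member in the data.
baseline_date = profile["became_member_on"].max()
profile["member_since_days"] = (baseline_date - profile["became_member_on"]).dt.days
print(profile["member_since_days"].tolist())  # [365, 0]
```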
Transcript Renaming columns for better understanding & meaningfulness
#Rename columns transcript_cols = {'person':'customer_id'} transcript = transcript.rename(columns=transcript_cols)
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
Replacing the space in event values with a hyphen, since column names without spaces are easier to maintain when we explode.
transcript['event'] = transcript['event'].str.replace(' ', '-')
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
Splitting the value column into three columns based on the keys of the dictionary: offer_id, reward, and amount. We will also merge 'offer_id' and 'offer id' into a single offer_id attribute.
transcript['offer_id'] = transcript['value'].apply(lambda x: x.get('offer_id')) transcript['offer id'] = transcript['value'].apply(lambda x: x.get('offer id')) transcript['reward'] = transcript['value'].apply(lambda x: x.get('reward')) transcript['amount'] = transcript['value'].apply(lambda x: x.get('amount')) transcr...
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
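A sketch of the key extraction and offer-id coalescing on a toy transcript (toy dictionaries assumed; `combine_first` merges the two spellings):

```python
import pandas as pd

transcript = pd.DataFrame({"value": [   # toy 'value' dicts (assumption)
    {"offer id": "A"},
    {"offer_id": "B", "reward": 5},
    {"amount": 12.5},
]})

# Pull each possible dictionary key into its own column.
for key in ["offer_id", "offer id", "reward", "amount"]:
    transcript[key] = transcript["value"].apply(lambda x: x.get(key))

# Coalesce the two offer-id spellings into a single column.
transcript["offer_id"] = transcript["offer_id"].combine_first(transcript["offer id"])
transcript = transcript.drop(columns=["value", "offer id"])
print(transcript["offer_id"].tolist())
```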
Preparing data for Analysis Merging the three tables
merged_df = pd.merge(portfolio, transcript, on='offer_id') merged_df = pd.merge(merged_df, profile, on='customer_id') merged_df.head() merged_df.groupby(['event','offer_type'])['offer_type'].count() merged_df['event'] = merged_df['event'].map({'offer-received':1, 'offer-viewed':2, 'offer-completed':3})
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
Generating the target variable When a customer completes an offer against an offer_id, we label that as a success. If the event is not offer-completed, the customer_id, offer_id pair is considered unsuccessful ad targeting.
#Create a target variable from event merged_df['Offer_Encashed'] = 0 for row in range(merged_df.shape[0]): current_event = merged_df.at[row,'event'] if current_event == 3: merged_df.at[row,'Offer_Encashed'] = 1 merged_df.Offer_Encashed.value_counts() merged_df['offer_type'].value_counts().plot.barh(titl...
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
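The row loop above can be replaced by a single vectorized comparison; a sketch assuming event has already been mapped to 1/2/3:

```python
import pandas as pd

merged_df = pd.DataFrame({"event": [1, 2, 3, 3, 1]})  # toy events (assumption)

# 1 where the offer was completed (event == 3), else 0.
merged_df["Offer_Encashed"] = (merged_df["event"] == 3).astype(int)
print(merged_df["Offer_Encashed"].tolist())  # [0, 0, 1, 1, 0]
```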
The Buy One Get One and discount offer types have similar distributions.
merged_df['Age_group'].value_counts().plot.barh(title=' Distribution of age groups')
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
It is quite surprising to see that customers above 60 use the Starbucks application the most, with those aged 40-60 second. One would usually expect customers between the ages of 20 and 45 to use the app the most, but that is not the case here.
merged_df['event'].value_counts().plot.barh(title=' Event distribution')
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
The distribution follows the sales funnel: offers received > offers viewed > offers completed.
plt.figure(figsize=(15, 5)) sns.countplot(x="Age_group", hue="gender", data=merged_df) sns.set(style="whitegrid") plt.title('Gender distribution in different age groups') plt.ylabel('No of instances') plt.xlabel('Age Group') plt.legend(title='Gender')
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
Male customers outnumber female customers in each age group, but in the above-60 range the distribution is almost 50-50.
plt.figure(figsize=(15, 5)) sns.countplot(x="event", hue="gender", data=merged_df) plt.title('Distribution of Event Type by Gender ') plt.ylabel('No of instances') plt.xlabel('Event Type') plt.legend(title='Gender') plt.figure(figsize=(15, 5)) sns.countplot(x="event", hue="offer_type", data=merged_df) plt.title('Distri...
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
From the graph we can infer that discount offers, once viewed, are very likely to be completed.
plt.figure(figsize=(15, 5)) sns.countplot(x="Age_group", hue="event", data=merged_df) plt.title('Event type distribution by age group') plt.ylabel('No of instances') plt.xlabel('Age Group') plt.legend(title='Event Type')
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
iv) Build a Machine Learning model to predict the response of a customer to an offer 1. Data Preparation and Cleaning II Tasks: 1. Encode categorical data such as gender, offer type and age groups. 2. Encode the 'event' data to numerical values: * offer received ---> 1 * offer viewed ---> 2 * offer completed ---> ...
dummy = pd.get_dummies(merged_df.offer_type.apply(pd.Series).stack()).sum(level=0) merged_df = pd.concat([merged_df, dummy], axis=1) merged_df.drop(columns='offer_type', inplace=True) dummy = pd.get_dummies(merged_df.gender.apply(pd.Series).stack()).sum(level=0) merged_df = pd.concat([merged_df, dummy], axis=1) merged_...
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
Distribution of offer encashment by age group and gender.
sns.set_style('whitegrid') bar_color= ['r', 'g', 'y', 'c', 'm'] fig,ax= plt.subplots(1,3,figsize=(15,5)) fig.tight_layout() merged_df[merged_df['Offer_Encashed']==1][['F','M','O']].sum().plot.bar(ax=ax[0], fontsize=10,color=bar_color) ax[0].set_title(" Offer Encashed - Gender Wise") ax[0].set_xlabel("Gender") ax[0].s...
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
2. Split train and test data Final data is ready after tasks 1-5. We will now split the data (both features and their labels) into training and test sets, taking 70% of the data for training and 30% for testing.
data = merged_df.drop('Offer_Encashed', axis=1) label = merged_df['Offer_Encashed'] X_train, X_test, y_train, y_test = train_test_split(data, label, test_size = 0.3, random_state = 4756) print("Train: {} Test {}".format(X_train.shape[0], X_test.shape[0]))
Train: 52300 Test 22415
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
Model training and testing Metrics We will consider the F1 score as the model metric to assess the quality of the approach and determine which model gives the best results. It can be interpreted as the weighted average of the precision and recall. The traditional or balanced F-score (F1 score) is the harmonic mean of...
def get_model_scores(classifier): classifier.fit(X_train, y_train) train_prediction = classifier.predict(X_train) test_predictions = classifier.predict(X_test) f1_train = accuracy_score(y_train, train_prediction)*100 f1_test = fbeta_score(y_test, test_predictions, beta = 0.5, average='m...
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
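A self-contained sketch of a scoring helper that fits once and reports the F-beta score on both splits; the data here is a synthetic stand-in, not the notebook's merged frame:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import fbeta_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data (assumption, for illustration only).
X, y = make_classification(n_samples=200, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def get_model_scores(classifier):
    classifier.fit(X_train, y_train)  # fit once, reuse for both splits
    f1_train = fbeta_score(y_train, classifier.predict(X_train), beta=0.5) * 100
    f1_test = fbeta_score(y_test, classifier.predict(X_test), beta=0.5) * 100
    return f1_train, f1_test, type(classifier).__name__

f1_train, f1_test, name = get_model_scores(LogisticRegression(max_iter=1000))
print(name)  # LogisticRegression
```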
LogisticRegression (Benchmark) I am using the LogisticRegression classifier as the benchmark, evaluating the model with the F1 score metric.
lr_clf = LogisticRegression(random_state = 10) lr_f1_train, lr_f1_test, lr_model = get_model_scores(lr_clf) linear = {'Benchmark Model': [ lr_model], 'F1-Score(Training)':[lr_f1_train], 'F1-Score(Test)': [lr_f1_test]} benchmark = pd.DataFrame(linear) benchmark
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
RandomForestClassifier
rf_clf = RandomForestClassifier(random_state = 10, criterion='gini', min_samples_leaf=10, min_samples_split=2, n_estimators=100) rf_f1_train, rf_f1_test, rf_model = get_model_scores(rf_clf)
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
DecisionTreeClassifier
dt_clf = DecisionTreeClassifier(random_state = 10) dt_f1_train, dt_f1_test, dt_model = get_model_scores(dt_clf)
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
K Nearest Neighbors
knn_clf = KNeighborsClassifier(n_neighbors = 5) knn_f1_train, knn_f1_test, knn_model = get_model_scores(knn_clf)
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
Classifier Evaluation Summary
performance_summary = {'Classifier': [lr_model, rf_model, dt_model, knn_model], 'F1-Score':[lr_f1_train, rf_f1_train, dt_f1_train, knn_f1_train] } performance_summary = pd.DataFrame(performance_summary) performance_summary
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
About the Dataset
#nextcell ratings = pd.read_csv('/Users/ankitkothari/Documents/gdrivre/UMD/MSML-602-DS/final_project/ratings_small.csv') movies = pd.read_csv('/Users/ankitkothari/Documents/gdrivre/UMD/MSML-602-DS/final_project/movies_metadata_features.csv')
_____no_output_____
MIT
Recommendations/recommendation_kmeans/recommendation_project_part2.ipynb
ankit-kothari/data_science_journey
Data Cleaning Dropping Columns
movies.drop(columns=['Unnamed: 0'],inplace=True) ratings = pd.merge(movies,ratings).drop(['genres','timestamp','imdb_id','overview','popularity','production_companies','production_countries','release_date','revenue','runtime','vote_average','year','vote_count','original_language'],axis=1) usri = int(input()) #587 #15 ...
15
MIT
Recommendations/recommendation_kmeans/recommendation_project_part2.ipynb
ankit-kothari/data_science_journey
Finding Similarity Matrix Creating a Pivot Table of Title against userId for ratings
userRatings = ratings.pivot_table(index=['title'],columns=['userId'],values='rating') userRatings = userRatings.dropna(thresh=10, axis=1).fillna(0,axis=1) corrMatrix = userRatings.corr(method='pearson') #corrMatrix = userRatings.corr(method='spearman') #corrMatrix = userRatings.corr(method='kendall')
_____no_output_____
MIT
Recommendations/recommendation_kmeans/recommendation_project_part2.ipynb
ankit-kothari/data_science_journey
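A minimal sketch of the pivot-and-correlate step on toy ratings (note the correlation is computed between the userId columns, i.e. user-user similarity):

```python
import pandas as pd

ratings = pd.DataFrame({                     # toy ratings (assumption)
    "title":  ["A", "B", "C", "A", "B", "C", "A", "C"],
    "userId": [1, 1, 1, 2, 2, 2, 3, 3],
    "rating": [5, 3, 1, 4, 3, 2, 1, 5],
})

# title x userId matrix; unrated titles filled with 0 as in the notebook.
userRatings = ratings.pivot_table(index="title", columns="userId", values="rating").fillna(0)
corrMatrix = userRatings.corr(method="pearson")  # user-user Pearson similarity
print(corrMatrix.shape)  # (3, 3)
```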
Fetching the most similar users from the Pearson correlation similarity matrix
def get_similar(usrid): similar_ratings = corrMatrix[usrid] similar_ratings = similar_ratings.sort_values(ascending=False) return similar_ratings
_____no_output_____
MIT
Recommendations/recommendation_kmeans/recommendation_project_part2.ipynb
ankit-kothari/data_science_journey
Recommendation
moidofotus = [0,0,0,0] s_m = pd.DataFrame() s_m = s_m.append(get_similar(usri), ignore_index=True) for c in range(0,4): moidofotus[c]=s_m.columns[c] if moidofotus[0] == usri: moidofotus.pop(0) print(moidofotus) movie_match=[] for i in moidofotus: select_user = ratings.loc[ratings['userId'] == i] #prin...
_____no_output_____
MIT
Recommendations/recommendation_kmeans/recommendation_project_part2.ipynb
ankit-kothari/data_science_journey
Performance Evaluation
movies_suggested_and_he_watched=0 total_suggest_movies = 0 for movies in movie_match: total_suggest_movies=total_suggest_movies+len(movies) for movie in movies: if movie in select_user['title'].to_list(): movies_suggested_and_he_watched=movies_suggested_and_he_watched+1 print(movies_suggeste...
27 30
MIT
Recommendations/recommendation_kmeans/recommendation_project_part2.ipynb
ankit-kothari/data_science_journey
Uninove Date: 17/02/2022 Professor: Leandro Romualdo da Silva Course: Artificial Intelligence Subject: Search Algorithms Summary: The code below creates the maze environment using the turtle library, and the agent must find the path out of the maze; the search for the exit uses alg...
import turtle ''' Parameters that delimit the maze, indicate the obstacles, the free paths to follow, the maze exit, and the identified correct path. PART_OF_PART - The correct path is signaled by returning to the starting point. TRIED - Path traversed by the agent. Signals the path it is search...
15 8 15 7 14 7 14 6 14 5 14 4 13 4 13 5 13 6 12 6 12 7 12 8 12 9
MIT
busca v0.5.ipynb
carvalhoandre/interpretacao_dados
Exercise 6: Collect data using APIs. Use the Exchange Rates API to get today's USD rates against other currencies: https://www.exchangerate-api.com/
import json import pprint import requests import pandas as pd r = requests.get("https://api.exchangerate-api.com/v4/latest/USD") data = r.json() pprint.pprint(data) df = pd.DataFrame(data) df.head()
_____no_output_____
MIT
Chapter04/Exercise 4.06/Exercise 4.06.ipynb
abhishekr128/The-Natural-Language-Processing-Workshop
Dataset Used: Titanic (https://www.kaggle.com/c/titanic). This dataset includes information about all the passengers on the Titanic. Various attributes of passengers like age, sex, class, etc. are recorded, and the final label 'survived' determines whether or not the passenger survived.
import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline titanic_data_df = pd.read_csv('titanic-data.csv')
_____no_output_____
MIT
Section 5/Bivariate Analysis - Titanic.ipynb
kamaleshreddy/Exploratory-Data-Analysis-with-Pandas-and-Python-3.x
1. **Survived:** Outcome of survival (0 = No; 1 = Yes) 2. **Pclass:** Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class) 3. **Name:** Name of passenger 4. **Sex:** Sex of the passenger 5. **Age:** Age of the passenger (Some entries contain NaN) 6. **SibSp:** Number of siblings and spouses of t...
g = sns.countplot(x='Sex', hue='Survived', data=titanic_data_df) g = sns.catplot(x="Embarked", col="Survived", data=titanic_data_df, kind="count", height=4, aspect=.7); g = sns.countplot(x='Embarked', hue='Survived', data=titanic_data_df) g = sns.countplot(x='Embarked', hue='Pclass', d...
_____no_output_____
MIT
Section 5/Bivariate Analysis - Titanic.ipynb
kamaleshreddy/Exploratory-Data-Analysis-with-Pandas-and-Python-3.x
Add a new column - Family size I will be adding a new column 'FamilySize', which is SibSp + Parch + 1
#Function to add new column 'FamilySize' def add_family(df): df['FamilySize'] = df['SibSp'] + df['Parch'] + 1 return df titanic_data_df = add_family(titanic_data_df) titanic_data_df.head(10) g = sns.countplot(x="FamilySize", hue="Survived", data=titanic_data_df); g = sns.countplot(x="FamilySi...
_____no_output_____
MIT
Section 5/Bivariate Analysis - Titanic.ipynb
kamaleshreddy/Exploratory-Data-Analysis-with-Pandas-and-Python-3.x
Add a new column - Age Group
age_df = titanic_data_df[~titanic_data_df['Age'].isnull()] #Make bins and group all passengers into these bins and store those values in a new column 'ageGroup' age_bins = ['0-9', '10-19', '20-29', '30-39', '40-49', '50-59', '60-69', '70-79'] age_df['ageGroup'] = pd.cut(titanic_data_df.Age, range(0, 81, 10), right=Fals...
_____no_output_____
MIT
Section 5/Bivariate Analysis - Titanic.ipynb
kamaleshreddy/Exploratory-Data-Analysis-with-Pandas-and-Python-3.x
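The `pd.cut` call above can be sketched end-to-end on toy ages (same bin edges and labels; `right=False` makes each bin left-closed):

```python
import pandas as pd

ages = pd.Series([5, 15, 25, 72])  # toy ages (assumption)
age_bins = ['0-9', '10-19', '20-29', '30-39', '40-49', '50-59', '60-69', '70-79']

# range(0, 81, 10) gives nine edges, i.e. eight decade bins [0,10), [10,20), ...
groups = pd.cut(ages, range(0, 81, 10), right=False, labels=age_bins)
print(groups.tolist())  # ['0-9', '10-19', '20-29', '70-79']
```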
Formulas: Fitting models using R-style formulas Since version 0.5.0, ``statsmodels`` allows users to fit statistical models using R-style formulas. Internally, ``statsmodels`` uses the [patsy](http://patsy.readthedocs.org/) package to convert formulas and data to the matrices that are used in model fitting. The formul...
import numpy as np # noqa:F401 needed in namespace for patsy import statsmodels.api as sm
_____no_output_____
BSD-3-Clause
examples/notebooks/formulas.ipynb
diego-mazon/statsmodels
Import convention You can import explicitly from statsmodels.formula.api
from statsmodels.formula.api import ols
_____no_output_____
BSD-3-Clause
examples/notebooks/formulas.ipynb
diego-mazon/statsmodels
Alternatively, you can just use the `formula` namespace of the main `statsmodels.api`.
sm.formula.ols
_____no_output_____
BSD-3-Clause
examples/notebooks/formulas.ipynb
diego-mazon/statsmodels
Or you can use the following convention
import statsmodels.formula.api as smf
_____no_output_____
BSD-3-Clause
examples/notebooks/formulas.ipynb
diego-mazon/statsmodels
These names are just a convenient way to get access to each model's `from_formula` classmethod. See, for instance
sm.OLS.from_formula
_____no_output_____
BSD-3-Clause
examples/notebooks/formulas.ipynb
diego-mazon/statsmodels
All of the lower case models accept ``formula`` and ``data`` arguments, whereas upper case ones take ``endog`` and ``exog`` design matrices. ``formula`` accepts a string which describes the model in terms of a ``patsy`` formula. ``data`` takes a [pandas](https://pandas.pydata.org/) data frame or any other data structur...
dta = sm.datasets.get_rdataset("Guerry", "HistData", cache=True) df = dta.data[['Lottery', 'Literacy', 'Wealth', 'Region']].dropna() df.head()
_____no_output_____
BSD-3-Clause
examples/notebooks/formulas.ipynb
diego-mazon/statsmodels
Fit the model:
mod = ols(formula='Lottery ~ Literacy + Wealth + Region', data=df) res = mod.fit() print(res.summary())
_____no_output_____
BSD-3-Clause
examples/notebooks/formulas.ipynb
diego-mazon/statsmodels
Categorical variables Looking at the summary printed above, notice that ``patsy`` determined that elements of *Region* were text strings, so it treated *Region* as a categorical variable. `patsy`'s default is also to include an intercept, so we automatically dropped one of the *Region* categories. If *Region* had been a...
res = ols(formula='Lottery ~ Literacy + Wealth + C(Region)', data=df).fit() print(res.params)
_____no_output_____
BSD-3-Clause
examples/notebooks/formulas.ipynb
diego-mazon/statsmodels
Patsy's more advanced features for categorical variables are discussed in: [Patsy: Contrast Coding Systems for categorical variables](contrasts.html) Operators We have already seen that "~" separates the left-hand side of the model from the right-hand side, and that "+" adds new columns to the design matrix. Removing ...
res = ols(formula='Lottery ~ Literacy + Wealth + C(Region) -1 ', data=df).fit() print(res.params)
_____no_output_____
BSD-3-Clause
examples/notebooks/formulas.ipynb
diego-mazon/statsmodels
Multiplicative interactions ":" adds a new column to the design matrix with the interaction of the other two columns. "*" will also include the individual columns that were multiplied together:
res1 = ols(formula='Lottery ~ Literacy : Wealth - 1', data=df).fit() res2 = ols(formula='Lottery ~ Literacy * Wealth - 1', data=df).fit() print(res1.params, '\n') print(res2.params)
_____no_output_____
BSD-3-Clause
examples/notebooks/formulas.ipynb
diego-mazon/statsmodels
Many other things are possible with operators. Please consult the [patsy docs](https://patsy.readthedocs.org/en/latest/formulas.html) to learn more. Functions You can apply vectorized functions to the variables in your model:
res = smf.ols(formula='Lottery ~ np.log(Literacy)', data=df).fit() print(res.params)
_____no_output_____
BSD-3-Clause
examples/notebooks/formulas.ipynb
diego-mazon/statsmodels
Define a custom function:
def log_plus_1(x): return np.log(x) + 1. res = smf.ols(formula='Lottery ~ log_plus_1(Literacy)', data=df).fit() print(res.params)
_____no_output_____
BSD-3-Clause
examples/notebooks/formulas.ipynb
diego-mazon/statsmodels
Any function that is in the calling namespace is available to the formula. Using formulas with models that do not (yet) support them Even if a given `statsmodels` function does not support formulas, you can still use `patsy`'s formula language to produce design matrices. Those matrices can then be fed to the fitting fu...
import patsy f = 'Lottery ~ Literacy * Wealth' y,X = patsy.dmatrices(f, df, return_type='matrix') print(y[:5]) print(X[:5])
_____no_output_____
BSD-3-Clause
examples/notebooks/formulas.ipynb
diego-mazon/statsmodels
To generate pandas data frames:
f = 'Lottery ~ Literacy * Wealth' y,X = patsy.dmatrices(f, df, return_type='dataframe') print(y[:5]) print(X[:5]) print(sm.OLS(y, X).fit().summary())
_____no_output_____
BSD-3-Clause
examples/notebooks/formulas.ipynb
diego-mazon/statsmodels
CH6EJ3 Principal Component Extraction Procedure We load and/or install the necessary libraries
if(!require(devtools)){ install.packages('devtools',dependencies =c("Depends", "Imports"),repos='http://cran.es.r-project.org') require(devtools) } if(!require(ggbiplot)){ install.packages('ggbiplot',dependencies =c("Depends", "Imports"),repos='http://cran.es.r-project.org') require(ggbiplot) } if(!requ...
Loading required package: devtools Warning message: "package 'devtools' was built under R version 3.3.3"Loading required package: ggbiplot Warning message: "package 'ggbiplot' was built under R version 3.3.3"Loading required package: ggplot2 Warning message: "package 'ggplot2' was built under R version 3.3.3"Loading re...
MIT
05-data-mining/labs/CH6EJ3-Descomposicion-en-valores-singulares.ipynb
quiquegv/NEOLAND-DS2020-datalabs
We load the data from a local directory.
Alumnos_usos_sociales <- read.csv("B2.332_Students.csv", comment.char="#") # R contains the variables we want to work with R <- Alumnos_usos_sociales[,c(31:34)] head(R)
_____no_output_____
MIT
05-data-mining/labs/CH6EJ3-Descomposicion-en-valores-singulares.ipynb
quiquegv/NEOLAND-DS2020-datalabs
Computing the singular value decomposition and the values that characterize it.
# Generate the SVD R.order <- R R.svd <-svd(R.order[,c(1:3)]) # D, U and V R.svd$d head(R.svd$u) R.svd$v
_____no_output_____
MIT
05-data-mining/labs/CH6EJ3-Descomposicion-en-valores-singulares.ipynb
quiquegv/NEOLAND-DS2020-datalabs
Computing the variance accumulated in the first factor
sum(R.svd$d) var=sum(R.svd$d[1]) var var/sum(R.svd$d)
_____no_output_____
MIT
05-data-mining/labs/CH6EJ3-Descomposicion-en-valores-singulares.ipynb
quiquegv/NEOLAND-DS2020-datalabs
Percentage of the variance explained by the generated singular vectors
plot(R.svd$d^2/sum(R.svd$d^2),type="l",xlab="Singular vector",ylab="Explained variance")
_____no_output_____
MIT
05-data-mining/labs/CH6EJ3-Descomposicion-en-valores-singulares.ipynb
quiquegv/NEOLAND-DS2020-datalabs
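The quantity plotted here can be sketched in a few lines: the fraction of variance explained is proportional to the squared singular values d^2 (toy matrix assumed; Python used for the sketch):

```python
import numpy as np

# Toy data matrix (assumption) with singular values 2 and 1.
X = np.array([[2.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
d = np.linalg.svd(X, compute_uv=False)

# Fraction of variance explained by each singular vector.
explained = d**2 / np.sum(d**2)
print(explained)  # [0.8 0.2]
```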
Percentage of the cumulative explained variance
plot(cumsum(R.svd$d^2/sum(R.svd$d^2)),type="l",xlab="Singular vector",ylab="Cumulative explained variance")
_____no_output_____
MIT
05-data-mining/labs/CH6EJ3-Descomposicion-en-valores-singulares.ipynb
quiquegv/NEOLAND-DS2020-datalabs
We create a plot with the first and second vectors, assigning colors. Red = fail, green = pass.
# First we plot all the scores of comp2 and comp1 Y <- R.order[,4] plot(R.svd$u[,1],R.svd$u[,2]) # Assign red to fail and green to pass points(R.svd$u[Y=="No",1],R.svd$u[Y=="No",2],col="red") points(R.svd$u[Y=="Si",1],R.svd$u[Y=="Si",2],col="green")
_____no_output_____
MIT
05-data-mining/labs/CH6EJ3-Descomposicion-en-valores-singulares.ipynb
quiquegv/NEOLAND-DS2020-datalabs
Reconstructing the image of the data from the SVD
R.recon1=R.svd$u[,1]%*%diag(R.svd$d[1],length(1),length(1))%*%t(R.svd$v[,1]) R.recon2=R.svd$u[,2]%*%diag(R.svd$d[2],length(2),length(2))%*%t(R.svd$v[,2]) R.recon3=R.svd$u[,3]%*%diag(R.svd$d[3],length(3),length(3))%*%t(R.svd$v[,3]) par(mfrow=c(2,2)) image(as.matrix(R.order[,c(1:3)]),main="Original Matrix") image(R.recon...
_____no_output_____
MIT
05-data-mining/labs/CH6EJ3-Descomposicion-en-valores-singulares.ipynb
quiquegv/NEOLAND-DS2020-datalabs
Introduction
import ipyscales # Make a default scale, and list its trait values: scale = ipyscales.LinearScale() print(', '.join('%s: %s' % (key, getattr(scale, key)) for key in sorted(scale.keys) if not key.startswith('_')))
clamp: False, domain: (0.0, 1.0), interpolator: interpolate, range: (0.0, 1.0)
BSD-3-Clause
examples/introduction.ipynb
vidartf/jupyter-scales
ToDo - Probably make 10 candidate sentences per letter and pick sentences with a sentence transformer trained on the Next Sentence Prediction task? - Filter out similar sentences based on Levenshtein distance or Sentence-BERT - Remove curse words and person names with pororo or other tools -> either from dataset or inference pro...
# https://github.com/lovit/levenshtein_finder
_____no_output_____
MIT
inference_finetuned_35000-step.ipynb
snoop2head/KoGPT-Joong-2
Distributed XGBoost (CPU) Scaling out on AmlCompute is simple! The code from the previous notebook has been modified and adapted in [src/run.py](src/run.py). In particular, changes include: - use ``dask_mpi`` to initialize Dask on MPI - use ``argparse`` to allow for command line argument inputs - use ``mlflow`` logging Th...
from azureml.core import Workspace ws = Workspace.from_config() ws
_____no_output_____
MIT
python-sdk/experimental/using-xgboost/2.distributed-cpu.ipynb
msftcoderdjw/azureml-examples
Distributed Remotely Simply use ``MpiConfiguration`` with the desired node count. **Important**: see the [``dask-mpi`` documentation](http://mpi.dask.org/en/latest/) for details on how the Dask workers and scheduler are started. By default with the Azure ML MPI configuration, two nodes are used for the scheduler and scr...
nodes = 8 + 2 # number of workers + 2 needed for scheduler and script process cpus_per_node = 4 # number of vCPUs per node; to initialize one thread per CPU print(f"Nodes: {nodes}\nCPUs/node: {cpus_per_node}") arguments = [ "--cpus_per_node", cpus_per_node, "--num_boost_round", 100, "--learning_r...
_____no_output_____
MIT
python-sdk/experimental/using-xgboost/2.distributed-cpu.ipynb
msftcoderdjw/azureml-examples
View Widget Optionally, view the output in the run widget.
from azureml.widgets import RunDetails RunDetails(run).show()
_____no_output_____
MIT
python-sdk/experimental/using-xgboost/2.distributed-cpu.ipynb
msftcoderdjw/azureml-examples
For testing, wait for the run to complete.
run.wait_for_completion(show_output=True)
_____no_output_____
MIT
python-sdk/experimental/using-xgboost/2.distributed-cpu.ipynb
msftcoderdjw/azureml-examples
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/NER_BTC.ipynb) **Detect Entities in Twitter texts** 1. Col...
!wget http://setup.johnsnowlabs.com/colab.sh -O - | bash !pip install --ignore-installed spark-nlp-display import pandas as pd import numpy as np import json from pyspark.ml import Pipeline from pyspark.sql import SparkSession import pyspark.sql.functions as F from sparknlp.annotator import * from sparknlp.base import ...
_____no_output_____
Apache-2.0
tutorials/streamlit_notebooks/NER_BTC.ipynb
Laurasgmt/spark-nlp-workshop
2. Start Spark Session
spark = sparknlp.start()
_____no_output_____
Apache-2.0
tutorials/streamlit_notebooks/NER_BTC.ipynb
Laurasgmt/spark-nlp-workshop
3. Some sample examples
text_list = test_sentences = ["""Wengers big mistakes is not being ruthless enough with bad players.""", """Aguero goal . From being someone previously so reliable , he 's been terrible this year .""", """Paul Scholes approached Alex Ferguson about making a comeback . Ferguson clearl...
_____no_output_____
Apache-2.0
tutorials/streamlit_notebooks/NER_BTC.ipynb
Laurasgmt/spark-nlp-workshop
4. Define Spark NLP pipeline
document = DocumentAssembler()\ .setInputCol("text")\ .setOutputCol("document") tokenizer = Tokenizer()\ .setInputCols("document")\ .setOutputCol("token") tokenClassifier = BertForTokenClassification.pretrained("bert_token_classifier_ner_btc", "en")\ .setInputCols("token", "document")\ .setOutputCol...
bert_token_classifier_ner_btc download started this may take some time. Approximate size to download 385.3 MB [OK!]
Apache-2.0
tutorials/streamlit_notebooks/NER_BTC.ipynb
Laurasgmt/spark-nlp-workshop
5. Run the pipeline
model = pipeline.fit(spark.createDataFrame(pd.DataFrame({'text': ['']}))) result = model.transform(spark.createDataFrame(pd.DataFrame({'text': text_list})))
_____no_output_____
Apache-2.0
tutorials/streamlit_notebooks/NER_BTC.ipynb
Laurasgmt/spark-nlp-workshop
6. Visualize results
result.select(F.explode(F.arrays_zip('document.result', 'ner_chunk.result',"ner_chunk.metadata")).alias("cols")) \ .select( F.expr("cols['1']").alias("chunk"), F.expr("cols['2'].entity").alias('result')).show(truncate=False) from sparknlp_display import NerVisualizer for i in range(len(text_list)): ...
_____no_output_____
Apache-2.0
tutorials/streamlit_notebooks/NER_BTC.ipynb
Laurasgmt/spark-nlp-workshop
Population Segmentation with SageMaker
In this notebook, you'll employ two unsupervised learning algorithms to do **population segmentation**. Population segmentation aims to find natural groupings in population data that reveal some feature-level similarities between different regions in the US. Using **principal comp...
# data managing and display libs import pandas as pd import numpy as np import os import io import matplotlib.pyplot as plt import matplotlib %matplotlib inline # sagemaker libraries import boto3 import sagemaker
_____no_output_____
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
Loading the Data from Amazon S3
This particular dataset is already in an Amazon S3 bucket; you can load the data by pointing to this bucket and getting a data file by name.
> You can interact with S3 using a `boto3` client.
# boto3 client to get S3 data s3_client = boto3.client('s3') bucket_name='aws-ml-blog-sagemaker-census-segmentation'
_____no_output_____
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
Take a look at the contents of this bucket; get a list of objects that are contained within the bucket and print out the names of the objects. You should see that there is one file, 'Census_Data_for_SageMaker.csv'.
# get a list of objects in the bucket obj_list=s3_client.list_objects(Bucket=bucket_name) # print object(s)in S3 bucket files=[] for contents in obj_list['Contents']: files.append(contents['Key']) print(files) # there is one file --> one key file_name=files[0] print(file_name)
Census_Data_for_SageMaker.csv
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
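The key-extraction step depends only on the shape of the response dict. As a minimal sketch, the dict below is a hypothetical, trimmed-down stand-in for what `boto3`'s `list_objects` returns (the real response carries far more metadata per object):

```python
# trimmed-down stand-in for the response dict from s3_client.list_objects()
obj_list = {'Contents': [{'Key': 'Census_Data_for_SageMaker.csv', 'Size': 613}]}

# collect the object names, exactly as the cell above does
files = [contents['Key'] for contents in obj_list['Contents']]
print(files)  # ['Census_Data_for_SageMaker.csv']
```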
Retrieve the data file from the bucket with a call to `client.get_object()`.
# get an S3 object by passing in the bucket and file name data_object = s3_client.get_object(Bucket=bucket_name, Key=file_name) # what info does the object contain? display(data_object) # information is in the "Body" of the object data_body = data_object["Body"].read() print('Data type: ', type(data_body))
Data type: <class 'bytes'>
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
This is a `bytes` datatype, which you can read in using [io.BytesIO(file)](https://docs.python.org/3/library/io.html#binary-i-o).
# read in bytes data data_stream = io.BytesIO(data_body) # create a dataframe counties_df = pd.read_csv(data_stream, header=0, delimiter=",") counties_df.head()
_____no_output_____
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
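To see the `bytes`-to-DataFrame path in isolation, here is a self-contained sketch with an in-memory payload standing in for the S3 object body (the column names and values are made up):

```python
import io
import pandas as pd

# fake bytes payload standing in for data_object["Body"].read()
data_body = b"TotalPop,Men,Women\n100,48,52\n200,99,101\n"

# wrap the bytes in a file-like stream and parse it like any CSV file
data_stream = io.BytesIO(data_body)
counties_df = pd.read_csv(data_stream, header=0, delimiter=",")
print(counties_df.shape)  # (2, 3)
```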
Exploratory Data Analysis (EDA)
Now that you've loaded in the data, it is time to clean it up, explore it, and pre-process it. Data exploration is one of the most important parts of the machine learning workflow because it allows you to notice any initial patterns in data distribution and features that may inform how y...
# print out stats about data counties_df.shape # drop any incomplete rows of data, and create a new df clean_counties_df = counties_df.dropna() clean_counties_df.shape
_____no_output_____
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
EXERCISE: Create a new DataFrame, indexed by 'State-County'
Eventually, you'll want to feed these features into a machine learning model. Machine learning models need numerical data to learn from and not categorical data like strings (State, County). So, you'll reformat this data such that it is indexed by region and y...
# index data by 'State-County' clean_counties_df.index= clean_counties_df.State + '-' + clean_counties_df.County clean_counties_df.head(1) # drop the old State and County columns, and the CensusId column # clean df should be modified or created anew columns_to_drop = ['State', 'County','CensusId'] clean_counties_df = c...
_____no_output_____
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
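The re-indexing step can be tried out on a toy frame; the rows below are hypothetical, but the operations mirror the exercise exactly:

```python
import pandas as pd

# toy stand-in for counties_df (values are made up)
df = pd.DataFrame({'CensusId': [1001, 1003],
                   'State': ['Alabama', 'Alabama'],
                   'County': ['Autauga', 'Baldwin'],
                   'TotalPop': [55221, 195121]})

# index by 'State-County', then drop the now-redundant categorical columns
df.index = df['State'] + '-' + df['County']
df = df.drop(columns=['State', 'County', 'CensusId'])
print(df.index.tolist())    # ['Alabama-Autauga', 'Alabama-Baldwin']
print(df.columns.tolist())  # ['TotalPop']
```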
Now, what features do you have to work with?
# features features_list = clean_counties_df.columns.values print('Features: \n', features_list)
Features: ['TotalPop' 'Men' 'Women' 'Hispanic' 'White' 'Black' 'Native' 'Asian' 'Pacific' 'Citizen' 'Income' 'IncomeErr' 'IncomePerCap' 'IncomePerCapErr' 'Poverty' 'ChildPoverty' 'Professional' 'Service' 'Office' 'Construction' 'Production' 'Drive' 'Carpool' 'Transit' 'Walk' 'OtherTransp' 'WorkAtHome' 'MeanCommut...
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
Visualizing the Data
In general, you can see that features come in a variety of ranges, mostly percentages from 0-100, and counts that are integer values in a large range. Let's visualize the data in some of our feature columns and see what the distribution, over all counties, looks like. The below cell displays **histo...
# transportation (to work) transport_list = ['Drive', 'Carpool', 'Transit', 'Walk', 'OtherTransp'] n_bins = 30 # can decrease to get a wider bin (or vice versa) for column_name in transport_list: ax=plt.subplots(figsize=(6,3)) # get data by column_name and display a histogram ax = plt.hist(clean_counties_d...
_____no_output_____
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
EXERCISE: Create histograms of your own
Commute transportation method is just one category of features. If you take a look at the 34 features, you can see data on profession, race, income, and more. Display a set of histograms that interest you!
# create a list of features that you want to compare or examine my_list = ['Hispanic', 'White', 'Black', 'Native', 'Asian', 'Pacific'] n_bins = 50 # define n_bins # histogram creation code is similar to above for column_name in my_list: ax=plt.subplots(figsize=(6,3)) # get data by column_name and display a his...
_____no_output_____
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
EXERCISE: Normalize the data
You need to standardize the scale of the numerical columns in order to consistently compare the values of different features. You can use a [MinMaxScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) to transform the numerical values so that the...
# scale numerical features into a normalized range, 0-1 from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() # store them in this dataframe counties_scaled = pd.DataFrame(scaler.fit_transform(clean_counties_df.astype(float))) counties_scaled.columns=clean_counties_df.columns counties_scaled.index=cle...
_____no_output_____
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
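MinMaxScaler's default transform is just `(x - min) / (max - min)` per column. A dependency-free sketch of the same arithmetic, on made-up values:

```python
import pandas as pd

# made-up numeric columns on very different scales
df = pd.DataFrame({'Income': [30000.0, 50000.0, 70000.0],
                   'Poverty': [10.0, 20.0, 40.0]})

# column-wise min-max scaling, the same formula MinMaxScaler applies by default
scaled = (df - df.min()) / (df.max() - df.min())
print(scaled['Income'].tolist())  # [0.0, 0.5, 1.0]
```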
---
Data Modeling
Now, the data is ready to be fed into a machine learning model! Each data point has 34 features, which means the data is 34-dimensional. Clustering algorithms rely on finding clusters in n-dimensional feature space. For higher dimensions, an algorithm like k-means has a difficult time figuring out which...
from sagemaker import get_execution_role session = sagemaker.Session() # store the current SageMaker session # get IAM role role = get_execution_role() print(role) # get default bucket bucket_name = session.default_bucket() print(bucket_name) print()
sagemaker-eu-central-1-730357687813
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
Define a PCA Model
To create a PCA model, I'll use the built-in SageMaker resource. A SageMaker estimator requires a number of parameters to be specified; these define the type of training instance to use and the model hyperparameters. A PCA model requires the following constructor arguments:
* role: The IAM role, which...
# define location to store model artifacts prefix = 'counties' output_path='s3://{}/{}/'.format(bucket_name, prefix) print('Training artifacts will be uploaded to: {}'.format(output_path)) # define a PCA model from sagemaker import PCA # this is current features - 1 # you'll select only a portion of these to use, la...
_____no_output_____
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
Convert data into a RecordSet format
Next, prepare the data for a built-in model by converting the DataFrame to a numpy array of float values. The *record_set* function in the SageMaker PCA model converts a numpy array into a **RecordSet** format that is the required format for the training input data. This is a require...
# convert df to np array train_data_np = counties_scaled.values.astype('float32') # convert to RecordSet format formatted_train_data = pca_SM.record_set(train_data_np)
_____no_output_____
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
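The only transformation applied before `record_set` is the cast to a `float32` numpy array, and that half can be checked locally (the values below are arbitrary stand-ins for the scaled data):

```python
import numpy as np
import pandas as pd

# arbitrary scaled values standing in for counties_scaled
counties_scaled = pd.DataFrame({'a': [0.0, 0.5], 'b': [1.0, 0.25]})

# built-in SageMaker algorithms expect float32 numpy input
train_data_np = counties_scaled.values.astype('float32')
print(train_data_np.dtype, train_data_np.shape)  # float32 (2, 2)
```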
Train the model
Call the fit function on the PCA model, passing in our formatted training data. This spins up a training instance to perform the training job. Note that it takes the longest to launch the specified training instance; the fitting itself doesn't take much time.
%%time # train the PCA model on the formatted data pca_SM.fit(formatted_train_data)
2020-05-23 05:40:14 Starting - Starting the training job... 2020-05-23 05:40:16 Starting - Launching requested ML instances......... 2020-05-23 05:41:46 Starting - Preparing the instances for training...... 2020-05-23 05:43:02 Downloading - Downloading input data 2020-05-23 05:43:02 Training - Downloading the training ...
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
Accessing the PCA Model Attributes
After the model is trained, we can access the underlying model parameters.
Unzip the Model Details
Now that the training job is complete, you can find the job under **Jobs** in the **Training** subsection in the Amazon SageMaker console. You can find the job name listed in the traini...
# Get the name of the training job; it's suggested that you copy-paste # from the notebook or from a specific job in the AWS console training_job_name='pca-2020-05-22-09-14-18-586' # where the model is saved, by default model_key = os.path.join(prefix, training_job_name, 'output/model.tar.gz') print(model_key) # dow...
counties/pca-2020-05-22-09-14-18-586/output/model.tar.gz
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
MXNet Array
Many of the Amazon SageMaker algorithms use MXNet for computational speed, including PCA, and so the model artifacts are stored as an array. After the model is unzipped and decompressed, we can load the array using MXNet. You can take a look at the MXNet [documentation, here](https://aws.amazon.com/mxnet/).
import mxnet as mx # loading the unzipped artifacts pca_model_params = mx.ndarray.load('model_algo-1') # what are the params print(pca_model_params)
{'s': [1.7896362e-02 3.0864021e-02 3.2130770e-02 3.5486195e-02 9.4831578e-02 1.2699370e-01 4.0288666e-01 1.4084760e+00 1.5100485e+00 1.5957943e+00 1.7783760e+00 2.1662524e+00 2.2966361e+00 2.3856051e+00 2.6954880e+00 2.8067985e+00 3.0175958e+00 3.3952675e+00 3.5731301e+00 3.6966958e+00 4.1890211e+00 4.3457499e+00 ...
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
PCA Model Attributes
Three types of model attributes are contained within the PCA model.
* **mean**: The mean that was subtracted from a component in order to center it.
* **v**: The makeup of the principal components; (same as ‘components_’ in an sklearn PCA model).
* **s**: The singular values of the components for the ...
# get selected params s=pd.DataFrame(pca_model_params['s'].asnumpy()) v=pd.DataFrame(pca_model_params['v'].asnumpy())
_____no_output_____
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
Data Variance
Our current PCA model creates 33 principal components, but when we create new dimensionality-reduced training data, we'll only select a few top components to use. To decide how many top components to include, it's helpful to look at how much **data variance** the components capture. For our original, h...
# looking at top 5 components n_principal_components = 5 start_idx = N_COMPONENTS - n_principal_components # 33-n # print a selection of s print(s.iloc[start_idx:, :])
0 28 7.991313 29 10.180052 30 11.718245 31 13.035975 32 19.592180
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
EXERCISE: Calculate the explained variance
In creating new training data, you'll want to choose the top n principal components that account for at least 80% of the data variance. Complete a function, `explained_variance`, that takes in the entire array `s` and a number of top principal components to consider. Then return the a...
# Calculate the explained variance for the top n principal components # you may assume you have access to the global var N_COMPONENTS def explained_variance(s, n_top_components): '''Calculates the approx. data variance that n_top_components captures. :param s: A dataframe of singular values for top component...
_____no_output_____
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
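One way to complete the exercise: since the singular values are stored smallest-to-largest, the top n components sit at the end of `s`, and the captured variance fraction is the sum of their squared singular values over the sum of all squared singular values. This is a sketch under that assumption, run on made-up singular values rather than the trained model's:

```python
import numpy as np
import pandas as pd

N_COMPONENTS = 33  # matches the PCA model defined earlier

def explained_variance(s, n_top_components):
    '''Approximate fraction of data variance captured by the top n components.
       s: one-column DataFrame of singular values, sorted smallest-to-largest.'''
    start_idx = N_COMPONENTS - n_top_components  # top components are at the end
    exp_variance = np.square(s.iloc[start_idx:, :]).sum() / np.square(s).sum()
    return exp_variance.values[0]

# made-up ascending singular values, shaped like the model's s array
s = pd.DataFrame(np.linspace(0.1, 20.0, N_COMPONENTS))
print(explained_variance(s, N_COMPONENTS))  # all components capture everything: 1.0
```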
Test Cell
Test out your own code by seeing how it responds to different inputs; does it return a reasonable value for the single top component? What about for the top 5 components?
# test cell n_top_components = 7 # select a value for the number of top components # calculate the explained variance exp_variance = explained_variance(s, n_top_components) print('Explained variance: ', exp_variance)
Explained variance: 0.80167246
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
As an example, you should see that the top principal component accounts for about 32% of our data variance! Next, you may be wondering what makes up this (and other components); what linear combination of features makes these components so influential in describing the spread of our data? Below, let's take a look at our ...
# features features_list = counties_scaled.columns.values print('Features: \n', features_list)
Features: ['TotalPop' 'Men' 'Women' 'Hispanic' 'White' 'Black' 'Native' 'Asian' 'Pacific' 'Citizen' 'Income' 'IncomeErr' 'IncomePerCap' 'IncomePerCapErr' 'Poverty' 'ChildPoverty' 'Professional' 'Service' 'Office' 'Construction' 'Production' 'Drive' 'Carpool' 'Transit' 'Walk' 'OtherTransp' 'WorkAtHome' 'MeanCommut...
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
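The question of which features make up a component can be answered by ranking one column of `v` by absolute weight. A minimal sketch, using a hypothetical four-feature component rather than the trained model's 34-feature `v` matrix:

```python
import pandas as pd

# hypothetical weights standing in for one column of the model's v matrix
features = ['TotalPop', 'Poverty', 'Income', 'Drive']
component = pd.Series([0.1, 0.7, -0.6, 0.2], index=features)

# rank features by how strongly they contribute (absolute weight, sign kept)
ranked = component.reindex(component.abs().sort_values(ascending=False).index)
print(ranked.index.tolist())  # ['Poverty', 'Income', 'Drive', 'TotalPop']
```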