| markdown | code | path | repo_name | license |
|---|---|---|---|---|
Draw a figure to illustrate the within group distributions and the total distribution. | sbn.set_palette("deep")
bins = np.linspace(-3, 3, 10)
fig, axes = plt.subplots(1, 5, figsize=(20,5), sharex=True, sharey=True)
for i, sample in enumerate(samples):
axes[i].hist(sample, bins=bins, histtype='stepfilled',
linewidth=1.5, label='sample {}'.format(i+1))
axes[i].legend()
axes[-1].hist(allobs, bins=bins, histtype='stepfilled', linewidth=2, label='all data')
axes[-1].legend()
axes[-1].set_title("All data combined", fontsize=20)
axes[0].set_ylabel("Frequency", fontsize=20)
axes[2].set_xlabel("X", fontsize=20)
fig.tight_layout()
pass | 2016-04-04-ANOVA-as-sumofsquares-decomposition.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
Calculate the sums of squares | SSbtw = sum_squares_between(samples)
SSwin = sum_squares_within(samples)
SStot = sum_squares_total(samples)
print("SS between:", SSbtw)
print("SS within:", SSwin)
print("SS total:", SStot) | 2016-04-04-ANOVA-as-sumofsquares-decomposition.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
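The helper functions called above are not defined in this excerpt; a minimal sketch consistent with the standard sum-of-squares decomposition (the names and signatures are assumed from the calls) might look like:

```python
import numpy as np

def sum_squares_between(samples):
    # SS_between: squared deviations of group means from the grand mean,
    # weighted by group size
    grand_mean = np.concatenate(samples).mean()
    return sum(len(s) * (np.mean(s) - grand_mean) ** 2 for s in samples)

def sum_squares_within(samples):
    # SS_within: squared deviations of observations from their own group mean
    return sum(((np.asarray(s) - np.mean(s)) ** 2).sum() for s in samples)

def sum_squares_total(samples):
    # SS_total: squared deviations of all observations from the grand mean
    allobs = np.concatenate(samples)
    return ((allobs - allobs.mean()) ** 2).sum()

# The decomposition SS_total = SS_between + SS_within should hold:
samples = [np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.0])]
print(np.isclose(sum_squares_total(samples),
                 sum_squares_between(samples) + sum_squares_within(samples)))
```

The final check prints `True`, confirming the decomposition on a toy data set.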
Generate ANOVA table: | ANOVA_oneway(samples) | 2016-04-04-ANOVA-as-sumofsquares-decomposition.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
Simulating ANOVA under $H_A$ | groupmeans = [0, 0, -1, 1]
k = len(groupmeans) # number of groups
groupstds = [1] * k # standard deviations equal across groups
n = 25 # sample size
# generate samples
samples = [stats.norm.rvs(loc=i, scale=j, size = n) for (i,j) in zip(groupmeans,groupstds)]
allobs = np.concatenate(samples)
sbn.set_palette("deep")
bins = np.linspace(-3, 3, 10)
fig, axes = plt.subplots(1, 5, figsize=(20,5), sharex=True, sharey=True)
for i, sample in enumerate(samples):
axes[i].hist(sample, bins=bins, histtype='stepfilled',
linewidth=1.5, label='sample {}'.format(i+1))
axes[i].legend()
axes[-1].hist(allobs, bins=bins, histtype='stepfilled', linewidth=2, label='all data')
axes[-1].legend()
axes[-1].set_title("All data combined", fontsize=20)
axes[0].set_ylabel("Frequency", fontsize=20)
axes[2].set_xlabel("X", fontsize=20)
fig.tight_layout()
pass
SSbtw = sum_squares_between(samples)
SSwin = sum_squares_within(samples)
SStot = sum_squares_total(samples)
print("SS between:", SSbtw)
print("SS within:", SSwin)
print("SS total:", SStot)
tbl = ANOVA_oneway(samples)
tbl | 2016-04-04-ANOVA-as-sumofsquares-decomposition.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
Calculate the value of $R^2$ for our ANOVA model: | ANOVA_R2(tbl) | 2016-04-04-ANOVA-as-sumofsquares-decomposition.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
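`ANOVA_R2` itself is not shown in this excerpt; under the usual definition, R² for a one-way ANOVA is the between-group sum of squares divided by the total sum of squares. A minimal sketch (function name and inputs assumed):

```python
def anova_r2(ss_between, ss_total):
    # R^2 for a one-way ANOVA: fraction of total variation explained
    # by group membership
    return ss_between / ss_total

print(anova_r2(6.0, 16.0))  # → 0.375
```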
scipy.stats has an f_oneway function for calculating the F statistic and associated p-value for a one-way ANOVA. Let's compare our result above to the implementation in scipy.stats. | # f_oneway expects the samples to be passed in as individual arguments
# this *samples notation "unpacks" the list of samples, treating each as
# an argument to the function.
# see: https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists
stats.f_oneway(*samples) | 2016-04-04-ANOVA-as-sumofsquares-decomposition.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
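The `*samples` unpacking mentioned in the comment can be illustrated standalone (toy function, not part of the notebook):

```python
def f(a, b, c):
    return a + b + c

args = [1, 2, 3]
# f(*args) is equivalent to f(1, 2, 3)
print(f(*args))  # → 6
```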
Anderson's Iris Data revisited | irisurl = "https://raw.githubusercontent.com/Bio204-class/bio204-datasets/master/iris.csv"
iris = pd.read_csv(irisurl)
iris.head()
sbn.distplot(iris["Sepal.Length"])
pass
sbn.violinplot(x="Species", y="Sepal.Length", data=iris)
pass
setosa = iris[iris.Species =='setosa']
versicolor = iris[iris.Species=='versicolor']
virginica = iris[iris.Species == 'virginica']
ANOVA_oneway([setosa['Sepal.Length'], versicolor['Sepal.Length'],
virginica['Sepal.Length']]) | 2016-04-04-ANOVA-as-sumofsquares-decomposition.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
Unit Test
The following unit test is expected to fail until you solve the challenge. | # %load test_reverse_string.py
from nose.tools import assert_equal
class TestReverse(object):
def test_reverse(self):
assert_equal(list_of_chars(None), None)
assert_equal(list_of_chars(['']), [''])
assert_equal(list_of_chars(
['f', 'o', 'o', ' ', 'b', 'a', 'r']),
['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse')
def main():
test = TestReverse()
test.test_reverse()
if __name__ == '__main__':
main() | interactive-coding-challenges/arrays_strings/reverse_string/reverse_string_challenge-Copy1.ipynb | ThunderShiviah/code_guild | mit |
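The solution itself is left to the reader; one in-place reversal that would pass the assertions above (a sketch, not the official answer) is:

```python
def list_of_chars(chars):
    # Reverse a list of characters in place; pass None through unchanged
    if chars is None:
        return None
    i, j = 0, len(chars) - 1
    while i < j:
        chars[i], chars[j] = chars[j], chars[i]
        i += 1
        j -= 1
    return chars

print(list_of_chars(['f', 'o', 'o']))  # → ['o', 'o', 'f']
```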
Now for a bit of exploratory data analysis so we can get to know our data: | display(train_data[:10])
display(test_data[:10])
train_data.describe()
test_data.describe()
train_data.head()
train_data.tail()
train_data.sample(5)
train_data.keys()
test_data.keys() | kaggle_restaurant_revenue_prediction/kaggle_restaurant_revenue_prediction.ipynb | bgroveben/python3_machine_learning_projects | mit |
Plot the data | feature_columns = train_data[['P1', 'P2', 'P3', 'P4',
'P5', 'P6', 'P7', 'P8', 'P9', 'P10', 'P11', 'P12', 'P13', 'P14', 'P15',
'P16', 'P17', 'P18', 'P19', 'P20', 'P21', 'P22', 'P23', 'P24', 'P25',
'P26', 'P27', 'P28', 'P29', 'P30', 'P31', 'P32', 'P33', 'P34', 'P35',
'P36', 'P37']]
feature_columns.plot.box(figsize=(20, 20)) | kaggle_restaurant_revenue_prediction/kaggle_restaurant_revenue_prediction.ipynb | bgroveben/python3_machine_learning_projects | mit |
I'm sure there are more creative and informative ways to plot the data, but for now it's time to move on.
Massage, munge, preprocess, and visualize the data | # Cribbed from https://www.kaggle.com/ani310/restaurant-revenue-prediction/restaurant-revenue
# Format the data so that dates are easier to work with.
# Create a column that contains data about the number of days the restaurant has been open.
# Remove the column that has the restaurant's opening date.
train_data['Open Date'] = pd.to_datetime(train_data['Open Date'], format='%m/%d/%Y')
test_data['Open Date'] = pd.to_datetime(test_data['Open Date'], format='%m/%d/%Y')
train_data['OpenDays'] = ""
test_data['OpenDays'] = ""
date_last_train = pd.DataFrame({'Date':np.repeat(['01/01/2015'], [len(train_data)])})
date_last_test = pd.DataFrame({'Date':np.repeat(['01/01/2015'], [len(test_data)])})
date_last_train['Date'] = pd.to_datetime(date_last_train['Date'], format='%m/%d/%Y')
date_last_test['Date'] = pd.to_datetime(date_last_test['Date'], format='%m/%d/%Y')
train_data['OpenDays'] = date_last_train['Date'] - train_data['Open Date']
test_data['OpenDays'] = date_last_test['Date'] - test_data['Open Date']
train_data['OpenDays'] = train_data['OpenDays'].astype('timedelta64[D]').astype(int)
test_data['OpenDays'] = test_data['OpenDays'].astype('timedelta64[D]').astype(int)
train_data = train_data.drop('Open Date', axis=1)
test_data = test_data.drop('Open Date', axis=1)
# Compare the revenue generated by the restaurants in Big Cities vs Other:
city_perc = train_data[["City Group", "revenue"]].groupby(['City Group'], as_index=False).mean()
sns.barplot(x='City Group', y='revenue', data=city_perc)
plt.title("Revenue by city size")
# Convert data from 'City Group' and create columns of indicator variables for 'Big Cities' or 'Other':
city_group_dummy = pd.get_dummies(train_data['City Group'])
train_data = train_data.join(city_group_dummy)
city_group_dummy_test = pd.get_dummies(test_data['City Group'])
test_data = test_data.join(city_group_dummy_test)
train_data = train_data.drop('City Group', axis=1)
test_data = test_data.drop('City Group', axis=1)
# Create scatterplot showing how long a restaurant has been open impacts revenue.
# This will also show any outliers.
plt.scatter(train_data['OpenDays'], train_data['revenue'])
plt.xlabel("Days Open")
plt.ylabel("Revenue")
plt.title("Restaurant revenue by location age") | kaggle_restaurant_revenue_prediction/kaggle_restaurant_revenue_prediction.ipynb | bgroveben/python3_machine_learning_projects | mit |
Find the relevant features | from sklearn.feature_selection import SelectFromModel
# from sklearn.linear_model import LassoCV
from sklearn.ensemble import ExtraTreesClassifier
X_train = train_data.iloc[:, 2:]
y = train_data['revenue']
print("X_train.shape: {}".format(X_train.shape))
clf = ExtraTreesClassifier()
clf = clf.fit(X_train, y)
print("clf.feature_importances_: \n{}".format(clf.feature_importances_))
model = SelectFromModel(clf, prefit=True)
print(model)
X_train_new = model.transform(X_train)
print("X_train_new.shape: {}".format(X_train_new.shape))
X_train_new = pd.DataFrame(X_train_new)
print(X_train_new[:5]) | kaggle_restaurant_revenue_prediction/kaggle_restaurant_revenue_prediction.ipynb | bgroveben/python3_machine_learning_projects | mit |
Try various machine learning algorithms
Let's try on some algorithms and see how they fit and predict:
sklearn.ensemble.RandomForestRegressor
Also cribbed from Kaggle. | from sklearn.ensemble import RandomForestRegressor
# Tweak seaborn visualizations and adapt to Jupyter notebooks:
sns.set_context("notebook", font_scale=1.1)
sns.set_style("ticks")
# Make dataframes for train and test:
X_train = pd.DataFrame({'OpenDaysLog':train_data['OpenDays'].apply(np.log),
'Big Cities':train_data['Big Cities'], 'Other':train_data['Other'],
'P2':train_data['P2'], 'P8':train_data['P8'], 'P22':train_data['P22'],
'P24':train_data['P24'], 'P28':train_data['P28'], 'P26':train_data['P26']})
y_train = train_data['revenue'].apply(np.log)
X_test = pd.DataFrame({'OpenDaysLog':test_data['OpenDays'].apply(np.log),
'Big Cities':test_data['Big Cities'], 'Other':test_data['Other'],
'P2':test_data['P2'], 'P8':test_data['P8'], 'P22':test_data['P22'],
'P24':test_data['P24'], 'P28':test_data['P28'], 'P26':test_data['P26']})
# Time to build the models and make some predictions:
from sklearn import linear_model
cls = RandomForestRegressor(n_estimators=150)
cls.fit(X_train, y_train)
pred = cls.predict(X_test)
pred = np.exp(pred)
pred
cls.score(X_train, y_train) | kaggle_restaurant_revenue_prediction/kaggle_restaurant_revenue_prediction.ipynb | bgroveben/python3_machine_learning_projects | mit |
How to format the data for the Kaggle contest submission based on the sampleSubmission.csv file: | test_data = pd.read_csv("test.csv")
submission = pd.DataFrame({
"Id": test_data["Id"],
"Prediction": pred
})
# submission.to_csv('RandomForestSimple.csv', header=True, index=False)
| kaggle_restaurant_revenue_prediction/kaggle_restaurant_revenue_prediction.ipynb | bgroveben/python3_machine_learning_projects | mit |
sklearn.neighbors.KNeighborsRegressor | from sklearn.neighbors import KNeighborsRegressor
# Use dataframes from sklearn.ensemble.RandomForestRegressor example above.
knn_cls = KNeighborsRegressor(n_neighbors=2)
knn_cls.fit(X_train, y_train)
knn_pred = knn_cls.predict(X_test)
knn_pred = np.exp(knn_pred)
knn_cls.score(X_train, y_train) | kaggle_restaurant_revenue_prediction/kaggle_restaurant_revenue_prediction.ipynb | bgroveben/python3_machine_learning_projects | mit |
sklearn.linear_model.LinearRegression | from sklearn.linear_model import LinearRegression
# Use dataframes from sklearn.ensemble.RandomForestRegressor example above.
lr_cls = LinearRegression()
lr_cls.fit(X_train, y_train)
lr_pred = lr_cls.predict(X_test)
lr_pred = np.exp(lr_pred)
lr_cls.score(X_train, y_train) | kaggle_restaurant_revenue_prediction/kaggle_restaurant_revenue_prediction.ipynb | bgroveben/python3_machine_learning_projects | mit |
sklearn.neural_network.MLPRegressor | from sklearn.neural_network import MLPRegressor
mlp_cls = MLPRegressor(solver='lbfgs')
mlp_cls.fit(X_train, y_train)
mlp_pred = mlp_cls.predict(X_test)
mlp_pred = np.exp(mlp_pred)
mlp_cls.score(X_train, y_train) | kaggle_restaurant_revenue_prediction/kaggle_restaurant_revenue_prediction.ipynb | bgroveben/python3_machine_learning_projects | mit |
Restaurant Revenue Prediction Kaggle solution
This next exercise is from a blog post (linked above) by Bikash Agrawal. | import datetime
%pylab inline
from sklearn.model_selection import LeaveOneOut
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.metrics import mean_squared_error, mean_absolute_error
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import ExtraTreesClassifier
# Regressors considered:
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
# Regressor chosen by the author for final submission:
from sklearn.linear_model import Ridge
# Kaggle added ~311.5 "fake" data points to the test set for each real data point.
# Dividing by this number gives more accurate counts of the "real" data in the test set.
FAKE_DATA_RATIO = 311.5
# Set a random seed:
SEED = 0
# Read in the data provided by Kaggle:
train = pd.read_csv('train.csv', index_col=0, parse_dates=[1])
test = pd.read_csv('test.csv', index_col=0, parse_dates=[1])
print("Training data dimensions: \n{}".format(train.shape))
print("Test data dimensions: \n{}".format(test.shape)) | kaggle_restaurant_revenue_prediction/kaggle_restaurant_revenue_prediction.ipynb | bgroveben/python3_machine_learning_projects | mit |
Concatenate the train and test data together into a single dataframe to pre-process and featurize both consistently: | df = pd.concat((test, train), ignore_index=True)
df.describe()
# Convert date strings to "days open" numerical value:
df["Open Date"] = df["Open Date"].apply(pd.to_datetime)
last_date = df["Open Date"].max()
# Create a datetime delta object:
df["Open Date"] = last_date - df["Open Date"]
# Convert the delta object to an int:
df["Open Date"] = df["Open Date"].dt.days + 1
# Scale "days since opening" so that the marginal impact decreases over time.
# This and the similar log transform of City Count below are the modifications
# that were not in the official competition submission.
df["Log Days Opened"] = df["Open Date"].apply(np.log)
df = df.drop(["Open Date"], axis=1)
# Resize plots:
pylab.rcParams['figure.figsize'] = (8, 6)
df[["Log Days Opened", "revenue"]].plot(x="Log Days Opened", y="revenue",
kind='scatter', title="Log (Days Opened) vs Revenue")
# There is a certain set of columns that are either all zero or all non-zero.
# We have added a feature to mark this -- the 'zeros' feature will be 17 for
# these rows and 0 or 1 for the rows which are rarely or never zero.
# Here are the features with the notable zero behavior:
zero_cols = ['P14', 'P15', 'P16', 'P17', 'P18', 'P24', 'P25', 'P26', 'P27',
'P30', 'P31', 'P32', 'P33', 'P34', 'P35', 'P36', 'P37']
# We make a feature that holds this count of zero columns in the above list:
df['zeros'] = (df[zero_cols] == 0).sum(1)
pylab.rcParams['figure.figsize'] = (20, 8)
fig, axs = plt.subplots(1,2)
fig.suptitle("Distribution of new Zeros features:", fontsize=18)
# There is only one row with a zero count between 0 and 17 in the training set:
df['zeros'].loc[pd.notnull(df.revenue)].value_counts().plot(
title="Training Set", kind='bar', ax=axs[0])
# In the test set, however, there are many rows with an intermediate count of zeros.
# This is probably an artifact of how the fake test data was generated, and might
# indicate that conditional dependence between columns was not preserved.
df['zeros'].loc[pd.isnull(df.revenue)].value_counts().plot(
title="Test Set", kind='bar', ax=axs[1], color='red')
# Here we convert two categorical variables, "Restaurant Type", and "City
# Group (Size)" to dummy variables:
pylab.rcParams['figure.figsize'] = (6, 4)
# The two categories of City Group both appear very frequently:
train["City Group"].value_counts().plot(
title="City Group Distribution in the Training Set", kind='bar')
# Two of the four Restaurant Types (DT and MB) are very rare:
train["Type"].value_counts().plot(
title="Restaurant Type Distribution in the Training Set", kind='bar')
(test["Type"].value_counts() / FAKE_DATA_RATIO).plot(
title="Approximate Restaurant Type Distribution in True Test Set",
kind='bar', color='red')
df = df.join(pd.get_dummies(df['City Group'], prefix="CG"))
df = df.join(pd.get_dummies(df['Type'], prefix="T"))
# Since only n-1 columns are needed to binarize n categories, drop one
# of the new columns and drop the original columns.
# In addition, drop the rare restaurant types.
df = df.drop(["City Group", "Type", "CG_Other", "T_MB", "T_DT"], axis=1)
print(df.shape)
df.describe(include='all')
# Replace city names with the count of their frequency in the training +
# estimated frequency in the test set.
city_counts = (test["City"].value_counts() /
FAKE_DATA_RATIO).add(train["City"].value_counts(), fill_value=0)
df["City"] = df["City"].replace(city_counts)
print("Some example estimated counts of restaurants per city: \n{}".format(
city_counts.head()))
# Take the natural logarithm of city count so that the marginal effect decreases:
df["Log City Count"] = df["City"].apply(np.log)
df = df.drop(["City"], axis=1)
# The last vertical spread of points below are restaurants in Istanbul.
pylab.rcParams['figure.figsize'] = (8, 6)
df[["Log City Count", "revenue"]].plot(x="Log City Count", y="revenue",
kind='scatter', title="Log City Count vs Revenue") | kaggle_restaurant_revenue_prediction/kaggle_restaurant_revenue_prediction.ipynb | bgroveben/python3_machine_learning_projects | mit |
Now is the time for us to impute values for the rare restaurant types (DT and MB).
Instead of trying to predict with values that appear only 1 or 0 times in the training set, we will replace them with one of the other commonly appearing categories by fitting a model that predicts which common category they "should" be. | # tofit are the rows in the training set that belong to one of the common restaurant types:
tofit = df.loc[((df.T_FC==1) | (df.T_IL==1)) & (pd.notnull(df.revenue))]
# tofill are rows in either train or test that belong to one of the rare types:
tofill = df.loc[((df.T_FC==0) & (df.T_IL==0))]
print("Type training set shape: \n{}".format(tofit.shape))
print("Data to impute: \n{}".format(tofill.shape))
# Restaurants with type FC are labeled 1, those with type IL are labeled 0.
y = tofit.T_FC
# Drop the label columns and revenue (which is not in the test set):
X = tofit.drop(["T_FC", "T_IL", "revenue"], axis=1) | kaggle_restaurant_revenue_prediction/kaggle_restaurant_revenue_prediction.ipynb | bgroveben/python3_machine_learning_projects | mit |
Here we can define and train a model to impute restaurant type.
The grid below just has a range of values that the author has found to work well with random forest type models (of which ExtraTrees is one). | model_grid = {'max_depth': [None, 8], 'min_samples_split': [4,9,16],
'min_samples_leaf': [1,4], 'max_features': ['sqrt', 0.5, None]}
type_model = ExtraTreesClassifier(n_estimators=25, random_state=SEED)
grid = RandomizedSearchCV(type_model, model_grid, n_iter=10, cv=5, scoring="roc_auc")
grid.fit(X, y)
print("Best parameters for Type Model: \n{}".format(grid.best_params_))
type_model.set_params(**grid.best_params_)
type_model.fit(X, y)
imputations = type_model.predict(tofill.drop(["T_FC", "T_IL", "revenue"], axis=1))
df.loc[(df.T_FC==0) & (df.T_IL==0), "T_FC"] = imputations
df = df.drop(["T_IL"], axis=1)
df[:7]
print("% labeled FC in the training set: \n{}".format(df.T_FC.mean()))
print("% of imputed values labeled FC: \n{}".format(np.mean(imputations))) | kaggle_restaurant_revenue_prediction/kaggle_restaurant_revenue_prediction.ipynb | bgroveben/python3_machine_learning_projects | mit |
Now we can binarize the "P" columns with dummy variables: | print("Pre-binarizing columns: {}".format(len(df.columns)))
for col in df.columns:
if col[0] == 'P':
print(col, len(df[col].unique()), "Unique Values")
df = df.join(pd.get_dummies(df[col], prefix=col))
df = df.drop([col, df.columns[-1]], axis=1)
print("Post-binarizing columns: {}".format(len(df.columns))) | kaggle_restaurant_revenue_prediction/kaggle_restaurant_revenue_prediction.ipynb | bgroveben/python3_machine_learning_projects | mit |
To finish up our data preprocessing, we need to scale all input features to between 0 and 1 (this is especially important for KNN or SVM (SVR) models).
However, we don't want to scale the output, so we'll temporarily 'drop' it. | min_max_scaler = MinMaxScaler()
rev = df.revenue
df = df.drop(['revenue'], axis=1)
df = pd.DataFrame(data=min_max_scaler.fit_transform(df), columns=df.columns, index=df.index)
df = df.join(rev)
# Now that preprocessing is finished, let's have a look at the data before modeling with it:
df.describe()
# Recover the original train and test rows based on revenue (which is null for test rows)
train = df.loc[pd.notnull(df.revenue)]
test = df.loc[pd.isnull(df.revenue)].drop(['revenue'], axis=1)
# Scale revenue by sqrt.
# The reason is to decrease the influence of the few very large revenue values.
y = train.revenue.apply(np.sqrt)
X = train.drop(["revenue"], axis=1) | kaggle_restaurant_revenue_prediction/kaggle_restaurant_revenue_prediction.ipynb | bgroveben/python3_machine_learning_projects | mit |
Now we can define and train a Ridge Regression model.
The author tested others from the sklearn library, including SVR, RandomForest, K-nearest Neighbors, but found that Ridge consistently gave the strongest leaderboard results.
One takeaway -- when the training data is small, simplest is often best. | model_grid = [{'normalize': [True, False], 'alpha': np.logspace(0,10)}]
model = Ridge()
# Use a grid search and leave-one-out CV on the train set to find the best regularization parameter to use.
grid = GridSearchCV(model, model_grid, scoring='neg_mean_squared_error')
grid.fit(X, y)
print("Best parameters set found on development set: \n{}".format(
grid.best_params_))
# Retrain model on the full training set using the best parameters found in the last step:
model.set_params(**grid.best_params_)
model.fit(X, y)
# Predict on the test set using the trained model:
submission = pd.DataFrame(columns=['Prediction'], index=test.index,
data=model.predict(test))
# Convert back to revenue from sqrt(revenue):
submission.Prediction = submission.Prediction.apply(np.square)
submission.Prediction[:7] | kaggle_restaurant_revenue_prediction/kaggle_restaurant_revenue_prediction.ipynb | bgroveben/python3_machine_learning_projects | mit |
So, now we're ready for our final submission to Kaggle: | # Add required column name for Kaggle's submission parser:
submission.index.name='Id'
# Write out the submission:
# submission.to_csv("TFI_Ridge.csv")
# Quick sanity check on the submission:
submission.describe().astype(int)
# Revenue from training set for comparison:
train[['revenue']].describe().astype(int) | kaggle_restaurant_revenue_prediction/kaggle_restaurant_revenue_prediction.ipynb | bgroveben/python3_machine_learning_projects | mit |
One last quick comparison.
Note the x-axis scale change: the predictions are more conservative and tend to be closer to the mean than the real revenues.
This is pretty standard behavior when using RMSE -- there are big penalties for being very wrong, so the model will tend towards more moderate predictions. | train[['revenue']].plot(kind='kde', title="Training Set Revenue Distribution")
submission.columns = ["predicted revenue"]
submission.plot(kind='kde', title="Prediction Revenue Distribution", color='red') | kaggle_restaurant_revenue_prediction/kaggle_restaurant_revenue_prediction.ipynb | bgroveben/python3_machine_learning_projects | mit |
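A toy numerical illustration of this point: under mean squared error, the best constant prediction for a skewed set of outcomes is their mean, which sits between the typical values and the outlier (all numbers here are made up):

```python
import numpy as np

outcomes = np.array([1.0, 2.0, 9.0])       # two typical values and one outlier
candidates = np.linspace(0.0, 10.0, 1001)  # candidate constant predictions
mse = [((outcomes - c) ** 2).mean() for c in candidates]
best = candidates[int(np.argmin(mse))]
print(best)  # minimized at the mean (4.0), a "moderate" prediction
```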
However, NumPy supports vectorized operations, so the computation reduces to a single addition, as shown below. The code is exactly the same as the linear-algebra operation written with vector notation above. | %%time
z = x + y
z | Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/3) NumPy 연산.ipynb | junhwanjang/DataSchool | mit |
The vectorized operation is also much faster.
Element-Wise Operations
NumPy's vectorized operations are element-wise: elements at the same positions are operated on together. If we regard a NumPy ndarray as a vector or matrix in linear algebra, addition and subtraction agree with the NumPy operations.
Scalar-vector multiplication likewise matches the expression used in linear algebra. | x = np.arange(10)
x
a = 100
a * x | Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/3) NumPy 연산.ipynb | junhwanjang/DataSchool | mit |
NumPy multiplication differs from the definition of the matrix product, i.e. the inner product (dot product). For that case, the separate dot command or method must be used. | x = np.arange(10)
y = np.arange(10)
x * y
np.dot(x, y)
x.dot(y) | Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/3) NumPy 연산.ipynb | junhwanjang/DataSchool | mit |
Comparison operations are likewise element-wise. They therefore differ from comparison in linear algebra, where every element of the vectors or matrices must be equal. | a = np.array([1, 2, 3, 4])
b = np.array([4, 2, 2, 4])
a == b
a >= b | Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/3) NumPy 연산.ipynb | junhwanjang/DataSchool | mit |
To compare entire arrays, use the array_equal command. | a = np.array([1, 2, 3, 4])
b = np.array([4, 2, 2, 4])
c = np.array([1, 2, 3, 4])
np.array_equal(a, b)
np.array_equal(a, c) | Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/3) NumPy 연산.ipynb | junhwanjang/DataSchool | mit |
Mathematical functions provided by NumPy, such as the exponential and logarithm functions, support element-wise vectorized operation. | a = np.arange(5)
a
np.exp(a)
10**a
np.log(a)
np.log10(a) | Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/3) NumPy 연산.ipynb | junhwanjang/DataSchool | mit |
Without the functions provided by NumPy, vectorized operation is not possible. | import math
a = [1, 2, 3]
math.exp(a) | Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/3) NumPy 연산.ipynb | junhwanjang/DataSchool | mit |
Broadcasting
Matrix addition or subtraction in linear algebra requires the two matrices to have the same size. NumPy, however, also supports arithmetic between two ndarray arrays of different sizes. This feature, called broadcasting, automatically repeats and expands the smaller array to match the size of the larger one.
For example, consider adding a vector and a scalar as below. In linear algebra this operation is not possible.
$$
x = \begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix}, \;\;\;\;
x + 1 = \begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} + 1 = ?
$$
NumPy, however, uses broadcasting to expand the scalar to the same size as the vector and then performs the addition.
$$
\begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} \overset{\text{numpy}}+ 1 =
\begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} + \begin{bmatrix}1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} =
\begin{bmatrix}1 \\ 2 \\ 3 \\ 4 \\ 5 \end{bmatrix}
$$ | x = np.arange(5)
y = np.ones_like(x)
x + y
x + 1 | Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/3) NumPy 연산.ipynb | junhwanjang/DataSchool | mit |
Broadcasting also applies to higher-dimensional cases. See the figure below.
<img src="https://datascienceschool.net/upfiles/dbd3775c3b914d4e8c6bbbb342246b6a.png" style="width: 60%; margin: 0 auto 0 auto;"> | a = np.tile(np.arange(0, 40, 10), (3, 1)).T
a
b = np.array([0, 1, 2])
b
a + b
a = np.arange(0, 40, 10)[:, np.newaxis]
a
a + b | Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/3) NumPy 연산.ipynb | junhwanjang/DataSchool | mit |
Dimension Reduction Operations
If we treat the elements in one row of an ndarray as a data set and compute their mean, we get one number per row. For example, computing row means of a 10x5 two-dimensional array yields a one-dimensional vector of 10 numbers. Operations like this are called dimension reduction operations.
ndarray supports the following dimension reduction commands and methods:
Maximum/minimum: min, max, argmin, argmax
Statistics: sum, mean, median, std, var
Boolean: all, any | x = np.array([1, 2, 3, 4])
x
np.sum(x)
x.sum()
x = np.array([1, 3, 2])
x.min()
x.max()
x.argmin() # index of minimum
x.argmax() # index of maximum
x = np.array([1, 2, 3, 1])
x.mean()
np.median(x)
np.all([True, True, False])
np.any([True, True, False])
a = np.zeros((100, 100), dtype=int)
a
np.any(a != 0)
np.all(a == a)
a = np.array([1, 2, 3, 2])
b = np.array([2, 2, 3, 2])
c = np.array([6, 4, 4, 5])
((a <= b) & (b <= c)).all() | Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/3) NumPy 연산.ipynb | junhwanjang/DataSchool | mit |
When the operand has two or more dimensions, the axis argument specifies along which dimension the computation is performed: axis=0 for a row operation, axis=1 for a column operation, and so on. The default value is 0.
<img src="https://datascienceschool.net/upfiles/edfaf93a7f124f359343d1dcfe7f29fc.png", style="margin: 0 auto 0 auto;"> | x = np.array([[1, 1], [2, 2]])
x
x.sum()
x.sum(axis=0) # columns (first dimension)
x.sum(axis=1) # rows (second dimension)
y = np.array([[1, 2, 3], [5, 6, 1]])
np.median(y, axis=-1) # last axis | Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/3) NumPy 연산.ipynb | junhwanjang/DataSchool | mit |
Sorting
Using the sort command or method, you can sort the elements of an array by magnitude and obtain a new array. For two or more dimensions, the axis argument again determines the direction. | a = np.array([[4, 3, 5], [1, 2, 1]])
a
np.sort(a)
np.sort(a, axis=1) | Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/3) NumPy 연산.ipynb | junhwanjang/DataSchool | mit |
The sort method is an in-place method that changes the object's own data, so use it with care. | a.sort(axis=1)
a | Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/3) NumPy 연산.ipynb | junhwanjang/DataSchool | mit |
If you want only the ordering rather than the sorted data, use the argsort command. | a = np.array([4, 3, 1, 2])
j = np.argsort(a)
j
a[j] | Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/3) NumPy 연산.ipynb | junhwanjang/DataSchool | mit |
Q.2. Write an algorithm to determine if a number n is "happy".
A happy number is a number defined by the following process: Starting with any positive integer, replace the number by the sum of the squares of its digits, and repeat the process until the number equals 1 (where it will stay), or it loops endlessly in a cycle which does not include 1. Those numbers for which this process ends in 1 are happy numbers.
Return True if n is a happy number, and False if not. | class Solution(object):
def ifHappy(self, n: int) -> bool:
"""
:type n: int
:rtype: bool
"""
l = 0
while (n != 1):
add = 0
for i in str(n):
add += int(i) ** 2
n = add
l += 1
if l > 100:
return False
return True
print(Solution().ifHappy(19)) | Python/leetcode/easy/strings-arrays.ipynb | codehacken/CodingProwess | mit |
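The `l > 100` iteration cap above is a heuristic bound; cycles can instead be detected exactly by remembering values already seen. An alternative sketch:

```python
def is_happy(n):
    # Track seen values; revisiting one means we are in a cycle without 1
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(d) ** 2 for d in str(n))
    return n == 1

print(is_happy(19))  # → True
print(is_happy(2))   # → False
```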
Q.3. Given an integer array nums, find the contiguous subarray (containing at least one number) which has the largest sum and return its sum. | class Solution(object):
def maxSubArray(self, nums: List[int]) -> int:
"""
:type nums: List[int]
:rtype int
"""
# Special case is when all values in num are negative.
if max(nums) < 0:
return max(nums)
max_sum = 0; curr = 0
for i in range(len(nums)):
if curr + nums[i] > 0:
curr = curr + nums[i]
else:
curr = 0 # Reset the sum.
if curr > max_sum:
max_sum = curr
return max_sum
print(Solution().maxSubArray([-2,1,-3,4,-1,2,1,-5,4])) | Python/leetcode/easy/strings-arrays.ipynb | codehacken/CodingProwess | mit |
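The all-negative special case can be folded into the standard Kadane recurrence, which tracks the best subarray sum ending at each index. An equivalent alternative (not the notebook's code):

```python
def max_sub_array(nums):
    # Kadane's algorithm: the best sum ending at i is either nums[i] alone
    # or nums[i] extended by the best sum ending at i-1
    best = curr = nums[0]
    for x in nums[1:]:
        curr = max(x, curr + x)
        best = max(best, curr)
    return best

print(max_sub_array([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # → 6
print(max_sub_array([-3, -1, -2]))                     # → -1
```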
Q.4. Given an array nums, write a function to move all 0's to the end of it while maintaining the relative order of the non-zero elements. | class Solution(object):
def moveZeroes(self, nums: List[int]) -> None:
"""
:type nums: List[int]
:rtype None
Perform inplace ordering.
Method: Apply a form of insert sort that moves each non-negative value
to its right place in the list.
"""
for i in range(len(nums)):
if nums[i] != 0:
j = i
while j > 0 and nums[j - 1] == 0:
nums[j], nums[j-1] = nums[j-1], nums[j]
j -= 1
nums = [0,1,0,3,12]
Solution().moveZeroes(nums)
print(nums) | Python/leetcode/easy/strings-arrays.ipynb | codehacken/CodingProwess | mit |
Q.5. Say you have an array prices for which the ith element is the price of a given stock on day i. Design an algorithm to find the maximum profit. You may complete as many transactions as you like (i.e., buy one and sell one share of the stock multiple times).
Note: You may not engage in multiple transactions at the same time (i.e., you must sell the stock before you buy again). | class Solution(object):
def maxProfit(self, prices: List[int]) -> int:
"""
:type prices: List[int]
:rtype: int
Maximum Profit is the cumulation of all positive differences.
"""
profit = 0
for i in range(1, len(prices)):
diff = prices[i] - prices[i-1]
if diff > 0:
profit += diff
return profit
print(Solution().maxProfit([7,6,4,3,1])) | Python/leetcode/easy/strings-arrays.ipynb | codehacken/CodingProwess | mit |
Q.6. Given an array of strings, group anagrams together. | class Solution(object):
def groupAnagrams(self, strs: List[str]) -> List[List[str]]:
"""
:type strs: List[str]
:rtype: List[List[str]]
Method: Build a dictionary of words creating a bag of characters representation.
Generate a has for that representation and add words with a similar hash.
"""
words = {}
# Build a dictionary of words.
for word in strs:
boc_vec = [0 for i in range(26)]
for char in word:
boc_vec[ord(char) - 97] += 1
# Check if the representation if present in the dict.
hval = hash(tuple(boc_vec))
if hval not in words:
words[hval] = [word]
else:
words[hval].append(word)
# Once, the dictionary is built, generate list.
fin = []
for key in words.keys():
fin.append(words[key])
return fin
print(Solution().groupAnagrams(["eat", "tea", "tan", "ate", "nat", "bat"])) | Python/leetcode/easy/strings-arrays.ipynb | codehacken/CodingProwess | mit |
Q.7. Given an integer array arr, count how many elements x there are, such that x + 1 is also in arr. If there're duplicates in arr, count them seperately. | class Solution(object):
def countElements(self, arr: List[int]) -> int:
"""
:type arr: List[int]
:rtype: int
        Method: Build a dictionary of all numbers in the list, then for every n
        verify whether n+1 exists in the dictionary.
"""
nums = {}
for n in arr:
if n not in nums:
nums[n] = 1
cnt = 0
for n in arr:
if n+1 in nums:
cnt += 1
return cnt
print(Solution().countElements([1,3,2,3,5,0]))
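The dictionary above is used purely for membership tests, so a set expresses the same idea more directly (a sketch, equivalent in behavior):

```python
def count_elements(arr):
    # O(1) membership checks; duplicates in arr are naturally counted separately.
    present = set(arr)
    return sum(1 for n in arr if n + 1 in present)

print(count_elements([1, 3, 2, 3, 5, 0]))  # 3
```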
def longest_path_through_root(root):
    # Longest downward path from each child, summed through the root.
    if root.left is None and root.right is None:
        return 0
    def get_longest_path(node):
        if node.left is None and node.right is None:
            return 0
        elif node.left is None:
            return 1 + get_longest_path(node.right)
        elif node.right is None:
            return 1 + get_longest_path(node.left)
        else:
            return max(1 + get_longest_path(node.left),
                       1 + get_longest_path(node.right))
    left = get_longest_path(root.left) if root.left else 0
    right = get_longest_path(root.right) if root.right else 0
    return left + right
Input pipeline
The following strategy is applied here:
for training, we use a queue managed by TensorFlow
for testing, we pull data from the queue as numpy arrays and feed them to TensorFlow using feed_dict
net | def read_data(filename_queue):
reader = tf.TFRecordReader()
_, se = reader.read(filename_queue)
f = tf.parse_single_example(se,features={'image/encoded':tf.FixedLenFeature([],tf.string),
'image/class/label':tf.FixedLenFeature([],tf.int64),
'image/height':tf.FixedLenFeature([],tf.int64),
'image/width':tf.FixedLenFeature([],tf.int64)})
image = tf.image.decode_png(f['image/encoded'],channels=3)
image.set_shape( (32,32,3) )
return image,f['image/class/label']
tf.reset_default_graph()
fq = tf.train.string_input_producer([dane_train])
image_data, label = read_data(filename_queue=fq)
batch_size = 128
images, sparse_labels = tf.train.shuffle_batch( [image_data,label],batch_size=batch_size,
num_threads=2,
capacity=1000+3*batch_size,
min_after_dequeue=1000
)
images = (tf.cast(images,tf.float32)-128.0)/33.0 | ML_SS2017/zajecia_MJ_21.4.2017.ipynb | marcinofulus/teaching | gpl-3.0 |
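The last line above standardizes the uint8 pixel values, treating 128 as an approximate mean and 33 as an approximate standard deviation (the notebook's chosen constants). A numpy sketch of the same transform:

```python
import numpy as np

raw = np.array([0, 64, 128, 192, 255], dtype=np.uint8)
normalized = (raw.astype(np.float32) - 128.0) / 33.0
# 128 maps to 0; the extremes 0 and 255 map to roughly -3.9 and +3.8.
print(normalized)
```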
test queue | fq_test = tf.train.string_input_producer([dane])
test_image_data, test_label = read_data(filename_queue=fq_test)
batch_size = 128
test_images, test_sparse_labels = tf.train.batch( [test_image_data,test_label],batch_size=batch_size,
num_threads=2,
capacity=1000+3*batch_size,
)
test_images = (tf.cast(test_images,tf.float32)-128.0)/33.0
net = tf.contrib.layers.conv2d( images, 32, 3, padding='VALID')
net = tf.contrib.layers.max_pool2d( net, 2, 2, padding='VALID')
net = tf.contrib.layers.conv2d( net, 32, 3, padding='VALID')
net = tf.contrib.layers.max_pool2d( net, 2, 2, padding='VALID')
net = tf.contrib.layers.conv2d( net, 32, 3, padding='VALID')
net = tf.contrib.layers.max_pool2d( net, 2, 2, padding='VALID')
net = tf.contrib.layers.fully_connected(tf.reshape(net,[-1,2*2*32]), 32)
net = tf.contrib.layers.fully_connected(net, 10, activation_fn=None)
logits = net
xent = tf.losses.sparse_softmax_cross_entropy(sparse_labels,net)
loss = tf.reduce_mean( xent)
opt = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train_op = opt.minimize(loss)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.InteractiveSession(config=config)
tf.global_variables_initializer().run()
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess,coord=coord)
!ls cifar_convet.ckpt*
global_step = 0
if global_step>0:
saver=tf.train.Saver()
saver.restore(sess,'cifar_convet.ckpt-%d'%global_step)
%%time
lvals = []
for i in range(global_step,global_step+200000):
l, _ = sess.run([loss,train_op])
if i%10==0:
clear_output(wait=True)
print(l,i+1)
if i%100==0:
Images,Labels = sess.run([test_images,test_sparse_labels])
predicted = np.argmax(sess.run(logits,feed_dict={images: Images}),axis=1)
r_test = np.sum(predicted==Labels)/Labels.size
Images,Labels = sess.run([images,sparse_labels])
predicted = np.argmax(sess.run(logits,feed_dict={images: Images}),axis=1)
r = np.sum(predicted==Labels)/Labels.size
lvals.append([i,l,r,r_test])
global_step = i+1
global_step
lvals = np.array(lvals)
plt.plot(lvals[:,0],lvals[:,1])
plt.plot(lvals[:,0],lvals[:,3])
plt.plot(lvals[:,0],lvals[:,2])
saver.restore(sess,'cifar_convet.ckpt')
sess.run(test_sparse_labels).shape
label2txt = ["airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck" ]
| ML_SS2017/zajecia_MJ_21.4.2017.ipynb | marcinofulus/teaching | gpl-3.0 |
Testing
We can use feed_dict to run the graph of operations on the test data. | Images,Labels = sess.run([test_images,test_sparse_labels])
predicted = np.argmax(sess.run(logits,feed_dict={images: Images}),axis=1)
np.sum(predicted==Labels)/Labels.size
for ith in range(5):
print (label2txt[Labels[ith]],(label2txt[predicted[ith]]))
plt.imshow((Images[ith]*33+128).astype(np.uint8))
plt.show()
%%time
l_lst =[]
for i in range(1):
Images,Labels = sess.run([test_images,test_sparse_labels])
predicted = np.argmax(sess.run(logits,feed_dict={images: Images}),axis=1)
rlst = np.sum(predicted==Labels)/Labels.size
print(rlst)
saver = tf.train.Saver()
saver.save(sess,'cifar_convet.ckpt',global_step=global_step) | ML_SS2017/zajecia_MJ_21.4.2017.ipynb | marcinofulus/teaching | gpl-3.0 |
Note: The PatientId column has values in exponential (scientific) notation.
Note: No-show is "No" if the patient visited and "Yes" if the patient did not visit.
Data Cleanup
From the data description and the questions to answer, I've determined that some of the dataset columns are not necessary for the analysis process and will therefore be removed. This will make the data analysis faster.
PatientId
ScheduledDay
Sms_received
AppointmentID
AppointmentDay
I'll take a 3-step approach to data cleanup
Identify and remove duplicate entries
Remove unnecessary columns
Fix missing and data format issues
Step 1 - Removing Duplicate entries
Concluded that no duplicate entries exist, based on the tests below | # Identify and remove duplicate entries
appointment_data_duplicates = appointment_data.duplicated()
print 'Number of duplicate entries is/are {}'.format(appointment_data_duplicates.sum())
# Lets make sure that this is working
duplication_test = appointment_data.duplicated('Age').head()
print 'Number of entries with duplicate age in top entries are {}'.format(duplication_test.sum())
appointment_data.head() | Preetinder Kalsi - Udacity Project 4 - Investigate a Dataset.ipynb | PreetinderKalsi/Investigate-A-Dataset | gpl-3.0 |
Step 2 - Remove unnecessary columns
Columns(PatientId, ScheduledDay, Sms_received, AppointmentID, AppointmentDay) removed | # Create new dataset without unwanted columns
clean_appointment_data = appointment_data.drop(['PatientId','ScheduledDay','SMS_received','AppointmentID','AppointmentDay'], axis=1)
clean_appointment_data.head() | Preetinder Kalsi - Udacity Project 4 - Investigate a Dataset.ipynb | PreetinderKalsi/Investigate-A-Dataset | gpl-3.0 |
Step 3 - Fix any missing or data format issues
Concluded that there is no missing data | # Calculate number of missing values
clean_appointment_data.isnull().sum()
# Taking a look at the datatypes
clean_appointment_data.info() | Preetinder Kalsi - Udacity Project 4 - Investigate a Dataset.ipynb | PreetinderKalsi/Investigate-A-Dataset | gpl-3.0 |
Data Exploration And Visualization | # Looking at some typical descriptive statistics
clean_appointment_data.describe()
# The Age minimum of -1.0 looks a bit odd, so take a closer look
clean_appointment_data[clean_appointment_data['Age'] == -1]
# Fixing the negative value and creating a new column named Fixed_Age.
clean_appointment_data['Fixed_Age'] = clean_appointment_data['Age'].abs()
# Checking whether the negative value is still there or is it removed and changed into a positive value.
clean_appointment_data[clean_appointment_data['Fixed_Age'] == -1] | Preetinder Kalsi - Udacity Project 4 - Investigate a Dataset.ipynb | PreetinderKalsi/Investigate-A-Dataset | gpl-3.0 |
The Fixed_Age column is created to replace the negative value in the Age column. The new Fixed_Age column supports correct calculations in the questions that follow, keeping the results clear. The negative value (-1) is changed into a positive value (1) using the .abs() function. | # Create AgeGroups for further Analysis
'''bins = [0, 25, 50, 75, 100, 120]
group_names = ['0-25', '25-50', '50-75', '75-100', '100-120']
clean_appointment_data['age-group'] = pd.cut(clean_appointment_data['Fixed_Age'], bins, labels=group_names)
clean_appointment_data.head()'''
clean_appointment_data['Age_rounded'] = np.round(clean_appointment_data['Fixed_Age'], -1)
categories_dict = {0: '0-5',
10: '5-15',
20: '15-25',
30 : '25-35',
40 : '35-45',
50 : '45-55',
60: '55-65',
70 : '65-75',
80 : '75-85',
90: '85-95',
100: '95-105',
120: '105-115'}
clean_appointment_data['age_group'] = clean_appointment_data['Age_rounded'].map(categories_dict)
clean_appointment_data['age_group'] | Preetinder Kalsi - Udacity Project 4 - Investigate a Dataset.ipynb | PreetinderKalsi/Investigate-A-Dataset | gpl-3.0 |
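One subtlety worth noting: np.round(x, -1) rounds to the nearest ten using round-half-to-even, so boundary ages such as 15 and 25 both round to 20 and therefore land in the '15-25' bucket. A small sketch:

```python
import numpy as np

ages = np.array([3, 14, 15, 24, 25, 37])
rounded = np.round(ages, -1)
# Nearest tens: 0, 10, 20, 20, 20, 40 -- note 25 rounds down to 20 (half-to-even).
print(rounded)
```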
Creating the Age_Group column in the dataset will help with Q1 - How many females and males of different age groups missed their appointments? | # Simplifying the analysis by Fixing Yes and No issue in the No-show
# The issue is that in the No-show No means that the person visited at the time of their appointment and Yes means that they did not visited.
# First I will change Yes to 0 and No to 1 so that there is no confusion
clean_appointment_data['people_showed_up'] = clean_appointment_data['No-show'].replace(['Yes', 'No'], [0, 1])
clean_appointment_data
# Taking a look at the age of people who showed up and those who missed the appointment
youngest_to_showup = clean_appointment_data[clean_appointment_data['people_showed_up'] == True]['Fixed_Age'].min()
youngest_to_miss = clean_appointment_data[clean_appointment_data['people_showed_up'] == False]['Fixed_Age'].min()
oldest_to_showup = clean_appointment_data[clean_appointment_data['people_showed_up'] == True]['Fixed_Age'].max()
oldest_to_miss = clean_appointment_data[clean_appointment_data['people_showed_up'] == False]['Fixed_Age'].max()
print 'Youngest to Show up: {} \nYoungest to Miss: {} \nOldest to Show Up: {} \nOldest to Miss: {}'.format(
youngest_to_showup, youngest_to_miss, oldest_to_showup, oldest_to_miss) | Preetinder Kalsi - Udacity Project 4 - Investigate a Dataset.ipynb | PreetinderKalsi/Investigate-A-Dataset | gpl-3.0 |
Question 1
How many Female and male of different Age Group in the Dataset missed the Appointments ? | # Returns the percentage of males or females in a given age group
# who visited the hospital on their appointment day
def people_visited(age_group, gender):
grouped_by_total = clean_appointment_data.groupby(['age_group', 'Gender']).size()[age_group,gender].astype('float')
grouped_by_visiting_gender = \
clean_appointment_data.groupby(['age_group', 'people_showed_up', 'Gender']).size()[age_group,1,gender].astype('float')
visited_gender_pct = (grouped_by_visiting_gender / grouped_by_total * 100).round(2)
return visited_gender_pct
# Get the actual numbers grouped by Age, No-show, Gender
groupedby_visitors = clean_appointment_data.groupby(['age_group','people_showed_up','Gender']).size()
# Print - Grouped by Age Group, Patients showing up on thier appointments and Gender
print groupedby_visitors
age_groups = ['0-5', '5-15', '15-25', '25-35', '35-45', '45-55',
              '55-65', '65-75', '75-85', '85-95', '95-105']
for group in age_groups:
    for gender, label in [('F', 'Female'), ('M', 'Male')]:
        print '{} - {} Appointment Attendance: {}%'.format(group, label, people_visited(group, gender))
# The 105-115 group contains only female patients, so no male percentage is computed
print '105-115 - Female Appointment Attendance: {}%'.format(people_visited('105-115', 'F'))
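The multi-key groupby().size() above returns a Series with a MultiIndex, which is why people_visited can pick out a single count with tuple-style indexing. A toy illustration (the data here is invented for the example):

```python
import pandas as pd

toy = pd.DataFrame({'group': ['A', 'A', 'B', 'B', 'B'],
                    'showed_up': [1, 0, 1, 1, 0]})
counts = toy.groupby(['group', 'showed_up']).size()
# The MultiIndexed Series lets us select one cell by its key tuple.
print(counts[('B', 1)])  # 2
```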
# Graph - Grouped by age group, attendance and gender
g = sns.factorplot(x="Gender", y="people_showed_up", col="age_group", data=clean_appointment_data,
saturation=4, kind="bar", ci=None, size=12, aspect=.35)
# Fix up the labels
(g.set_axis_labels('', 'People Visited')
.set_xticklabels(["Men", "Women"], fontsize = 30)
.set_titles("Age Group {col_name}")
.set(ylim=(0, 1))
.despine(left=True, bottom=True))
| Preetinder Kalsi - Udacity Project 4 - Investigate a Dataset.ipynb | PreetinderKalsi/Investigate-A-Dataset | gpl-3.0 |
The graph above shows, for each gender, the number of people who attended their appointments and the number who did not.
- According to the graph above, women are more conscious about their health, regardless of age group. | # Graph - Actual count of patients by attendance and gender, per age group
g = sns.factorplot('people_showed_up', col='Gender', hue='age_group', data=clean_appointment_data, kind='count', size=15, aspect=.6)
# Fix up the labels
(g.set_axis_labels('People Who Attended', 'No. of Appointment')
.set_xticklabels(["False", "True"], fontsize=20)
.set_titles('{col_name}')
)
titles = ['Men', 'Women']
for ax, title in zip(g.axes.flat, titles):
ax.set_title(title)
| Preetinder Kalsi - Udacity Project 4 - Investigate a Dataset.ipynb | PreetinderKalsi/Investigate-A-Dataset | gpl-3.0 |
The graph above shows the number of people who attended their appointments and the number who did not.
- False denotes the people who did not attend their appointments.
- True denotes the people who did attend their appointments.
The graphs are categorized by age group.
Based on the raw numbers, it would appear that the 65-75 age group is the most health-conscious, with the highest percentage of appointment attendance, followed by the 55-65 age group at only about 1% lower.
The age group with the lowest percentage of appointment attendance is 15-25.
Note: 105-115 is not treated as the lowest-percentage age group because the number of patients in that group is too small for a meaningful comparison.
Question 2
Did Age, regardless of Gender, determine the patients missing the Appointments ? | # Find the total number of people who showed up and those who missed their appointments
number_showed_up = clean_appointment_data[clean_appointment_data['people_showed_up'] == True]['people_showed_up'].count()
number_missed = clean_appointment_data[clean_appointment_data['people_showed_up'] == False]['people_showed_up'].count()
# Find the mean age of people who showed up and of those who missed their appointments
mean_age_showed_up = clean_appointment_data[clean_appointment_data['people_showed_up'] == True]['Age'].mean()
mean_age_missed = clean_appointment_data[clean_appointment_data['people_showed_up'] == False]['Age'].mean()
# Displaying a few Totals
print 'Total number of People Who Showed Up {} \n\
Total number of People who missed the appointment {} \n\
Mean age of people who Showed up {} \n\
Mean age of people who missed the appointment {} \n\
Oldest to show up {} \n\
Oldest to miss the appointment {}' \
.format(number_showed_up, number_missed, np.round(mean_age_showed_up),
np.round(mean_age_missed), oldest_to_showup, oldest_to_miss)
# Boxplot of patient age by gender and appointment attendance
g = sns.factorplot(x="people_showed_up", y="Fixed_Age", hue='Gender', data=clean_appointment_data, kind="box", size=7, aspect=.8)
# Fixing the labels
(g.set_axis_labels('Appointment Attendance', 'Age of Patients')
.set_xticklabels(["False", "True"])
) | Preetinder Kalsi - Udacity Project 4 - Investigate a Dataset.ipynb | PreetinderKalsi/Investigate-A-Dataset | gpl-3.0 |
Based on the boxplot and the calculated data above, it would appear that:
Regardless of gender, age was not a deciding factor in the appointment attendance rate of the patients
The number of females who attended their appointments, as well as the number who missed them, is higher than the corresponding numbers for males
Question 3
Did women and children prefer to attend their appointments ?
Assumption: With 'child' not classified in the data, I'll need to assume a cutoff point. Therefore, I'll be using today's standard of under 18 as those to be considered as a child vs adult. | # Create Category and Categorize people
clean_appointment_data.loc[
((clean_appointment_data['Gender'] == 'F') &
(clean_appointment_data['Age'] >= 18)),
'Category'] = 'Woman'
clean_appointment_data.loc[
((clean_appointment_data['Gender'] == 'M') &
(clean_appointment_data['Age'] >= 18)),
'Category'] = 'Man'
clean_appointment_data.loc[
(clean_appointment_data['Age'] < 18),
'Category'] = 'Child'
# Get the totals grouped by Men, Women and Children
print clean_appointment_data.groupby(['Category', 'people_showed_up']).size()
# Graph - Compare the number of men, women and children who showed up for their appointments
g = sns.factorplot('people_showed_up', col='Category', data=clean_appointment_data, kind='count', size=7, aspect=0.8)
# Fix up the labels
(g.set_axis_labels('Appointment Attendance', 'No. of Patients')
.set_xticklabels(['False', 'True'])
)
titles = ['Women', 'Men', 'Children']
for ax, title in zip(g.axes.flat, titles):
ax.set_title(title) | Preetinder Kalsi - Udacity Project 4 - Investigate a Dataset.ipynb | PreetinderKalsi/Investigate-A-Dataset | gpl-3.0 |
Based on the calculated data and the graphs, it would appear that:
- Women's appointment attendance is significantly higher than that of men and children
- The numbers of men and children who attended their appointments are almost the same; the difference between them is about 967
Question 4
Did having a scholarship help patients attend their appointments? | # Determine the number of Man, Woman and Children who had scholarship
man_with_scholarship = clean_appointment_data.loc[
(clean_appointment_data['Category'] == 'Man') &
(clean_appointment_data['Scholarship'] == 1)]
man_without_scholarship = clean_appointment_data.loc[
(clean_appointment_data['Category'] == 'Man') &
(clean_appointment_data['Scholarship'] == 0)]
woman_with_scholarship = clean_appointment_data.loc[
(clean_appointment_data['Category'] == 'Woman') &
(clean_appointment_data['Scholarship'] == 1)]
woman_without_scholarship = clean_appointment_data.loc[
(clean_appointment_data['Category'] == 'Woman') &
(clean_appointment_data['Scholarship'] == 0)]
children_with_scholarship = clean_appointment_data.loc[
(clean_appointment_data['Category'] == 'Child') &
(clean_appointment_data['Scholarship'] == 1)]
children_without_scholarship = clean_appointment_data.loc[
(clean_appointment_data['Category'] == 'Child') &
(clean_appointment_data['Scholarship'] == 0)]
# Graph - Count of patients with and without a scholarship in each category
g = sns.factorplot('Scholarship', col='Category', data=clean_appointment_data, kind='count', size=8, aspect=0.3)
# Fix up the labels; the x axis is the Scholarship flag (0 = without, 1 = with)
(g.set_axis_labels('Scholarship', 'No of Patients')
 .set_xticklabels(['Without Scholarship', 'With Scholarship'])
)
titles = ['Women', 'Men', 'Children']
for ax, title in zip(g.axes.flat, titles):
ax.set_title(title) | Preetinder Kalsi - Udacity Project 4 - Investigate a Dataset.ipynb | PreetinderKalsi/Investigate-A-Dataset | gpl-3.0 |
According to the bar graph above:
- Having a scholarship did not affect the number of people visiting the hospital on their appointment day.
- Women with scholarships attended appointments the most, followed by children; men visited the hospital on their appointment day the least.
The conclusion is that the scholarship did not encourage appointment attendance, regardless of age or gender. | # Determine the Total Number of Men, Women and Children with Scholarship
# Totals: man_with_scholarship etc. (defined above) already filter Category and Scholarship == 1
total_man_with_scholarship = man_with_scholarship.Scholarship.count()
total_woman_with_scholarship = woman_with_scholarship.Scholarship.count()
total_children_with_scholarship = children_with_scholarship.Scholarship.count()
# Determine the number of Men, Women and Children with scholarship who Attended the Appointments
man_with_scholarship_attendence = man_with_scholarship[man_with_scholarship['people_showed_up'] == 1].Scholarship.count()
woman_with_scholarship_attendence = woman_with_scholarship[woman_with_scholarship['people_showed_up'] == 1].Scholarship.count()
children_with_scholarship_attendence = children_with_scholarship[children_with_scholarship['people_showed_up'] == 1].Scholarship.count()
# Determine the Percentage of Men, Women and Children with Scholarship who Attended or Missed the Appointments
pct_man_with_scholarship_attendence = ((float(man_with_scholarship_attendence)/total_man_with_scholarship)*100)
pct_man_with_scholarship_attendence = np.round(pct_man_with_scholarship_attendence,2)
pct_woman_with_scholarship_attendence = ((float(woman_with_scholarship_attendence)/total_woman_with_scholarship)*100)
pct_woman_with_scholarship_attendence = np.round(pct_woman_with_scholarship_attendence,2)
pct_children_with_scholarship_attendence = ((float(children_with_scholarship_attendence)/total_children_with_scholarship)*100)
pct_children_with_scholarship_attendence = np.round(pct_children_with_scholarship_attendence,2)
# Determine the Average Age of Men, Women and Children with Scholarship who Attended the Appointments
man_with_scholarship_avg_age = np.round(man_with_scholarship[man_with_scholarship['people_showed_up'] == 1].Age.mean())
woman_with_scholarship_avg_age = np.round(woman_with_scholarship[woman_with_scholarship['people_showed_up'] == 1].Age.mean())
children_with_scholarship_avg_age = np.round(children_with_scholarship[children_with_scholarship['people_showed_up'] == 1].Age.mean())
# Display Results
print '1. Total number of Men with Scholarship: {}\n\
2. Total number of Women with Scholarship: {}\n\
3. Total number of Children with Scholarship: {}\n\
4. Men with Scholarship who attended the Appointment: {}\n\
5. Women with Scholarship who attended the Appointment: {}\n\
6. Children with Scholarship who attended the Appointment: {}\n\
7. Men with Scholarship who missed the Appointment: {}\n\
8. Women with Scholarship who missed the Appointment: {}\n\
9. Children with Scholarship who missed the Appointment: {}\n\
10. Percentage of Men with Scholarship who attended the Appointment: {}%\n\
11. Percentage of Women with Scholarship who attended the Appointment: {}%\n\
12. Percentage of Children with Scholarship who attended the Appointment: {}%\n\
13. Average Age of Men with Scholarship who attended the Appointment: {}\n\
14. Average Age of Women with Scholarship who attended the Appointment: {}\n\
15. Average Age of Children with Scholarship who attended the Appointment: {}'\
.format(total_man_with_scholarship, total_woman_with_scholarship, total_children_with_scholarship,
man_with_scholarship_attendence, woman_with_scholarship_attendence, children_with_scholarship_attendence,
total_man_with_scholarship-man_with_scholarship_attendence, total_woman_with_scholarship-woman_with_scholarship_attendence,
total_children_with_scholarship-children_with_scholarship_attendence,
pct_man_with_scholarship_attendence, pct_woman_with_scholarship_attendence, pct_children_with_scholarship_attendence,
man_with_scholarship_avg_age, woman_with_scholarship_avg_age, children_with_scholarship_avg_age) | Preetinder Kalsi - Udacity Project 4 - Investigate a Dataset.ipynb | PreetinderKalsi/Investigate-A-Dataset | gpl-3.0 |
1) How does gradient checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient):
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."
We know the following:
$\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly.
You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct.
Let's use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!
2) 1-dimensional gradient checking
Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.
You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct.
<img src="images/1Dgrad_kiank.png" style="width:600px;height:250px;">
<caption><center> <u> Figure 1 </u>: 1D linear model<br> </center></caption>
The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation").
Exercise: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions. | # GRADED FUNCTION: forward_propagation
def forward_propagation(x, theta):
"""
Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
J -- the value of function J, computed using the formula J(theta) = theta * x
"""
### START CODE HERE ### (approx. 1 line)
J = np.dot(theta, x)
### END CODE HERE ###
return J
x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J)) | Improving Deep Neural networks- Hyperparameter Tuning - Regularization and Optimization/Gradient Checking.ipynb | anukarsh1/deep-learning-coursera | mit |
Now, run backward propagation. | def backward_propagation_n(X, Y, cache):
"""
Implement the backward propagation presented in figure 2.
Arguments:
X -- input datapoint, of shape (input size, 1)
Y -- true "label"
cache -- cache output from forward_propagation_n()
Returns:
gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1. / m * np.dot(dZ3, A2.T)
db3 = 1. / m * np.sum(dZ3, axis=1, keepdims=True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1. / m * np.dot(dZ2, A1.T) * 2 # Should not multiply by 2
db2 = 1. / m * np.sum(dZ2, axis=1, keepdims=True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1. / m * np.dot(dZ1, X.T)
db1 = 4. / m * np.sum(dZ1, axis=1, keepdims=True) # Should not multiply by 4
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
"dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients | Improving Deep Neural networks- Hyperparameter Tuning - Regularization and Optimization/Gradient Checking.ipynb | anukarsh1/deep-learning-coursera | mit |
You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
How does gradient checking work?
As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still:
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "dictionary_to_vector()" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.
The inverse function is "vector_to_dictionary" which outputs back the "parameters" dictionary.
<img src="images/dictionary_to_vector.png" style="width:600px;height:400px;">
<caption><center> <u> Figure 2 </u>: dictionary_to_vector() and vector_to_dictionary()<br> You will need these functions in gradient_check_n()</center></caption>
We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that.
Exercise: Implement gradient_check_n().
Instructions: Here is pseudo-code that will help you implement the gradient check.
For each i in num_parameters:
- To compute J_plus[i]:
1. Set $\theta^{+}$ to np.copy(parameters_values)
2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$
3. Calculate $J^{+}_i$ using forward_propagation_n(x, y, vector_to_dictionary($\theta^{+}$ )).
- To compute J_minus[i]: do the same thing with $\theta^{-}$
- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$
Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to parameter_values[i]. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute:
$$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$ | # GRADED FUNCTION: gradient_check_n
def gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7):
"""
Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n
Arguments:
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters.
x -- input datapoint, of shape (input size, 1)
y -- true "label"
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
"""
# Set-up variables
parameters_values, _ = dictionary_to_vector(parameters)
grad = gradients_to_vector(gradients)
num_parameters = parameters_values.shape[0]
J_plus = np.zeros((num_parameters, 1))
J_minus = np.zeros((num_parameters, 1))
gradapprox = np.zeros((num_parameters, 1))
# Compute gradapprox
for i in range(num_parameters):
# Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
# "_" is used because the function you have to outputs two parameters but we only care about the first one
### START CODE HERE ### (approx. 3 lines)
thetaplus = np.copy(parameters_values) # Step 1
thetaplus[i][0] = thetaplus[i][0] + epsilon # Step 2
J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus)) # Step 3
### END CODE HERE ###
# Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
### START CODE HERE ### (approx. 3 lines)
thetaminus = np.copy(parameters_values) # Step 1
thetaminus[i][0] = thetaminus[i][0] - epsilon # Step 2
J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus)) # Step 3
### END CODE HERE ###
# Compute gradapprox[i]
### START CODE HERE ### (approx. 1 line)
gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)
### END CODE HERE ###
# Compare gradapprox to backward propagation gradients by computing difference.
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference > 1e-7:
print("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
else:
print("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
return difference
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y) | Improving Deep Neural networks- Hyperparameter Tuning - Regularization and Optimization/Gradient Checking.ipynb | anukarsh1/deep-learning-coursera | mit |
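As a standalone illustration of the two-sided difference formula above, the following sketch checks an analytic derivative numerically. The function names (`f`, `df`, `central_difference`) and the toy function are assumptions for illustration, not part of the assignment's network code:

```python
# Minimal sketch of a two-sided (central) difference gradient check.
# f and its analytic derivative df are toy stand-ins, not the course's network.
def f(x):
    return x ** 2

def df(x):
    return 2 * x

def central_difference(func, x, eps=1e-7):
    # gradapprox = (J(x + eps) - J(x - eps)) / (2 * eps)
    return (func(x + eps) - func(x - eps)) / (2 * eps)

x = 3.0
approx = central_difference(f, x)
analytic = df(x)
# Relative difference, as in formula (3):
# ||grad - gradapprox|| / (||grad|| + ||gradapprox||)
difference = abs(analytic - approx) / (abs(analytic) + abs(approx))
print(difference < 1e-7)  # a correct derivative gives a very small difference
```

A buggy derivative (say, `df` returning `3 * x`) would make `difference` large, which is exactly the signal `gradient_check_n` uses.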
From the Titanic data sample, we can see the features recorded for each passenger on board:
Survived: whether the passenger survived (0 = no, 1 = yes)
Pclass: socio-economic class (1 = upper class, 2 = middle class, 3 = lower class)
Name: the passenger's name
Sex: the passenger's sex
Age: the passenger's age (may be NaN)
SibSp: the number of siblings and spouses the passenger had aboard
Parch: the number of parents and children the passenger had aboard
Ticket: the passenger's ticket number
Fare: the fare the passenger paid for the ticket
Cabin: the passenger's cabin number (may be NaN)
Embarked: the port where the passenger boarded (C = Cherbourg, Q = Queenstown, S = Southampton)
Since we are interested in whether each passenger or crew member survived the disaster, we can remove the Survived feature from this dataset and store it in a separate variable, outcomes. It will serve as the target we want to predict.
Run this code to remove the Survived feature from the dataset and store it in the variable outcomes. | # Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head()) | Project0-titanic_survival_exploration/titanic_survival_exploration.ipynb | littlewizardLI/Udacity-ML-nanodegrees | apache-2.0 |
This example shows how to remove the Titanic Survived data from the DataFrame. Note that data (the passenger data) and outcomes (the survival results) are now matched up. This means that for any passenger, data.loc[i] has a corresponding survival outcome outcomes[i].
To verify our predictions, we need a metric to score them. Since we are most interested in the accuracy of our predictions, that is, the proportion of passengers whose survival we predict correctly, run the code below to create our accuracy_score function and test it on the predictions for the first five passengers.
Question to think about: starting from the sixth passenger, if we predict that all of them survived, what do you think our prediction accuracy would be? | def accuracy_score(truth, pred):
""" Returns accuracy score for input truth and predictions. """
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions) | Project0-titanic_survival_exploration/titanic_survival_exploration.ipynb | littlewizardLI/Udacity-ML-nanodegrees | apache-2.0 |
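To see what the accuracy metric computes, here is a plain-Python sketch of the same proportion-correct calculation. The `truth` and `pred` lists are toy data assumed for illustration, not the actual Titanic outcomes:

```python
# Toy version of the accuracy calculation: the fraction of matching entries.
truth = [0, 1, 1, 0, 1]   # hypothetical survival outcomes
pred = [1, 1, 1, 1, 1]    # predict that everyone survived
matches = sum(1 for t, p in zip(truth, pred) if t == p)
accuracy = matches / float(len(truth))
print("Predictions have an accuracy of {:.2f}%.".format(accuracy * 100))  # 60.00%
```

Pandas makes this shorter with `(truth == pred).mean()`, as in the accuracy_score function above, but the underlying count-and-divide is the same.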
Tip: if you save the iPython Notebook, the output of the code you ran will also be saved. However, once you reopen the project, your workspace will be reset. Make sure to run the code again from where you last left off each time, to regenerate the variables and functions.
Making Predictions
If we were asked to predict whether the passengers on the Titanic survived, but knew nothing about them, the best prediction would be that no one on board survived. This is because we can assume that most passengers perished when the ship sank. The predictions_0 function below predicts that every passenger on board perished. | def predictions_0(data):
""" Model with no features. Always predicts a passenger did not survive. """
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data) | Project0-titanic_survival_exploration/titanic_survival_exploration.ipynb | littlewizardLI/Udacity-ML-nanodegrees | apache-2.0 |
Question 1
Compared with the actual Titanic data, how accurate do you think a prediction that no passengers survived would be?
Hint: run the code below to see the accuracy of this prediction. | print accuracy_score(outcomes, predictions)
Answer: Predictions have an accuracy of 61.62%.
We can use the survival_stats function to see how much the Sex feature affects passengers' survival rate. This function is defined in the Python script file titanic_visualizations.py, which is provided with this project. The first two arguments passed to the function are the Titanic passenger data and the passengers' survival outcomes. The third argument indicates which feature we will base the plot on.
Run the code below to draw a bar chart of survival rate by passenger sex. | survival_stats(data, outcomes, 'Sex')
Looking at the survival statistics of the Titanic passengers, we can see that most male passengers perished when the ship sank. In contrast, most female passengers survived the disaster. Let's build on our earlier inference: if a passenger is male, we predict they perished; if a passenger is female, we predict they survived.
Complete the code below so that the function makes the correct predictions.
Hint: you can access the value of each of a passenger's features the same way you would access a dictionary. For example, passenger['Sex'] returns the passenger's sex. | def predictions_1(data):
""" Model with one feature:
- Predict a passenger survived if they are female. """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'female':
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data) | Project0-titanic_survival_exploration/titanic_survival_exploration.ipynb | littlewizardLI/Udacity-ML-nanodegrees | apache-2.0 |
Question 2
If we predict that all the female passengers on board survived and everyone else perished, what accuracy would our predictions reach?
Hint: run the code below to see the accuracy of our predictions. | print accuracy_score(outcomes, predictions)
Answer: Predictions have an accuracy of 78.68%.
Using only the passengers' Sex feature, our prediction accuracy improved markedly. Now let's see whether additional features can improve it further. For example, consider all the male passengers on the Titanic: can we find a subset of these passengers with a higher probability of survival? Let's use the survival_stats function again to look at each male passenger's Age. This time, we will use the fourth argument to restrict the bar chart to male passengers only.
Run the code below to plot the survival outcomes of male passengers by age. | survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
Examining the Titanic survival statistics closely, most boys under 10 were alive when the ship sank, while most males over 10 perished with it. Let's continue building on our previous predictions: if a passenger is female, we predict she survived; if a passenger is male and under 10, we also predict he survived; everyone else we predict did not survive.
Fill in the missing code below so that our function can make these predictions.
Hint: you can start from the earlier predictions_1 code and modify it to implement the new prediction function. | def predictions_2(data):
""" Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10. """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'female':
predictions.append(1)
elif passenger['Age'] < 10:
predictions.append(1)
else :
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data) | Project0-titanic_survival_exploration/titanic_survival_exploration.ipynb | littlewizardLI/Udacity-ML-nanodegrees | apache-2.0 |
Question 3
When we predict that all females, as well as males under 10, survived, what accuracy do the predictions reach?
Hint: run the code below to see the accuracy of the predictions. | print accuracy_score(outcomes, predictions)
Answer: Predictions have an accuracy of 79.35%.
Combining the Age feature with Sex improved the accuracy considerably over using Sex alone. Now it's your turn to make predictions: find a set of features and conditions to split the data so that the prediction accuracy rises above 80%. This may take multiple features and multiple levels of conditional statements to achieve. You can use the same feature more than once under different conditions. Pclass, Sex, Age, SibSp, and Parch are suggested features to try.
Use the survival_stats function to examine the survival statistics of the Titanic passengers.
Hint: to use multiple filter conditions, put each condition in a list and pass it as the last argument. For example: ["Sex == 'male'", "Age < 18"] | survival_stats(data, outcomes, 'Pclass')
After viewing and studying the plotted Titanic survival statistics, complete the missing parts of the code below so that the function returns your predictions.
Make sure to record the various features and conditions you tried before arriving at your final prediction model.
Hint: you can start from the earlier predictions_2 code and modify it to implement the new prediction function. | def predictions_3(data):
""" Model with multiple features. Makes a prediction with an accuracy of at least 80%. """
predictions = []
for _, passenger in data.iterrows():
if passenger['Sex'] == 'female':
if passenger['Pclass'] == 3 and passenger['Age'] > 40:
predictions.append(0)
else:
predictions.append(1)
elif passenger['Age'] < 10:
predictions.append(1)
elif passenger['Pclass'] < 2 and passenger['Age'] < 18:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data) | Project0-titanic_survival_exploration/titanic_survival_exploration.ipynb | littlewizardLI/Udacity-ML-nanodegrees | apache-2.0 |
Conclusion
Describe the steps you went through to reach a prediction model with 80% accuracy. Which features did you examine? Were some features more helpful than others? What conditions did you use to predict survival outcomes? What is the accuracy of your final predictions?
Hint: run the code below to see your prediction accuracy. | print accuracy_score(outcomes, predictions)
plot_yield_input
To plot the yield data | s1.plot_yield_input() #[1,3,5,12][Fe/H]
s1.plot_yield_input(fig=2,xaxis='mini',yaxis='[Fe/H]',iniZ=0.0001,masses=[1,3,12,25],marker='s',color='r',shape='-')
s1.plot_yield_input(fig=3,xaxis='[C/H]',yaxis='[Fe/H]',iniZ=0.0001,masses=[1,3,12,25],marker='x',color='b',shape='--') | regression_tests/temp/RTS_plot_functions.ipynb | NuGrid/NuPyCEE | bsd-3-clause |
The following commands plot the ISM metallicity in spectroscopic notation.
s1.plot_mass | s1.plot_mass()
s1.plot_mass(specie='N',shape='--',marker='x')
#s1.plot_mass_multi()
#s1.plot_mass_multi(fig=1,specie=['C','N'],ylims=[],source='all',norm=False,label=[],shape=['-','--'],marker=['o','D'],color=['r','b'],markevery=20)
#plt.legend() | regression_tests/temp/RTS_plot_functions.ipynb | NuGrid/NuPyCEE | bsd-3-clause |
s1.plot_massfrac | s1.plot_massfrac()
s1.plot_massfrac(yaxis='He-4',shape='--',marker='x') | regression_tests/temp/RTS_plot_functions.ipynb | NuGrid/NuPyCEE | bsd-3-clause |
s1.plot_spectro | s1.plot_spectro()
s1.plot_spectro(yaxis='[O/Fe]',marker='x',shape='--') | regression_tests/temp/RTS_plot_functions.ipynb | NuGrid/NuPyCEE | bsd-3-clause |
s1.plot_totmasses | s1.plot_totmasses()
s1.plot_totmasses(source='agb',shape='--',marker='x')
s1.plot_totmasses(mass='stars',shape=':',marker='^') | regression_tests/temp/RTS_plot_functions.ipynb | NuGrid/NuPyCEE | bsd-3-clause |
Test of SNIa and SNII rate plots | import sygma as s
reload(s)
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',imf_bdys=[1,30],special_timesteps=-1,hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
#s1.plot_sn_distr(rate=True,label1='SN1a, rate',label2='SNII, rate',marker1='o',marker2='s')
s1.plot_sn_distr(fig=4,rate=False,label1='SN1a, number',label2='SNII number',marker1='d',marker2='p')
##plt.xlim(1e6,1e10)
#plt.ylabel('Number/Rate')
s1.plot_sn_distr()
s1.plot_sn_distr(fig=5,rate=True,rate_only='',xaxis='time',label1='SN1a',label2='SN2',shape1=':',shape2='--',marker1='o',marker2='s',color1='k',color2='b',markevery=20) | regression_tests/temp/RTS_plot_functions.ipynb | NuGrid/NuPyCEE | bsd-3-clause |
One point at the beginning for only 1 starburst | #s1=s.sygma(iolevel=0,mgal=1e11,dt=1e6,tend=1.3e10,imf_type='salpeter',imf_bdys=[1,30],special_timesteps=-1,iniZ=-1,hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
#s1.plot_sn_distr(rate=True,label1='SN1a, rate',label2='SNII, rate',marker1='o',marker2='s')
#s1.plot_sn_distr(rate=False,label1='SN1a, number',label2='SNII number',marker1='d',marker2='p')
#plt.xlim(1e6,1e10)
#plt.ylabel('Number/Rate')
#s1=s.sygma(iniZ=0.0001,dt=1e9,tend=2e9)
#s2=s.sygma(iniZ=0.02)#,dt=1e7,tend=2e9)
reload(s)
s1=s.sygma(iolevel=0,iniZ=0.02,dt=1e8,tend=1e9) # standard not working
#s2=s.sygma(iniZ=0.02,dt=1e8,tend=1e10)
| regression_tests/temp/RTS_plot_functions.ipynb | NuGrid/NuPyCEE | bsd-3-clause |
plot_mass_range_contributions | s1.plot_mass_range_contributions()
s1.plot_mass_range_contributions(fig=7,specie='O',rebin=0.5,label='',shape='-',marker='o',color='b',markevery=20,extralabel=False,log=False)
#s1.plot_mass_range_contributions(fig=7,specie='O',prodfac=True,rebin=0.5,label='',shape='-',marker='o',color='r',markevery=20,extralabel=False,log=False)
| regression_tests/temp/RTS_plot_functions.ipynb | NuGrid/NuPyCEE | bsd-3-clause |
Tests with two starbursts | import sygma as s
reload(s)
ssp1=s.sygma(iolevel=0,dt=1e8,mgal=1e11,starbursts=[0.1,0.1],tend=1e9,special_timesteps=-1,imf_type='kroupa',imf_bdys=[0.1,100],sn1a_on=False,hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt', sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
ssp1.plot_star_formation_rate()
ssp1.plot_star_formation_rate(fig=6,marker='o',shape=':')
ssp1.plot_mass_range_contributions(fig=7,specie='H',prodfac=False,rebin=-1,time=-1,label='Total burst',shape='-',marker='o',color='r',markevery=20,extralabel=False,log=False)
ssp1.plot_mass_range_contributions(fig=7,specie='H',prodfac=False,rebin=-1,time=1e8,label='Burst at 1e8',shape='-',marker='o',color='b',markevery=20,extralabel=False,log=False) | regression_tests/temp/RTS_plot_functions.ipynb | NuGrid/NuPyCEE | bsd-3-clause |
write_evol_table | #s1.write_evol_table(elements=['H','He','C'])
s1.write_evol_table(elements=['H'],isotopes=['H-1'],table_name='gce_table.txt',interact=False) | regression_tests/temp/RTS_plot_functions.ipynb | NuGrid/NuPyCEE | bsd-3-clause |
This import statement reads the book samples, which include nine sentences and nine book-length texts. It has also helpfully put each of these texts into a variable for us, from sent1 to sent9 and text1 to text9. | print(sent1)
print(sent3)
print(sent5) | lessons/09 Natural language processing with NLTK.ipynb | DHBern/Tools-and-Techniques | gpl-3.0 |
Let's look at the texts now. | print(text6)
print(text6.name)
print("This text has %d words" % len(text6.tokens))
print("The first hundred words are:", " ".join( text6.tokens[:100] )) | lessons/09 Natural language processing with NLTK.ipynb | DHBern/Tools-and-Techniques | gpl-3.0 |
Each of these texts is an nltk.text.Text object, and has methods to let you see what the text contains. But you can also treat it as a plain old list! | print(text5[0])
print(text3[0:11])
print(text4[0:51]) | lessons/09 Natural language processing with NLTK.ipynb | DHBern/Tools-and-Techniques | gpl-3.0 |
We can do simple concordancing, printing the context for each use of a word throughout the text: | text6.concordance( "swallow" ) | lessons/09 Natural language processing with NLTK.ipynb | DHBern/Tools-and-Techniques | gpl-3.0 |
The default is to show no more than 25 results for any given word, but we can change that. | text6.concordance('Arthur', lines=37) | lessons/09 Natural language processing with NLTK.ipynb | DHBern/Tools-and-Techniques | gpl-3.0 |
We can adjust the amount of context we show in our concordance: | text6.concordance('Arthur', width=100) | lessons/09 Natural language processing with NLTK.ipynb | DHBern/Tools-and-Techniques | gpl-3.0 |
...or get the number of times any individual word appears in the text. | word_to_count = "KNIGHT"
print("The word %s appears %d times." % ( word_to_count, text6.count( word_to_count ) )) | lessons/09 Natural language processing with NLTK.ipynb | DHBern/Tools-and-Techniques | gpl-3.0 |
We can generate a vocabulary for the text, and use the vocabulary to find the most frequent words as well as the ones that appear only once (a.k.a. the hapaxes.) | t6_vocab = text6.vocab()
t6_words = list(t6_vocab.keys())
print("The text has %d different words" % ( len( t6_words ) ))
print("Some arbitrary 50 of these are:", t6_words[:50])
print("The most frequent 50 words are:", t6_vocab.most_common(50))
print("The word swallow appears %d times" % ( t6_vocab['swallow'] ))
print("The text has %d words that appear only once" % ( len( t6_vocab.hapaxes() ) ))
print("Some arbitrary 100 of these are:", t6_vocab.hapaxes()[:100]) | lessons/09 Natural language processing with NLTK.ipynb | DHBern/Tools-and-Techniques | gpl-3.0 |
You've now seen two methods for getting the number of times a word appears in a text: t6.count(word) and t6_vocab[word]. These are in fact identical, and the following bit of code is just to prove that. An assert statement is used to test whether something is true - if it ever isn't true, the code will throw up an error! This is a basic building block for writing tests for your code. | print("Here we assert something that is true.")
for w in t6_words:
assert text6.count( w ) == t6_vocab[w]
print("See, that worked! Now we will assert something that is false, and we will get an error.")
for w in t6_words:
assert w.lower() == w | lessons/09 Natural language processing with NLTK.ipynb | DHBern/Tools-and-Techniques | gpl-3.0 |
We can try and find interesting words in the text, such as words of a minimum length (the longer a word, the less common it probably is) that occur more than once or twice... | # With a list comprehension
long_words = [ w for w in t6_words if len( w ) > 5 and t6_vocab[w] > 3 ]
# The long way, with a for loop. This is identical to the above.
long_words = []
for w in t6_words:
if( len ( w ) > 5 and t6_vocab[w] > 3 ):
long_words.append( w )
print("The reasonably frequent long words in the text are:", long_words) | lessons/09 Natural language processing with NLTK.ipynb | DHBern/Tools-and-Techniques | gpl-3.0 |
And we can look for pairs of words that go together more often than chance would suggest. | print("\nUp to twenty collocations")
text6.collocations()
print("\nUp to fifty collocations")
text6.collocations(num=50)
print("\nCollocations that might have one word in between")
text6.collocations(window_size=3) | lessons/09 Natural language processing with NLTK.ipynb | DHBern/Tools-and-Techniques | gpl-3.0 |
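Conceptually, collocation finding starts from counting adjacent word pairs. A minimal bigram tally on a toy sentence (assumed for illustration; NLTK's collocations() additionally scores pairs against chance) looks like this:

```python
from collections import Counter

# Count adjacent word pairs (bigrams) in a toy token list.
tokens = "we are the knights who say ni ni ni we are the knights".split()
bigrams = Counter(zip(tokens, tokens[1:]))
# Pairs that recur, like ('we', 'are'), are candidate collocations;
# NLTK then tests whether they co-occur more often than chance predicts.
print(bigrams.most_common(2))
```

A window_size of 3, as in the last call above, would extend this to pairs with one word allowed in between.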
NLTK can also provide us with a few simple graph visualizations, when we have matplotlib installed. To make this work in iPython, we need the following magic line. If you are running in PyCharm, then you do not need this line - it will throw an error if you try to use it! | %pylab --no-import-all inline | lessons/09 Natural language processing with NLTK.ipynb | DHBern/Tools-and-Techniques | gpl-3.0 |
The vocabulary we get from the .vocab() method is something called a "frequency distribution", which means it's a giant tally of each unique word and the number of times that word appears in the text. We can also make a frequency distribution of other features, such as "each possible word length and the number of times a word that length is used". Let's do that and plot it. | word_length_dist = FreqDist( [ len(w) for w in t6_vocab.keys() ] )
word_length_dist.plot() | lessons/09 Natural language processing with NLTK.ipynb | DHBern/Tools-and-Techniques | gpl-3.0 |