The data that we will be using for this demo is the UCI Bank Marketing Dataset.
The first step is to create a BigQuery dataset and a table so that we can store this data. Make sure that you replace the your_dataset and your_table variables with whatever dataset and table names you want.
```python
import os

your_dataset = 'your_dataset'
your_table = 'your_table'
project_id = os.environ["GOOGLE_CLOUD_PROJECT"]

!bq mk -d {project_id}:{your_dataset}
!bq mk -t {your_dataset}.{your_table}
```
*Source: `examples/cloudml-bank-marketing/bank_marketing_classification_model.ipynb` (GoogleCloudPlatform/professional-services, apache-2.0)*
There is a public dataset available which has cleaned up some of the rows in the UCI Bank Marketing Dataset. We will download this file in the next cell and save it locally as data.csv.
```python
!curl https://storage.googleapis.com/erwinh-public-data/bankingdata/bank-full.csv --output data.csv
```
We will now upload the data.csv file into our BigQuery table.
```python
!bq load --autodetect --source_format=CSV --field_delimiter ';' --skip_leading_rows=1 --replace {your_dataset}.{your_table} data.csv
```
1) Fetching data
In this section we will get data from BigQuery and create a Pandas dataframe that we will use for data engineering, data visualization and modeling.
Data from BigQuery to Pandas
We are going to use the google.cloud.bigquery library to fetch data from BigQuery and load it into a Pandas dataframe.
```python
# Import pandas and the BigQuery client library
import pandas as pd
from google.cloud import bigquery as bq
```
We are doing two things in this cell:
* Executing a SQL query
* Converting the output from BigQuery into a Pandas dataframe using .to_dataframe()
```python
# Execute the query and convert the result into a dataframe
client = bq.Client(project=project_id)
df = client.query('''
    SELECT
      *
    FROM
      `%s.%s`
''' % (your_dataset, your_table)).to_dataframe()
df.head(3).T
```
We will now explore the data we got from BigQuery.
2) Data exploration
We will use Pandas profiling to perform data exploration. This gives us information including distributions for each feature, missing values, maximum and minimum values, and much more, all out of the box. Run the next cell first if you haven't installed pandas profiling. (Note: if you get an import error after installing pandas profiling, restart your kernel and re-run all the cells up to this section.)
```python
import pandas_profiling as pp

# Create a profile report using the dataframe that we just created
pp.ProfileReport(df)
```
Some interesting points from the pandas profiling:
* We have categorical and boolean columns which we need to convert to numeric values
* The target value is very skewed (only 5289 positive examples compared to a massive 39922 negative ones), so we need to ensure that our training and testing splits are representative of this skew
* There are no missing values
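The class balance can be checked directly with value_counts. A minimal sketch, using a toy dataframe standing in for the real `df` (the real counts come from the BigQuery table):

```python
import pandas as pd

# Toy stand-in for the dataframe loaded from BigQuery
df = pd.DataFrame({"y": ["no"] * 8 + ["yes"] * 2})

counts = df["y"].value_counts()
print(counts)                 # no: 8, yes: 2
print(counts / counts.sum())  # class proportions: no 0.8, yes 0.2
```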
3) Data partitioning (split data into training and testing)
As our dataset is highly skewed, we need to be careful with our sampling approach. Two things need to be considered:
* Shuffle the dataset to avoid any form of pre-ordering.
* Use stratified sampling, which ensures that the training and test sets do not significantly differ in the distribution of variables of interest. In our case we stratify on y so that both datasets have a similar class distribution.
```python
from sklearn.model_selection import StratifiedShuffleSplit

# Apply a shuffled, stratified split to create a train and a test set
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=40)
for train_index, test_index in split.split(df, df["y"]):
    strat_train_set = df.loc[train_index]
    strat_test_set = df.loc[test_index]

# Check the split sizes
print(strat_train_set.size)
print(strat_test_set.size)

# We can now check the data
strat_test_set.head(3).T
```
4) Data preparation (feature engineering)
Before we can create machine learning models, we need to format the data into a form the models can understand.
We need to do the following steps:
* For the numeric columns, normalize the values so that a column with very large values does not bias the computation.
* Turn categorical values into numeric values by replacing each unique value in a column with an integer code. For example, if a column named "Colour" has the three unique strings "red", "yellow" and "blue", scikit-learn's LabelEncoder assigns codes in alphabetical order: "blue" becomes 0, "red" becomes 1 and "yellow" becomes 2, so each instance of "yellow" in that column is replaced with 2. Note: one-hot encoding is an alternative method for converting categorical values to numbers.
* For True/False values we simply convert these to 1/0 respectively.
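A minimal sketch of the categorical step, using the made-up "Colour" values (not a column from this dataset):

```python
from sklearn.preprocessing import LabelEncoder

# LabelEncoder sorts the unique values alphabetically before assigning codes
le = LabelEncoder()
codes = le.fit_transform(["red", "yellow", "blue", "yellow"])
print(list(le.classes_))  # ['blue', 'red', 'yellow']
print(list(codes))        # [1, 2, 0, 2]
```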
```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
```
Now we are going to create a function that splits off the label we want to predict from the features that we will use to predict it. In addition, we convert the label to 1/0.
```python
def return_features_and_label(df):
    """Returns features and label for the given dataframe."""
    # Get all the columns except "y". It's also possible to exclude other columns.
    X = df.drop("y", axis=1)
    Y = df["y"].copy()
    # Convert our label to an integer
    Y = LabelEncoder().fit_transform(Y)
    return X, Y

train_features, train_label = return_features_and_label(strat_train_set)
```
Our training dataset, train_features, contains both categorical and numeric values. However, machine learning models can only use numeric values. The function below converts categorical variables to integers and then normalizes the numeric columns so that columns with very large values do not overpower columns whose values are smaller.
```python
def data_pipeline(df):
    """Normalizes numeric columns and encodes categorical columns."""
    num_cols = df.select_dtypes(include=np.number).columns
    cat_cols = list(set(df.columns) - set(num_cols))
    # Normalize numeric data
    df[num_cols] = StandardScaler().fit_transform(df[num_cols])
    # Convert categorical variables to integers
    df[cat_cols] = df[cat_cols].apply(LabelEncoder().fit_transform)
    return df

train_features_prepared = data_pipeline(train_features)
train_features_prepared.head(3).T
```
Some columns in our training dataset may not be good predictors. We should therefore perform feature selection to keep only the best predictors; this also reduces training time, since the resulting dataset is much smaller.
```python
from sklearn.feature_selection import SelectKBest, f_classif

predictors = train_features_prepared.columns
# Perform feature selection, where `k` (5 in this case) is the number of features we wish to select
selector = SelectKBest(f_classif, k=5)
selector.fit(train_features_prepared[predictors], train_label)
```
To visualize the selection, we can plot the score for each feature. Note that the duration feature has a p-value of 0, so its log score is infinite and cannot be drawn on the logarithmic scale.
```python
# Get the p-values from our selector and convert them to a logarithmic scale for easy visualization
importance_score = -np.log(selector.pvalues_)

# Plot each column with its importance score
plt.rcParams["figure.figsize"] = [14, 7]
plt.barh(range(len(predictors)), importance_score, color='C0')
plt.ylabel("Predictors")
plt.title("Importance Score")
plt.yticks(range(len(predictors)), predictors)
plt.show()
```
It's also possible to use a tree-based classifier to select the best features. This is often a good option when you have a highly imbalanced dataset.
```python
# Example of how to use a tree-based classifier to select the best features
from sklearn.ensemble import ExtraTreesClassifier

predictors_tree = train_features_prepared.columns
selector_clf = ExtraTreesClassifier(n_estimators=50, random_state=0)
selector_clf.fit(train_features_prepared[predictors_tree], train_label)

# Plot the feature importances (with the std. deviation across trees as error bars)
importances = selector_clf.feature_importances_
std = np.std([tree.feature_importances_ for tree in selector_clf.estimators_],
             axis=0)
plt.rcParams["figure.figsize"] = [14, 7]
plt.barh(range(len(predictors_tree)), importances, xerr=std, color='C0')
plt.ylabel("Predictors")
plt.title("Importance Score")
plt.yticks(range(len(predictors_tree)), predictors_tree)
plt.show()
```
From the plot it can sometimes be difficult to tell which are the top five features, so we display a simple table with the selected features and their scores. Here we use the SelectKBest selector fitted with f_classif above.
```python
# Show the top 5 features based on the log score that we calculated earlier
train_prepared_indexs = [count for count, selected in enumerate(selector.get_support()) if selected]
pd.DataFrame(
    {'Feature': predictors[train_prepared_indexs],
     'Original Score': selector.pvalues_[train_prepared_indexs],
     'Log Score': importance_score[train_prepared_indexs]
     }
)
```
Let us now create a training dataset that contains the top 5 features.
```python
# Create our new dataframe based on the selected features (from `selector`)
train_prepared_columns = [col for (selected, col) in zip(selector.get_support(), predictors) if selected]
train_prepared = train_features_prepared[train_prepared_columns]
```
5) Building and evaluation of the models
In this section we will build models using scikit-learn. We show how hyperparameter tuning and model evaluation can be used to select the best model for deployment.
```python
# Import the libraries we need
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.metrics import make_scorer
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import auc
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
import numpy as np
```
The following code defines the hyperparameter combinations. More precisely, we define different model types (e.g., Logistic Regression, Support Vector Machines (SVC)) and the corresponding lists of parameters that will be explored during the optimization process (e.g., different kernel types for the SVM).
```python
# This function creates the classifiers (models) that we want to test
def create_classifiers():
    """Create classifiers and specify their hyperparameter grids."""
    log_params = [{'penalty': ['l1', 'l2'], 'C': np.logspace(0, 4, 10)}]
    knn_params = [{'n_neighbors': [3, 4, 5]}]
    svc_params = [{'kernel': ['linear', 'rbf'], 'probability': [True]}]
    tree_params = [{'criterion': ['gini', 'entropy']}]
    forest_params = {'n_estimators': [1, 5, 10]}
    mlp_params = {'activation': ['identity', 'logistic', 'tanh', 'relu']}
    ada_params = {'n_estimators': [1, 5, 10]}
    classifiers = [
        ['LogisticRegression', LogisticRegression(random_state=42), log_params],
        ['KNeighborsClassifier', KNeighborsClassifier(), knn_params],
        ['SVC', SVC(random_state=42), svc_params],
        ['DecisionTreeClassifier', DecisionTreeClassifier(random_state=42), tree_params],
        ['RandomForestClassifier', RandomForestClassifier(random_state=42), forest_params],
        ['MLPClassifier', MLPClassifier(random_state=42), mlp_params],
        ['AdaBoostClassifier', AdaBoostClassifier(random_state=42), ada_params],
    ]
    return classifiers
```
After defining our hyperparameters, we use scikit-learn's grid search to iterate through the different combinations of hyperparameters and return the best parameters for each model type. Furthermore, we use cross-validation, partitioning the data into smaller subsets (see k-fold cross-validation).
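As a small illustration of the partitioning that k-fold cross-validation performs, here is a sketch on toy indices (separate from the notebook's grid search):

```python
from sklearn.model_selection import KFold
import numpy as np

# Six toy samples split into 3 folds; each fold serves as the validation set exactly once
X = np.arange(6)
folds = [(list(tr), list(va)) for tr, va in KFold(n_splits=3).split(X)]
for tr, va in folds:
    print("train:", tr, "validate:", va)
# Without shuffling, the validation folds are [0, 1], [2, 3] and [4, 5]
```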
```python
# This grid search iterates through the different combinations and returns the best
# parameters for each model type. Running this cell might take a while.
def grid_search(model, parameters, name, training_features, training_labels):
    """Grid search that returns the best parameters for each model type."""
    clf = GridSearchCV(model, parameters, cv=3, refit=True,
                       scoring='f1', verbose=0, n_jobs=4)
    clf.fit(training_features, training_labels)
    best_estimator = clf.best_estimator_
    return [name, str(clf.best_params_), clf.best_score_, best_estimator]
```
Finally, we define a function that returns the best configuration for each model using cross-validation (the best configuration is selected based on its F1 score).
```python
# Get the best configuration for each model
def best_configuration(classifiers, training_features, training_labels):
    """Returns the best configuration for each model."""
    clfs_best_config = []
    for (name, model, parameters) in classifiers:
        clfs_best_config.append(grid_search(model, parameters, name,
                                            training_features, training_labels))
    return clfs_best_config

# Call the grid search and best_configuration functions
# (note: we only use 100 rows to decrease the run time)
import warnings
warnings.filterwarnings('ignore')

classifiers = create_classifiers()
clfs_best_config = best_configuration(classifiers, train_prepared[:100], train_label[:100])
```
Evaluation of model performance
In order to choose the best performing model, we shall compare each of the models on the held-out test dataset.
```python
# Prepare the test data for prediction
test_features, test_label = return_features_and_label(strat_test_set)
test_features_prepared = data_pipeline(test_features)
test_prepared = test_features_prepared[train_prepared_columns]
```
Model comparison
To compare the performance of different models we create a table with different metrics.
```python
f1_score_list = []
accuracy_list = []
precision_list = []
recall_list = []
roc_auc_list = []
model_name_list = []

# Iterate through the tuned models to calculate performance metrics
for name, params, score, model in clfs_best_config:
    pred_label = model.predict(test_prepared)  # Predict outcome.
    f1_score_list.append(f1_score(test_label, pred_label))  # F1 score.
    accuracy_list.append(accuracy_score(test_label, pred_label))  # Accuracy score.
    precision_list.append(precision_score(test_label, pred_label))  # Precision score.
    recall_list.append(recall_score(test_label, pred_label))  # Recall score.
    roc_auc_list.append(roc_auc_score(test_label,
                                      model.predict_proba(test_prepared)[:, 1]))  # ROC AUC from predicted probabilities.
    model_name_list.append(name)

# Summarize the metrics in a pandas dataframe
pd.DataFrame(
    {'Model': model_name_list,
     'F1 Score': f1_score_list,
     'Accuracy': accuracy_list,
     'Precision': precision_list,
     'Recall': recall_list,
     'Roc_Auc': roc_auc_list
     },
    columns=['Model', 'F1 Score', 'Precision', 'Recall', 'Accuracy', 'Roc_Auc']
)
```
Graphical comparison
For a graphical comparison of model performance we use ROC curves, which plot the True Positive Rate (TPR), also known as recall, against the False Positive Rate (FPR).
```python
# A function that plots a single ROC curve
def roc_graph(test_label, pred_label, name):
    """Plots the ROC curve for one model."""
    fpr, tpr, thresholds = roc_curve(test_label, pred_label, pos_label=1)
    roc_auc = auc(fpr, tpr)
    plt.plot(fpr, tpr, lw=2, label='%s ROC (area = %0.2f)' % (name, roc_auc))

# Iterate through the models and draw the ROC curve for each one
plt.clf()
for name, _, _, model in clfs_best_config:
    pred_label = model.predict_proba(test_prepared)[:, 1]
    roc_graph(test_label, pred_label, name)

plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curves')
plt.legend(loc="lower right", fontsize='small')
plt.show()
```
Now that we have all these evaluation metrics, we can select a model based on which metrics we want our model to maximize or minimize.
6) Explaining the model
We use the Python package LIME to explain the model. LIME works on NumPy matrices, so we will move from pandas dataframes to NumPy arrays.
```python
import lime
import lime.lime_tabular
import sklearn
import pprint
```
The first thing we will do is to get the unique values from our label.
```python
class_names = strat_train_set["y"].unique()
```
We need a dataset with our top 5 features but with the categorical values still present. This lets LIME know how to display our features; e.g. using the colour-column example from earlier, it knows to display "yellow" whenever it sees that value's integer code.
```python
train = train_features[train_prepared_columns].values
```
LIME needs to know the index of each categorical column.
```python
num_cols = train_features._get_numeric_data().columns
cat_cols = list(set(train_features.columns) - set(num_cols))
categorical_features_index = [i for i, val in enumerate(train_prepared_columns) if val in cat_cols]
```
In addition, LIME requires a dictionary that maps each categorical column's index to that column's unique values.
```python
categorical_names = {}
for feature in categorical_features_index:
    # We still need to convert categorical variables to integers
    le = sklearn.preprocessing.LabelEncoder()
    le.fit(train[:, feature])
    train[:, feature] = le.transform(train[:, feature])
    categorical_names[feature] = le.classes_
```
Create a function that returns the probability that the model (in our case we chose the logistic regression model) assigns to each class.
```python
# clfs_best_config[0] is the LogisticRegression entry; its last element is the fitted estimator
predict_fn = lambda x: clfs_best_config[0][-1].predict_proba(x).astype(float)
```
Use the LIME package to configure an explainer object that can be used to explain predictions.
```python
explainer = lime.lime_tabular.LimeTabularExplainer(
    train,
    feature_names=train_prepared_columns,
    class_names=class_names,
    categorical_features=categorical_features_index,
    categorical_names=categorical_names,
    kernel_width=3)
```
When you would like to understand the prediction for an individual row, create an explanation instance and show the result.
```python
i = 106
exp = explainer.explain_instance(train[i], predict_fn)
pprint.pprint(exp.as_list())
fig = exp.as_pyplot_figure()
```
7) Train and Predict with Cloud AI Platform
We just saw how to train our models and make predictions locally. However, when we want more compute power or want to put a model in production serving thousands of requests, we can use Cloud AI Platform to perform these tasks.
Let us define some environment variables that AI Platform uses. Do not forget to replace all the variables in angle brackets (along with the brackets) with your own values.
```python
%env GCS_BUCKET=<GCS_BUCKET>
%env REGION=us-central1
%env LOCAL_DIRECTORY=./trainer/data
%env TRAINER_PACKAGE_PATH=./trainer
```
AI Platform needs a Python package containing our training code. We create a directory, move our code there, and add an __init__.py file, which marks the directory as a Python package (see the Python docs for more about this file).
```python
%%bash
mkdir trainer
touch trainer/__init__.py
```
In order for Cloud AI Platform to access the training data, we need to upload a training file to Google Cloud Storage (GCS). We convert our strat_train_set dataframe into a CSV file and upload it to GCS.
```python
strat_train_set.to_csv('train.csv', index=None)
!gsutil cp train.csv $GCS_BUCKET
```
This next cell might seem very long; however, most of the code is identical to earlier sections. We are simply combining the code we created previously into one file.
Before running this cell, substitute <BUCKET_NAME> with your GCS bucket name. Do not include the 'gs://' prefix.
```python
%%writefile trainer/task.py
import datetime
import os
import subprocess

import pandas as pd
import numpy as np
from google.cloud import storage
from sklearn.externals import joblib
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import FeatureUnion
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.metrics import make_scorer
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

# TODO: REPLACE '<BUCKET_NAME>' with your GCS bucket name
BUCKET_NAME = '<BUCKET_NAME>'

# [START download-data]
# Bucket holding the training data
bucket = storage.Client().bucket(BUCKET_NAME)
# Path to the data inside the bucket
blob = bucket.blob('train.csv')
# Download the data
blob.download_to_filename('train.csv')
# [END download-data]

# [START scikit-learn]
# Load the training dataset
with open('./train.csv', 'r') as train_data:
    df = pd.read_csv(train_data)


def return_features_and_label(df_tmp):
    # Get all the columns except the one named "y"
    X = df_tmp.drop("y", axis=1)
    Y = df_tmp["y"].copy()
    # Convert the label to an integer
    Y = LabelEncoder().fit_transform(Y)
    return X, Y


def data_pipeline(df_tmp):
    num_cols = df_tmp._get_numeric_data().columns
    cat_cols = list(set(df_tmp.columns) - set(num_cols))
    # Normalize numeric data
    df_tmp[num_cols] = StandardScaler().fit_transform(df_tmp[num_cols])
    # Convert categorical variables to integers
    df_tmp[cat_cols] = df_tmp[cat_cols].apply(LabelEncoder().fit_transform)
    return df_tmp


def create_classifiers():
    log_params = [{'penalty': ['l1', 'l2'], 'C': np.logspace(0, 4, 10)}]
    knn_params = [{'n_neighbors': [3, 4, 5]}]
    svc_params = [{'kernel': ['linear', 'rbf'], 'probability': [True]}]
    tree_params = [{'criterion': ['gini', 'entropy']}]
    forest_params = {'n_estimators': [1, 5, 10]}
    mlp_params = {'activation': ['identity', 'logistic', 'tanh', 'relu']}
    ada_params = {'n_estimators': [1, 5, 10]}
    classifiers = [
        ['LogisticRegression', LogisticRegression(random_state=42), log_params],
        ['KNeighborsClassifier', KNeighborsClassifier(), knn_params],
        ['SVC', SVC(random_state=42), svc_params],
        ['DecisionTreeClassifier', DecisionTreeClassifier(random_state=42), tree_params],
        ['RandomForestClassifier', RandomForestClassifier(random_state=42), forest_params],
        ['MLPClassifier', MLPClassifier(random_state=42), mlp_params],
        ['AdaBoostClassifier', AdaBoostClassifier(random_state=42), ada_params],
    ]
    return classifiers


def grid_search(model, parameters, name, X, y):
    clf = GridSearchCV(model, parameters, cv=3, refit=True,
                       scoring='f1', verbose=0, n_jobs=4)
    clf.fit(X, y)
    best_estimator = clf.best_estimator_
    return [name, clf.best_score_, best_estimator]


def best_configuration(classifiers, training_values, training_labels):
    clfs_best_config = []
    best_clf = None
    best_score = 0
    for (name, model, parameters) in classifiers:
        clfs_best_config.append(grid_search(model, parameters, name,
                                            training_values, training_labels))
    for name, quality, clf in clfs_best_config:
        if quality > best_score:
            best_score = quality
            best_clf = clf
    return best_clf


train_features, train_label = return_features_and_label(df)
train_features_prepared = data_pipeline(train_features)
predictors = train_features_prepared.columns

# Perform feature selection
selector = SelectKBest(f_classif, k=5)
selector.fit(train_features_prepared[predictors], train_label)
train_prepared_columns = [col for (selected, col) in zip(selector.get_support(), predictors) if selected]
train_features_prepared = train_features_prepared[train_prepared_columns]

x = train_features_prepared.values
y = train_label
classifiers = create_classifiers()
clf = best_configuration(classifiers, x[:100], y[:100])
# [END scikit-learn]

# [START export-to-gcs]
# Export the model to a file
model = 'model.joblib'
joblib.dump(clf, model)

# Upload the model to GCS
bucket = storage.Client().bucket(BUCKET_NAME)
blob = bucket.blob('{}/{}'.format(
    datetime.datetime.now().strftime('model_%Y%m%d_%H%M%S'),
    model))
blob.upload_from_filename(model)
# [END export-to-gcs]
```
To actually run the trainer/task.py file, we need to pass some parameters so that AI Platform knows how to set up the environment and run the job successfully.
```python
%%bash
JOBNAME=banking_$(date -u +%y%m%d_%H%M%S)
echo $JOBNAME
gcloud ai-platform jobs submit training model_training_$JOBNAME \
    --job-dir $GCS_BUCKET/$JOBNAME/output \
    --package-path trainer \
    --module-name trainer.task \
    --region $REGION \
    --runtime-version=1.9 \
    --python-version=3.5 \
    --scale-tier BASIC
```
The above cell submits a job to AI Platform, which you can view in the Google Cloud Console under AI Platform > Jobs (or by searching for "AI Platform" in the search bar). ONLY run the cells below after your job has completed successfully (it should take approximately 8 minutes).
Now that we have trained our model and it is saved in GCS, we can perform prediction. There are two options available for prediction:
* Command line
* Python
```python
test_features_prepared = data_pipeline(test_features)
test_prepared = test_features_prepared[train_prepared_columns]
# .values.tolist() replaces the deprecated .as_matrix().tolist()
test = test_prepared.values.tolist()
```
Next, find the model directory in your GCS bucket that contains the model created in the previous steps.
```python
!gsutil ls $GCS_BUCKET
```
Just like training on AI Platform, we set some environment variables for the command-line steps. Note that <GCS_BUCKET> is the name of the GCS bucket you set earlier. MODEL_DIRECTORY is inside the GCS bucket and has the form model_YYYYMMDD_HHMMSS (e.g. model_20190114_134228).
```python
%env VERSION_NAME=v1
%env MODEL_NAME=cmle_model
%env JSON_INSTANCE=input.json
%env MODEL_DIR=gs://<GCS_BUCKET>/MODEL_DIRECTORY
%env FRAMEWORK=SCIKIT_LEARN
```
Create a model resource to hold your model versions, and then create the version itself.
```python
! gcloud ai-platform models create $MODEL_NAME --regions=us-central1
! gcloud ai-platform versions create $VERSION_NAME \
    --model $MODEL_NAME --origin $MODEL_DIR \
    --runtime-version 1.9 --framework $FRAMEWORK \
    --python-version 3.5
```
For prediction, we will write one test instance to a JSON file and upload it to our GCS bucket.
```python
import json

with open('input.json', 'w') as outfile:
    json.dump(test[0], outfile)

!gsutil cp input.json $GCS_BUCKET
```
We are now ready to submit our file to get a prediction.
```python
! gcloud ai-platform predict --model $MODEL_NAME \
    --version $VERSION_NAME \
    --json-instances $JSON_INSTANCE
```
We can also use Python to perform predictions. The cell below shows a simple way to get predictions using the Google API client library.
|
import googleapiclient.discovery
import os
import pandas as pd
PROJECT_ID = os.environ['GOOGLE_CLOUD_PROJECT']
VERSION_NAME = os.environ['VERSION_NAME']
MODEL_NAME = os.environ['MODEL_NAME']
# Create our AI Platform service
service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}'.format(PROJECT_ID, MODEL_NAME)
name += '/versions/{}'.format(VERSION_NAME)
# Iterate over the first 10 rows of our test dataset
results = []
for data in test[:10]:
# Send a prediction request
responses = service.projects().predict(
name=name,
body={"instances": [data]}
).execute()
if 'error' in responses:
        raise RuntimeError(responses['error'])
else:
results.extend(responses['predictions'])
for i, response in enumerate(results):
print('Prediction: {}\tLabel: {}'.format(response, test_label[i]))
|
examples/cloudml-bank-marketing/bank_marketing_classification_model.ipynb
|
GoogleCloudPlatform/professional-services
|
apache-2.0
|
Programming: what is it, exactly?
The computer is one great big calculator that lets us do any kind of computation we need in Physics (and in life too), as long as we know how to tell the machine which computations to perform.
To do calculations, the computer has to store the numbers we need and then operate on them. Our numeric values are kept in memory locations, and those locations have a name, a label we can use to refer to them and ask the computer to operate on them, modify them, and so on. That name is assigned to each memory location, at least in Python, with the symbol =, which from now on means "assignment".
But we will not store only numeric values. Besides there being different numeric types, as we will see shortly, we can store other kinds of data, such as text (strings) and lists, among many others. The types of values we can store differ in the memory they occupy and in the operations we can perform on them.
Let's look at a couple of examples.
|
x = 5
y = 'Hola mundo!'
z = [1,2,3]
|
python/Extras/Fisica2/Notebook para F2.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
Here we stored, in a memory location we named "x", an integer value, 5; in another memory location, which we called "y", we stored the text "Hola mundo!". In Python, quotation marks indicate that what they enclose is text. x is not text, so Python will treat it as a variable to manipulate. "z" is the name of the memory location holding a list with 3 integer elements.
We can do things with this information. Python is an interpreted language (unlike others such as Java or C++), which means that as soon as we ask Python for something, it executes it. So we can ask it, for example, to print the contents of y, or the type of value x is (an integer), among other things.
|
print y
print type(x)
print type(y), type(z), len(z)
|
python/Extras/Fisica2/Notebook para F2.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
We will use the type() function a lot to understand what kind of variables we are working with. type() is a built-in Python function: it takes a variable as its argument (what goes between the parentheses) and immediately returns its type.
For integer and float variables we can perform the usual, expected mathematical operations. Let's look at the compatibility between these types.
|
a = 5
b = 7
c = 5.0
d = 7.0
print a+b, b+c, a*d, a/b, a/d, c**2
|
python/Extras/Fisica2/Notebook para F2.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
Lists
Lists are chains of data of any type, joined together in a single variable, each element with a position inside the list by which we can refer to it. In Python, lists are numbered starting from 0.
Lists also support a number of operations of their own.
Tuples are different. Lists are editable, but tuples are not. This matters when, over the course of developing a program in which certain things must not change, we want to avoid accidentally editing values that are fundamental to the problem we are solving.
|
lista1 = [1, 2, 'saraza']
print lista1, type(lista1)
print lista1[1], type(lista1[1])
print lista1[2], type(lista1[2])
print lista1[-1]
lista2 = [2,3,4]
lista3 = [5,6,7]
print lista2+lista3
print lista2[2]+lista3[0]
tupla1 = (1,2,3)
lista4 = [1,2,3]
lista4[2] = 0
print lista4
#tupla1[0] = 0
print tupla1
|
python/Extras/Fisica2/Notebook para F2.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
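As a complement to the commented-out line in the cell above, here is a small sketch (an addition, not part of the original notebook; Python 3 syntax) showing what actually happens when you try to modify a tuple:

```python
tupla = (1, 2, 3)
try:
    tupla[0] = 0  # tuples are immutable: this assignment raises TypeError
except TypeError as e:
    print("Tuples cannot be modified:", e)
```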
There are very convenient ways to build lists. Here is one we will use a lot: the range function.
|
listilla = range(10)
print listilla, type(listilla)
|
python/Extras/Fisica2/Notebook para F2.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
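A note added here (the original notebook targets Python 2): in Python 3, range returns a lazy range object rather than a list, so you wrap it in list() to see the elements:

```python
listilla = list(range(10))  # in Python 3, wrap range() in list() to materialize it
print(listilla, type(listilla))
```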
Booleans
This type of variable has only two possible values: 1 and 0, or True and False. We will use them essentially so that Python can recognize relationships between numbers.
|
print 5>4
print 4>5
print 4==5  # mathematical equality is written with a double ==
print 4!=5  # mathematical inequality is written with !=
print type(4>5)
|
python/Extras/Fisica2/Notebook para F2.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
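Comparisons can also be combined with the logical operators and, or and not, which likewise yield booleans; a quick sketch added here (not in the original notebook):

```python
a, b = 4, 5
print(a < b and b < 10)  # True: both comparisons hold
print(a > b or b == 5)   # True: the second comparison holds
print(not a == b)        # True: a and b are different
```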
Libraries
The basic operations of addition, subtraction, multiplication and division are all that a language like Python can do "natively". A power or a sine is non-linear algebra, and to compute it you would have to invent an algorithm (a series of steps) to calculate, for example, sin($\pi$). But someone already did: they thought it through, wrote it in the Python language, and now we can all use that algorithm without thinking about it. We only need to tell our Python interpreter where that algorithm is stored. This ability to reuse other people's algorithms is fundamental in programming, because it reduces our problem to understanding how to call these ready-made algorithms instead of reinventing them every time.
So let's import a library called math that will extend our mathematical possibilities.
|
import math as m # import a library and nickname it m for convenience
r1 = m.pow(2,4)
r2 = m.cos(m.pi)
r3 = m.log(100,10)
r4 = m.log(m.e)
print r1, r2, r3, r4
|
python/Extras/Fisica2/Notebook para F2.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
To understand how these functions work, it is important to consult their documentation. The documentation of this particular library can be found at
https://docs.python.org/2/library/math.html
Functions
But if we want to define our own way of computing something, or group a series of instructions under a single name, we can define our own functions, taking as many arguments as we want.
We will use lambda functions mostly for mathematical functions, although they have other uses too. Let's define the polynomial $f(x) = x^2 - 5x + 6$, which has roots $x = 3$ and $x = 2$.
|
f = lambda x: x**2 - 5*x + 6
print f(3), f(2), f(0)
|
python/Extras/Fisica2/Notebook para F2.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
The other, more general functions are called def functions, and they look like this.
|
def promedio(a,b,c):
    N = a + b + c  # it is important that the whole function body be indented
    N = N/3.0
    return N
mipromedio = promedio(5,5,7)  # here we break the indentation
print mipromedio
|
python/Extras/Fisica2/Notebook para F2.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
Flow control: loops and conditionals (if and for, to friends)
If a program is, at bottom, a series of algorithms the computer must follow, then a fundamental programming skill is knowing how to ask the computer to perform some operations when a condition is met and different ones when it is not. This will let us write much more complex programs. Let's see how to use an if.
|
def ejemplo(parametro):
    if parametro > 0:  # an if also opens a new indented block
        print 'Your parameter is', parametro, 'and it is greater than zero'
        print 'Thanks'
    else:  # the else opens another indented block
        print 'Your parameter is', parametro, 'and it is less than or equal to zero'
        print 'Thanks'
    print 'Come back soon'
    print ' '
ejemplo(5)
ejemplo(-1)
|
python/Extras/Fisica2/Notebook para F2.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
To make Python repeat the same action n times, we use the for structure. At each step we can use the "iteration number" as a variable, which will be useful in most cases.
|
nuevalista = ['nada',1,2,'tres', 'cuatro', 7-2, 2*3, 7/1, 2**3, 3**2]
for i in range(10):       # i is a variable we create in the for; it takes the values of the
    print nuevalista[i]   # list generated by range(10)
|
python/Extras/Fisica2/Notebook para F2.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
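When you need both the iteration index and the element itself, the built-in enumerate gives you both at once; a small sketch added here (not part of the original notebook):

```python
valores = ['nada', 1, 2, 'tres']
for i, elem in enumerate(valores):  # enumerate yields (index, element) pairs
    print(i, elem)
```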
The while structure is not often recommended in Python, but it is important to know it exists: it repeats a step while a condition holds. It is like a for mixed with an if.
|
i = 1
while i < 10:  # beware of while conditions that always hold: they produce infinite loops.
    i = i+1
    print i
|
python/Extras/Fisica2/Notebook para F2.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
A little linear algebra
For this first part we will use numpy, a very useful library for matrix computation. From it we get the ndarray and matrix data types and all the operations we will use. Let's see how to define row vectors and matrices, and how to access a given component of a given vector or matrix.
|
from numpy import *  # bring in array, matrix, etc. (import assumed; it may already be loaded earlier in the notebook)
pos = array([1,4,3])
mat = matrix([[1,2,3],
              [4,5,6],
              [7,8,9]])
x = pos[0]  # in Python, vector positions start counting at 0
y = pos[1]
z = pos[2]
print 'sum of the components:', x+y+z, sum(pos)
c = (mat[0,0] + mat[0,1])* mat[1,1]
print 'operation with the matrix elements:' , c
# We will get used to using arrays for matrices
mat = array(mat)  # the array() function turns mat (which was a matrix) into an ndarray
print mat, type(mat)
|
python/Extras/Fisica2/Notebook para F2.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
Special matrices can be built "by hand" as an exercise, or found among the functions Python offers.
|
identidad = zeros([3,3])
for i in range(3):
    for j in range(3):
        if i == j:
            identidad[i,j] = 1
print 'identity built by hand: '
print identidad
identidad2 = identity(3)
print 'identity from the Python function: '
print identidad2
unos = ones([3,4])  # matrix of ones
print 'matrix of ones: '
print unos
ceros = zeros([4,3])  # matrix of zeros
print 'matrix of zeros: '
print ceros
|
python/Extras/Fisica2/Notebook para F2.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
Among the operations available for vectors and matrices are the ones we would expect: dot product, matrix-vector product, transposition.
|
print 'we had the matrix: '
print mat
trans = transpose(mat)
print 'the transpose is: '
print trans
print 'vector pos=', pos
print 'dot product: ' , dot(pos, pos)  # dot product of vectors
print 'component-wise product:' , pos*pos  # element-wise product; more useful with functions
A = array([[1,2],[0,1]])  # another way to define matrices directly as arrays, without converting
x = array([1,2])
print 'A = ',A,' ', 'x= ', x
print 'Ax=', dot(A, x)
|
python/Extras/Fisica2/Notebook para F2.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
And we also have those functions we always wished for since the CBC: matrix inverse, determinant computation, solving systems of equations, finding eigenvalues and eigenvectors. For this we use an extra module called linalg.
|
from numpy.linalg import inv, det, solve, eig  # (assumed import of the linalg routines used below)
print 'We can find the inverse matrix: '
print inv(A)
print 'its determinant:', det(A)
print 'Solve a system of the form Ax=b'
b = x
print 'x =' ,solve(A, b)
print 'And find eigenvalues and eigenvectors'
autoval, autovec = eig(A)
print 'The eigenvalues: ', autoval
print 'The eigenvectors: ', autovec
|
python/Extras/Fisica2/Notebook para F2.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
Functions and plots
The next interesting thing Python offers is its plotting facilities. The matplotlib library will help us here. First we define a vector to serve as the domain, then a vector with the image of some function, and then we make the plot. Some of matplotlib's options for presenting a plot are shown here, but in the documentation you will find countless tools for this.
|
# Plotting
from pylab import *  # (assumed import providing linspace, plot, title, etc.)
x = linspace(-10, 10, 200)  # linspace generates a vector with equally spaced components.
y = x**2                    # the image vector is as long as x
plot(x,y, '-', color = 'red', label = 'Curve x**2')  # try 'r', 'g', '*' among others
title('My first plot')
xlabel('x axis')
ylabel('y axis')
#xlim(-5,5)
#ylim(0,4)
legend(loc='best')
grid(True)
f = lambda x,n: sin(n*pi*x)  # define a sequence of trigonometric functions
y = f(x,0)                   # image vector of the function f(x) = 0
for i in range(5):           # a simple case: sum the first 5 functions of the sequence
    y = y + f(x,i)           # add to y the images of the next functions in the sequence
plot(x,y, label = 'Curve')
title('Sum of sines')
xlabel('Domain')
ylabel('Sum of the first 5 sines')
legend(loc='best')
# We could do numerical differentiation
# Would integration be useful?
# Is a Fourier transform going too far?
|
python/Extras/Fisica2/Notebook para F2.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
2. Set all graphics from matplotlib to display inline
|
import matplotlib.pyplot as plt
%matplotlib inline
|
homework07/Homework07-BuildingPandas-Radhika.ipynb
|
radhikapc/foundation-homework
|
mit
|
4. Display the names of the columns in the csv
|
df.columns
df.head()
|
homework07/Homework07-BuildingPandas-Radhika.ipynb
|
radhikapc/foundation-homework
|
mit
|
5. Display the first 3 animals
|
df['animal'].head(3)
|
homework07/Homework07-BuildingPandas-Radhika.ipynb
|
radhikapc/foundation-homework
|
mit
|
6. Sort the animals to see the 3 longest animals.
|
df.sort_values('length', ascending=False).head(3)
|
homework07/Homework07-BuildingPandas-Radhika.ipynb
|
radhikapc/foundation-homework
|
mit
|
7. What are the counts of the different values of the "animal" column? a.k.a. how many cats and how many dogs.
|
df['animal'].value_counts()
|
homework07/Homework07-BuildingPandas-Radhika.ipynb
|
radhikapc/foundation-homework
|
mit
|
8. Only select the dogs
|
dogs = df[df['animal'] == "dog"]
dogs
|
homework07/Homework07-BuildingPandas-Radhika.ipynb
|
radhikapc/foundation-homework
|
mit
|
9. Display all of the animals that are greater than 40 cm.
|
animal_larger_40 = df['length'] > 40
animal_larger_40
df[animal_larger_40]
|
homework07/Homework07-BuildingPandas-Radhika.ipynb
|
radhikapc/foundation-homework
|
mit
|
10. 'length' is the animal's length in cm. Create a new column called inches that is the length in inches.
|
df['length'].head()
inch = df['length'] * 0.393701
inch
inch = df['length'] / 2.54
inch
df['length_inch'] = inch
df.head()
|
homework07/Homework07-BuildingPandas-Radhika.ipynb
|
radhikapc/foundation-homework
|
mit
|
11. Save the cats to a separate variable called "cats." Save the dogs to a separate variable called "dogs."
|
dogs = df[df['animal'] == "dog"]
dogs
cats = df[df['animal'] == "cat"]
cats
|
homework07/Homework07-BuildingPandas-Radhika.ipynb
|
radhikapc/foundation-homework
|
mit
|
13. Display all of the animals that are cats and above 12 inches long. First do it using the "cats" variable, then do it using your normal dataframe.
|
cat = df['animal'] == "cat"
twelve_inch = df['length_inch'] > 12
df[cat & twelve_inch].head()
df[(df['animal'] == "cat") & (df['length_inch'] > 12)].head()
|
homework07/Homework07-BuildingPandas-Radhika.ipynb
|
radhikapc/foundation-homework
|
mit
|
13. What's the mean length of a cat?
|
df[cat].describe()
|
homework07/Homework07-BuildingPandas-Radhika.ipynb
|
radhikapc/foundation-homework
|
mit
|
14. What's the mean length of a dog
|
dog = df['animal'] == "dog"
df[dog].describe()
|
homework07/Homework07-BuildingPandas-Radhika.ipynb
|
radhikapc/foundation-homework
|
mit
|
15. Use groupby to accomplish both of the above tasks at once.
|
df.groupby('animal').describe()
|
homework07/Homework07-BuildingPandas-Radhika.ipynb
|
radhikapc/foundation-homework
|
mit
|
16. Make a histogram of the length of dogs.
|
dogs.hist()
|
homework07/Homework07-BuildingPandas-Radhika.ipynb
|
radhikapc/foundation-homework
|
mit
|
17 Change your graphing style to be something else (anything else!)
|
dogs.plot(kind='line')
|
homework07/Homework07-BuildingPandas-Radhika.ipynb
|
radhikapc/foundation-homework
|
mit
|
18. Make a horizontal bar graph of the length of the animals, with their name as the label
|
# plot from the DataFrame, not the Series, so the animal names can label the bars
df.plot(kind='barh', x='name', y='length')
|
homework07/Homework07-BuildingPandas-Radhika.ipynb
|
radhikapc/foundation-homework
|
mit
|
19. Make a sorted horizontal bar graph of the cats, with the larger cats on top.
|
cats = df[df['animal'] == "cat"]
cats
sort_cat = cats.sort_values('length')  # ascending sort: barh draws the last (largest) row at the top
sort_cat
sort_cat.plot(kind='barh', x='name', y='length')
|
homework07/Homework07-BuildingPandas-Radhika.ipynb
|
radhikapc/foundation-homework
|
mit
|
Load coal data
Data is from:
McGlade, C & Ekins, P. The geographical distribution of fossil fuels unused when limiting global warming to 2 °C. Nature 517, 187–190. (2015) doi:10.1038/nature14016
Coal data from Figure 1c.
|
fn = 'nature14016-f1.xlsx'
sn = 'Coal data'
coal_df = pd.read_excel(fn, sn)
|
Supply Curve example.ipynb
|
gschivley/Supply-Curve
|
mit
|
Fortunately the Cost values are already sorted in ascending order. Cost will be on the y-axis, and cumulative recoverable resources will be on the x-axis.
|
coal_df.head()
coal_df.tail()
names = coal_df['Resource'].values
amount = coal_df['Quantity (ZJ)'].values
cost = coal_df['Cost (2010$/GJ)'].values
|
Supply Curve example.ipynb
|
gschivley/Supply-Curve
|
mit
|
Create a set of names to use for assigning colors and creating the legend
I'm not being picky about the order of colors.
|
name_set = set(names)
name_set
color_dict = {}
for i, area in enumerate(name_set):
color_dict[area] = i #Assigning index position as value to resource name keys
color_dict
sns.color_palette('deep', n_colors=4, desat=.8)
sns.palplot(sns.color_palette('deep', n_colors=4, desat=.8))
|
Supply Curve example.ipynb
|
gschivley/Supply-Curve
|
mit
|
Define a function that returns the integer color choice based on the region name
Use the function color_match to create a Series with rgb colors that will be used for each box in the figure. Do this using the map operation, which applies a function to each element in a Pandas Series.
|
def color_match(name):
return sns.color_palette('deep', n_colors=4, desat=.8)[color_dict[name]]
|
Supply Curve example.ipynb
|
gschivley/Supply-Curve
|
mit
|
color has rgb values for each resource
|
color = coal_df['Resource'].map(color_match)
color.head()
|
Supply Curve example.ipynb
|
gschivley/Supply-Curve
|
mit
|
Define the edges of the patch objects that will be drawn on the plot
|
# get the corners of the rectangles for the histogram
left = np.cumsum(np.insert(amount, 0, 0))
right = np.cumsum(np.append(amount, .01))
bottom = np.zeros(len(left))
top = np.append(cost, 0)
|
Supply Curve example.ipynb
|
gschivley/Supply-Curve
|
mit
|
Make the figure (coal)
|
sns.set_style('whitegrid')
fig, ax = plt.subplots(figsize=(10,5))
# we need a (numrects x numsides x 2) numpy array for the path helper
# function to build a compound path
for i, name in enumerate(names):
XY = np.array([[left[i:i+1], left[i:i+1], right[i:i+1], right[i:i+1]],
[bottom[i:i+1], top[i:i+1], top[i:i+1], bottom[i:i+1]]]).T
# get the Path object
barpath = path.Path.make_compound_path_from_polys(XY)
# make a patch out of it (a patch is the shape drawn on the plot)
patch = patches.PathPatch(barpath, facecolor=color[i], ec='0.2')
ax.add_patch(patch)
#Create patch elements for a custom legend
#The legend function expects multiple patch elements as a list
patch = [patches.Patch(color=sns.color_palette('deep', 4, 0.8)[color_dict[i]], label=i)
for i in color_dict]
# Axis labels/limits, remove horizontal gridlines, etc
plt.ylabel('Cost (2010$/GJ)', size=14)
plt.xlabel('Quantity (ZJ)', size=14)
ax.set_xlim(left[0], right[-1])
ax.set_ylim(bottom.min(), 12)
ax.yaxis.grid(False)
ax.xaxis.grid(False)
#remove top and right spines (box lines around figure)
sns.despine()
#Add the custom legend
plt.legend(handles=patch, loc=2, fontsize=12)
plt.savefig('Example Supply Curve (coal).png')
|
Supply Curve example.ipynb
|
gschivley/Supply-Curve
|
mit
|
Load oil data
Data is from:
McGlade, C & Ekins, P. The geographical distribution of fossil fuels unused when limiting global warming to 2 °C. Nature 517, 187–190. (2015) doi:10.1038/nature14016
I'm using data from Figure 1a.
|
fn = 'nature14016-f1.xlsx'
sn = 'Oil data'
df = pd.read_excel(fn, sn)
|
Supply Curve example.ipynb
|
gschivley/Supply-Curve
|
mit
|
Fortunately the Cost values are already sorted in ascending order. Cost will be on the y-axis, and cumulative recoverable resources will be on the x-axis.
|
df.head()
df.tail()
|
Supply Curve example.ipynb
|
gschivley/Supply-Curve
|
mit
|
Create arrays of values with easy to type names
|
names = df['Resource'].values
amount = df['Quantity (Gb)'].values
cost = df['Cost (2010$/bbl)'].values
|
Supply Curve example.ipynb
|
gschivley/Supply-Curve
|
mit
|
Create a set of names to use for assigning colors and creating the legend
I'm not being picky about the order of colors.
|
name_set = set(names)
name_set
color_dict = {}
for i, area in enumerate(name_set):
color_dict[area] = i #Assigning index position as value to resource name keys
color_dict
sns.palplot(Paired_11.mpl_colors)
|
Supply Curve example.ipynb
|
gschivley/Supply-Curve
|
mit
|
Define a function that returns the integer color choice based on the region name
Use the function color_match to create a Series with rgb colors that will be used for each box in the figure. Do this using the map operation, which applies a function to each element in a Pandas Series.
|
def color_match(name):
return Paired_11.mpl_colors[color_dict[name]]
def color_match(name):
return sns.husl_palette(n_colors=11, h=0.1, s=0.9, l=0.6)[color_dict[name]]
color_match('NGL')
color = df['Resource'].map(color_match)
|
Supply Curve example.ipynb
|
gschivley/Supply-Curve
|
mit
|
Make the figure
|
sns.set_style('whitegrid')
fig, ax = plt.subplots(figsize=(10,5))
# we need a (numrects x numsides x 2) numpy array for the path helper
# function to build a compound path
for i, name in enumerate(names):
XY = np.array([[left[i:i+1], left[i:i+1], right[i:i+1], right[i:i+1]],
[bottom[i:i+1], top[i:i+1], top[i:i+1], bottom[i:i+1]]]).T
# get the Path object
barpath = path.Path.make_compound_path_from_polys(XY)
# make a patch out of it (a patch is the shape drawn on the plot)
patch = patches.PathPatch(barpath, facecolor=color[i], ec='0.8')
ax.add_patch(patch)
#Create patch elements for a custom legend
#The legend function expects multiple patch elements as a list
patch = []
for i in color_dict:
patch.append(patches.Patch(color=Paired_11.mpl_colors[color_dict[i]],
label=i))
# Axis labels/limits, remove horizontal gridlines, etc
plt.ylabel('Cost (2010$/bbl)', size=14)
plt.xlabel('Quantity (Gb)', size=14)
ax.set_xlim(left[0], right[-2])
ax.set_ylim(bottom.min(), 120)
ax.yaxis.grid(False)
ax.xaxis.grid(False)
#remove top and right spines (box lines around figure)
sns.despine()
#Add the custom legend
plt.legend(handles=patch, loc=2, fontsize=12,
ncol=2)
plt.savefig('Example Supply Curve.png')
|
Supply Curve example.ipynb
|
gschivley/Supply-Curve
|
mit
|
A three-dimensional list can be made in a similar way.
|
lst_3d = [
[[1, 1, 2], [3, 5], [8, 13]],
[[21, 34], [55]]
]
|
crash-course/2d-arrays.ipynb
|
citxx/sis-python
|
mit
|
Most often we use two-dimensional lists with an equal number of elements in each row. Such a two-dimensional list can be called a matrix.
|
matrix = [
[0, 0, 1, 5],
[1, 0, 2, 0],
[0, 3, 1, 0],
]
|
crash-course/2d-arrays.ipynb
|
citxx/sis-python
|
mit
|
Accessing and modifying elements
An element of a matrix can be read or modified like this:
|
matrix = [
[0, 0, 1, 5],
[1, 0, 2, 0],
[0, 3, 1, 0],
]
print(matrix[1][2])
matrix[0][1] = 9
for row in matrix:
print(*row)
|
crash-course/2d-arrays.ipynb
|
citxx/sis-python
|
mit
|
Creating an n x m matrix
Now suppose we need to fill a matrix with n rows and m columns with zeros. An intuitive-looking approach is this:
|
n, m = 3, 4
matrix = [[0] * m] * n
|
crash-course/2d-arrays.ipynb
|
citxx/sis-python
|
mit
|
But it is not quite that simple...
|
n, m = 3, 4
matrix = [[0] * m] * n
# Change the third element in the second row
matrix[1][2] = 1
for row in matrix:
print(*row)
|
crash-course/2d-arrays.ipynb
|
citxx/sis-python
|
mit
|
Not only the second row changed; all the other rows did too. This happened because, after repeating the list, every row points to the same underlying array.
To avoid this, use a list comprehension.
|
n, m = 3, 4
matrix = [[0] * m for i in range(n)]
matrix[1][2] = 1
for row in matrix:
print(*row)
|
crash-course/2d-arrays.ipynb
|
citxx/sis-python
|
mit
|
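The difference between the two constructions can be checked directly with the is operator, which tests whether two names point to the same object (a sketch added here, not part of the original):

```python
n, m = 2, 3
bad = [[0] * m] * n                 # every row is the same inner list
good = [[0] * m for _ in range(n)]  # each row is a fresh list
print(bad[0] is bad[1])    # True
print(good[0] is good[1])  # False
```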
Reading a matrix
A matrix is usually given in this form:
3 4
1 2 3 0
5 7 9 2
1 2 1 1
The first two numbers are the number of rows and columns.
It can be read in like this:
|
n, m = map(int, input().split())
matrix = []
for i in range(n):
matrix.append(list(map(int, input().split())))
|
crash-course/2d-arrays.ipynb
|
citxx/sis-python
|
mit
|
Or more concisely with a list comprehension:
|
n, m = map(int, input().split())
matrix = [list(map(int, input().split())) for i in range(n)]
|
crash-course/2d-arrays.ipynb
|
citxx/sis-python
|
mit
|
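The same parsing logic can be tried without stdin by reading from a string; the sample input shown above is reused here (this cell is an added illustration, not from the original):

```python
data = """3 4
1 2 3 0
5 7 9 2
1 2 1 1"""
lines = data.splitlines()
n, m = map(int, lines[0].split())
# same comprehension as above, applied to the lines after the header
matrix = [list(map(int, lines[i + 1].split())) for i in range(n)]
print(matrix)
```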
Create a BayesianOptimization Object
Enter the target function to be maximized, its variable(s) and their corresponding ranges (see this example for a multi-variable case). A minimum of 2 initial guesses is necessary to kick-start the algorithm; these can be either random or user defined.
|
bo = BayesianOptimization(target, {'x': (-2, 10)})
|
examples/visualization.ipynb
|
ysasaki6023/NeuralNetworkStudy
|
mit
|
In this example we will use the Upper Confidence Bound (UCB) as our utility function. It has a free parameter
$\kappa$ which controls the balance between exploration and exploitation; we will set $\kappa=5$, which, in this case, makes the algorithm quite bold. Additionally, we will use the cubic correlation in our Gaussian Process.
|
gp_params = {'corr': 'cubic'}
bo.maximize(init_points=2, n_iter=0, acq='ucb', kappa=5, **gp_params)
|
examples/visualization.ipynb
|
ysasaki6023/NeuralNetworkStudy
|
mit
|
Plotting and visualizing the algorithm at each step
Lets first define a couple functions to make plotting easier
|
def posterior(bo, xmin=-2, xmax=10):
    bo.gp.fit(bo.X, bo.Y)
    mu, sigma2 = bo.gp.predict(np.linspace(xmin, xmax, 1000).reshape(-1, 1), eval_MSE=True)
    return mu, np.sqrt(sigma2)
def plot_gp(bo, x, y):
fig = plt.figure(figsize=(16, 10))
fig.suptitle('Gaussian Process and Utility Function After {} Steps'.format(len(bo.X)), fontdict={'size':30})
gs = gridspec.GridSpec(2, 1, height_ratios=[3, 1])
axis = plt.subplot(gs[0])
acq = plt.subplot(gs[1])
mu, sigma = posterior(bo)
axis.plot(x, y, linewidth=3, label='Target')
axis.plot(bo.X.flatten(), bo.Y, 'D', markersize=8, label=u'Observations', color='r')
axis.plot(x, mu, '--', color='k', label='Prediction')
axis.fill(np.concatenate([x, x[::-1]]),
np.concatenate([mu - 1.9600 * sigma, (mu + 1.9600 * sigma)[::-1]]),
alpha=.6, fc='c', ec='None', label='95% confidence interval')
axis.set_xlim((-2, 10))
axis.set_ylim((None, None))
axis.set_ylabel('f(x)', fontdict={'size':20})
axis.set_xlabel('x', fontdict={'size':20})
utility = bo.util.utility(x.reshape((-1, 1)), bo.gp, 0)
acq.plot(x, utility, label='Utility Function', color='purple')
acq.plot(x[np.argmax(utility)], np.max(utility), '*', markersize=15,
label=u'Next Best Guess', markerfacecolor='gold', markeredgecolor='k', markeredgewidth=1)
acq.set_xlim((-2, 10))
acq.set_ylim((0, np.max(utility) + 0.5))
acq.set_ylabel('Utility', fontdict={'size':20})
acq.set_xlabel('x', fontdict={'size':20})
axis.legend(loc=2, bbox_to_anchor=(1.01, 1), borderaxespad=0.)
acq.legend(loc=2, bbox_to_anchor=(1.01, 1), borderaxespad=0.)
|
examples/visualization.ipynb
|
ysasaki6023/NeuralNetworkStudy
|
mit
|