Building a decision tree classifier using the entropy criterion (C5.0)
model = DecisionTreeClassifier(criterion='entropy', max_depth=3)
model.fit(x_train, y_train)
MIT
Decision_tree_C5.O_CART.ipynb
anagha0397/Decision-Tree
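The entropy criterion selected above scores candidate splits by class impurity. A minimal sketch of the underlying computation (illustrative only, not scikit-learn's internal implementation):

```python
import math

def entropy(proportions):
    """Shannon entropy (base 2) of a list of class proportions."""
    return -sum(p * math.log2(p) for p in proportions if p > 0)

# A pure node has zero entropy; a 50/50 node is maximally impure (1 bit).
print(entropy([1.0]), entropy([0.5, 0.5]))
```

The tree greedily chooses the split that most reduces this quantity (the information gain), up to the `max_depth=3` limit set above.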
Plotting the decision tree
tree.plot_tree(model);
model.get_n_leaves()

# The default rendering is hard to read, so we re-plot with feature names,
# class names, and a larger figure size.
fn = ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm']
cn = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(4, 4), dpi=300)  # dpi sets the resolution
tree.plot_tree(model,
               feature_names=fn,
               class_names=cn,
               filled=True)  # filled=True colors each node by its majority class

# Predict with the fitted model on the test data
preds = model.predict(x_test)
pd.Series(preds).value_counts()
preds

# Cross-tabulate predictions against y_test to see which predictions are correct
crosstable = pd.crosstab(y_test, preds)
crosstable

# Final step: accuracy is the mean of exact matches between predictions and actuals
np.mean(preds == y_test)
print(classification_report(y_test, preds))  # argument order is (y_true, y_pred)
              precision    recall  f1-score   support

           0       1.00      1.00      1.00         8
           1       1.00      0.92      0.96        13
           2       0.90      1.00      0.95         9

    accuracy                           0.97        30
   macro avg       0.97      0.97      0.97        30
weighted avg       0.97      0.97      0.97        30
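The precision and recall columns in the report above follow directly from confusion-matrix counts; a minimal sketch with made-up counts (`tp`, `fp`, `fn` are hypothetical, not taken from the run above):

```python
# Hypothetical counts for one class: true positives, false positives, false negatives
tp, fp, fn = 12, 2, 1

precision = tp / (tp + fp)                          # fraction of predicted positives that are correct
recall = tp / (tp + fn)                             # fraction of actual positives that are found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of precision and recall
```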
Building a decision tree using the CART method (classifier model)
model_1 = DecisionTreeClassifier(criterion='gini', max_depth=3)
model_1.fit(x_train, y_train)
tree.plot_tree(model_1);

# Predict on the x_test data
preds = model_1.predict(x_test)
preds
pd.Series(preds).value_counts()

# Calculate the accuracy of the model against the actual values
np.mean(preds == y_test)
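The `'gini'` criterion used here replaces entropy with Gini impurity; a minimal sketch (illustrative, not scikit-learn's internals):

```python
def gini(proportions):
    """Gini impurity: the probability of misclassifying a random sample
    labeled according to the node's class distribution."""
    return 1 - sum(p ** 2 for p in proportions)

# A pure node has zero impurity; a 50/50 node has impurity 0.5.
print(gini([1.0]), gini([0.5, 0.5]))
```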
Decision tree regressor using CART
from sklearn.tree import DecisionTreeRegressor

# Rearrange the iris data so that Y is numeric: the first three columns become X
# and the fourth column becomes Y
X = iris.iloc[:, 0:3]
Y = iris.iloc[:, 3]
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=1)

model_reg = DecisionTreeRegressor()
model_reg.fit(X_train, Y_train)

preds1 = model_reg.predict(X_test)
preds1

# Cross-tabulate to inspect the correct and wrong matches
pd.crosstab(Y_test, preds1)

# score() predicts on X_test internally and compares against Y_test;
# for a regressor it returns the R^2 coefficient of determination
model_reg.score(X_test, Y_test)
Homework
import matplotlib.pyplot as plt
%matplotlib inline
import random
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from plotting import (overfittingDemo, plot_multiple_linear_regression,
                      overlay_simple_linear_model, plot_simple_residuals)
from scipy.optimize import curve_fit
MIT
Lectures/Lecture6/Intro to Machine Learning Homework.ipynb
alaymodi/Spring-2019-Career-Exploration-master
**Exercise 1:** What are the two "specialties" of machine learning? Pick one and, in your own words, explain what it means.

Your Answer Here

**Exercise 2:** What is the difference between a regression task and a classification task?

Your Answer Here

**Exercise 3:**
1. What is parametric fitting, in your understanding?
2. Given the data $x = 1,2,3,4,5$, $y_1 = 2,4,6,8,10$, $y_2 = 2,4,8,16,32$, what functions $f_1, f_2$ would you use to fit $y_1, y_2$? Why do you choose those?
3. Why is parametric fitting arguably not machine learning?

Your Answer Here

**Exercise 4:** Take a look at the following residual plots. Residuals can be helpful in assessing whether our model is overpredicting or underpredicting certain values. Assign the variable `bestplot` to the letter of the residual plot that indicates a good fit for a linear model.
bestplot = 'Put your letter answer between these quotes'
**Exercise 5:** Observe the following graphs. Assign each graph variable to one of the following strings: 'overfitting', 'underfitting', or 'bestfit'.
graph1 = "Put answer here"
graph2 = "Put answer here"
graph3 = "Put answer here"
**Exercise 6:** What are the 3 sets we split our initial data set into?

Your Answer Here

**Exercise 7:** Refer to the graphs below when answering the following questions (Exercises 7 and 8). As we increase the degree of our model, what happens to the training error, and what happens to the test error?

Your Answer Here

**Exercise 8:** What is the issue with just increasing the degree of our model to get the lowest training error possible?

Your Answer Here

**Exercise 9:** Find the gradient for the ridge loss. More concretely, when $L(\theta, \textbf{y}, \alpha)= \left(\frac{1}{n} \sum_{i = 1}^{n}(y_i - \theta)^2\right) + \frac{\alpha }{2}\sum_{i = 1}^{n}\theta ^2$, find $\frac{\partial}{\partial \theta} L(\theta, \textbf{y},\alpha)$. You can have a look at the class example; they are really similar.

Your Answer Here

**Exercise 10:** Following the last part of the exercise, you've already fitted your model; now let's test its performance. Make sure you check the code for the previous example we went through in class.

1. Copy what you had from the exercise here.
import pandas as pd

mpg = pd.read_csv("./mpg_category.csv", index_col="name")

# exercise part 1
mpg['Old?'] = ...

# exercise part 2
mpg_train, mpg_test = ..., ...

# exercise part 3
from sklearn.linear_model import LogisticRegression
softmax_reg = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10)
X = ...
Y = ...
softmax_reg.fit(X, Y)
2. Create the test data set and make predictions on it.
X_test = ...
Y_test = ...
pred = softmax_reg.predict(...)
3. Make the confusion matrix and explain how you interpret each cell. What do the different depths of blue mean? You can just run the cell below, assuming what you did above is correct; you only have to explain your understanding.
from sklearn.metrics import confusion_matrix

confusion_matrix = confusion_matrix(Y_test, pred)
X_label = ['old', 'new']

def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(X_label))
    plt.xticks(tick_marks, X_label, rotation=45)
    plt.yticks(tick_marks, X_label)
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

plot_confusion_matrix(confusion_matrix)
# confusion_matrix
Your Answer Here
# Be sure to hit save (File > Save and Checkpoint) or Ctrl/Command-S before you run this cell!
from submit import create_and_submit

create_and_submit(['Intro to Machine Learning Homework.ipynb'], verbose=True)
Parsed Intro to Machine Learning Homework.ipynb Enter your Berkeley email address: xinyiren@berkeley.edu Posting answers for Intro to Machine Learning Homework Your submission: {'exercise-1': 'Your Answer Here', 'exercise-1_output': None, 'exercise-2': 'Your Answer Here', 'exercise-2_output': None, 'exercise-3': 'Your Answer Here', 'exercise-3_output': None, 'exercise-4': "bestplot = 'Put your letter answer between these quotes'", 'exercise-4_output': None, 'exercise-5': 'graph1 = "Put answer here"\ngraph2 = "Put answer here"\ngraph3 = "Put answer here"', 'exercise-5_output': None, 'exercise-6': 'Your Answer Here', 'exercise-6_output': None, 'exercise-7': 'Your Answer Here', 'exercise-7_output': None, 'exercise-8': 'Your Answer Here', 'exercise-8_output': None, 'exercise-9': 'Your Answer Here', 'exercise-9_output': None, 'exercise-10-1': 'import pandas as pd\n\nmpg = pd.read_csv("./mpg_category.csv", index_col="name")\n\n#exercise part 1\nmpg[\'Old?\'] = ... \n\n#exercise part 2\nmpg_train, mpg_test = ..., ...\n\n#exercise part 3\nfrom sklearn.linear_model import LogisticRegression\nsoftmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=...)\nX = ...\nY = ...\nsoftmax_reg.fit(X, Y)', 'exercise-10-1_output': None, 'exercise-10-2': '2. create the test data set and make the prediction on test dataset', 'exercise-10-2_output': None, 'exercise-10-3': '3. Make the confusion matrix and tell me how you interpret each of the cell in the confusion matrix. What does different depth of blue means. You can just run the cell below, assumed what you did above is correct. You just have to answer your understanding.', 'exercise-10-3_output': None, 'exercise-10-4': 'Your Answer Here', 'exercise-10-4_output': None, 'email': 'xinyiren@berkeley.edu', 'sheet': 'Intro to Machine Learning Homework', 'timestamp': datetime.datetime(2019, 3, 18, 16, 46, 54, 7302)} Submitted!
Copyright 2020 Google LLC. Licensed under the Apache License, Version 2.0 (the "License");
#@title License header
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Apache-2.0
colab/resnet.ipynb
WindQAQ/iree
ResNet

[ResNet](https://arxiv.org/abs/1512.03385) is a deep neural network architecture for image recognition.

This notebook:
* Constructs a [ResNet50](https://www.tensorflow.org/api_docs/python/tf/keras/applications/ResNet50) model using `tf.keras`, with weights pretrained on the [ImageNet](http://www.image-net.org/) dataset
* Compiles that model with IREE
* Tests TensorFlow and IREE execution of the model on a sample image
#@title Imports and common setup
from pyiree import rt as ireert
from pyiree.tf import compiler as ireec
from pyiree.tf.support import tf_utils
import tensorflow as tf
from matplotlib import pyplot as plt

#@title Construct a pretrained ResNet model with ImageNet weights
tf.keras.backend.set_learning_phase(False)

# Static shape, including batch size (1).
# Can be dynamic once dynamic shape support is ready.
INPUT_SHAPE = [1, 224, 224, 3]

tf_model = tf.keras.applications.resnet50.ResNet50(
    weights="imagenet", include_top=True, input_shape=tuple(INPUT_SHAPE[1:]))

# Wrap the model in a tf.Module to compile it with IREE.
class ResNetModule(tf.Module):

  def __init__(self):
    super(ResNetModule, self).__init__()
    self.m = tf_model
    self.predict = tf.function(
        input_signature=[tf.TensorSpec(INPUT_SHAPE, tf.float32)])(tf_model.call)

#@markdown ### Backend Configuration
backend_choice = "iree_vmla (CPU)"  #@param [ "iree_vmla (CPU)", "iree_llvmjit (CPU)", "iree_vulkan (GPU/SwiftShader)" ]
backend_choice = backend_choice.split(" ")[0]
backend = tf_utils.BackendInfo(backend_choice)

#@title Compile ResNet with IREE
# This may take a few minutes.
iree_module = backend.compile(ResNetModule, ["predict"])

#@title Load a test image of a [labrador](https://commons.wikimedia.org/wiki/File:YellowLabradorLooking_new.jpg)
def load_image(path_to_image):
  image = tf.io.read_file(path_to_image)
  image = tf.image.decode_image(image, channels=3)
  image = tf.image.resize(image, (224, 224))
  image = image[tf.newaxis, :]
  return image

content_path = tf.keras.utils.get_file(
    'YellowLabradorLooking_new.jpg',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg')
content_image = load_image(content_path)

print("Test image:")
plt.imshow(content_image.numpy().reshape(224, 224, 3) / 255.0)
plt.axis("off")
plt.tight_layout()

#@title Model pre- and post-processing
input_data = tf.keras.applications.resnet50.preprocess_input(content_image)

def decode_result(result):
  return tf.keras.applications.resnet50.decode_predictions(result, top=3)[0]

#@title Run TF model
print("TF prediction:")
tf_result = tf_model.predict(input_data)
print(decode_result(tf_result))

#@title Run the model compiled with IREE
print("IREE prediction:")
iree_result = iree_module.predict(input_data)
print(decode_result(iree_result))
IREE prediction: [('n02091244', 'Ibizan_hound', 0.12879075), ('n02099712', 'Labrador_retriever', 0.1263297), ('n02091831', 'Saluki', 0.09625255)]
ART for TensorFlow v2 - Keras API

This notebook demonstrates applying ART to TensorFlow v2 using the Keras API. The code follows and extends the examples on www.tensorflow.org.
import warnings
warnings.filterwarnings('ignore')

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

import numpy as np
from matplotlib import pyplot as plt

from art.estimators.classification import KerasClassifier
from art.attacks.evasion import FastGradientMethod, CarliniLInfMethod

if tf.__version__[0] != '2':
    raise ImportError('This notebook requires TensorFlow v2.')
MIT
notebooks/art-for-tensorflow-v2-keras.ipynb
changx03/adversarial-robustness-toolbox
Load MNIST dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

x_test = x_test[0:100]
y_test = y_test[0:100]
TensorFlow with Keras API

Create a model using the Keras API. Here we use the Keras Sequential model and add a sequence of layers. Afterwards, the model is compiled with an optimizer, a loss function, and metrics.
model = tf.keras.models.Sequential([
    tf.keras.layers.InputLayer(input_shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy']);
Fit the model on training data.
model.fit(x_train, y_train, epochs=3);
Train on 60000 samples Epoch 1/3 60000/60000 [==============================] - 3s 46us/sample - loss: 0.2968 - accuracy: 0.9131 Epoch 2/3 60000/60000 [==============================] - 3s 46us/sample - loss: 0.1435 - accuracy: 0.9575 Epoch 3/3 60000/60000 [==============================] - 3s 46us/sample - loss: 0.1102 - accuracy: 0.9664
Evaluate model accuracy on test data.
loss_test, accuracy_test = model.evaluate(x_test, y_test)
print('Accuracy on test data: {:4.2f}%'.format(accuracy_test * 100))
Accuracy on test data: 100.00%
Create an ART Keras classifier for the TensorFlow Keras model.
classifier = KerasClassifier(model=model, clip_values=(0, 1))
Fast Gradient Sign Method attack

Create an ART Fast Gradient Sign Method attack.
attack_fgsm = FastGradientMethod(estimator=classifier, eps=0.3)
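FGSM perturbs each input by `eps` in the direction of the sign of the loss gradient. A framework-free numpy sketch of the update rule on a toy linear loss (an illustration only, not ART's implementation):

```python
import numpy as np

w = np.array([0.5, -2.0, 1.5])   # toy model: loss(x) = w . x, so d(loss)/dx = w
x = np.array([0.1, 0.2, 0.3])    # clean input
eps = 0.3                        # same budget as the attack above

grad = w                          # gradient of the toy loss with respect to the input
x_adv = x + eps * np.sign(grad)   # FGSM: one step of size eps per coordinate
```

Every coordinate moves by exactly `eps` (before any clipping back to the valid pixel range).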
Generate adversarial test data.
x_test_adv = attack_fgsm.generate(x_test)
Evaluate accuracy on adversarial test data and calculate average perturbation.
loss_test, accuracy_test = model.evaluate(x_test_adv, y_test)
perturbation = np.mean(np.abs(x_test_adv - x_test))
print('Accuracy on adversarial test data: {:4.2f}%'.format(accuracy_test * 100))
print('Average perturbation: {:4.2f}'.format(perturbation))
Accuracy on adversarial test data: 0.00% Average perturbation: 0.18
Visualise the first adversarial test sample.
plt.matshow(x_test_adv[0])
plt.show()
Carlini & Wagner infinity-norm attack

Create an ART Carlini & Wagner infinity-norm attack.
attack_cw = CarliniLInfMethod(classifier=classifier, eps=0.3, max_iter=100, learning_rate=0.01)
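Unlike the single-step FGSM above, this attack searches iteratively under an L∞ bound. A simplified numpy sketch of an L∞-constrained iterative gradient attack (a PGD-style illustration, not ART's actual C&W algorithm):

```python
import numpy as np

def linf_attack(x0, grad_fn, eps=0.3, alpha=0.01, steps=100):
    """Repeated gradient-sign steps, projected back onto the L-inf ball of radius eps."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))   # small step in the ascent direction
        x = np.clip(x, x0 - eps, x0 + eps)    # projection keeps ||x - x0||_inf <= eps
    return x

x0 = np.zeros(4)
x_adv = linf_attack(x0, grad_fn=lambda x: np.ones_like(x))  # toy constant gradient
```

The projection step is what guarantees the final perturbation never exceeds the `eps` budget, mirroring the `eps=0.3` constraint passed to the attack above.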
Generate adversarial test data.
x_test_adv = attack_cw.generate(x_test)
C&W L_inf: 100%|██████████| 1/1 [00:04<00:00, 4.23s/it]
Evaluate accuracy on adversarial test data and calculate average perturbation.
loss_test, accuracy_test = model.evaluate(x_test_adv, y_test)
perturbation = np.mean(np.abs(x_test_adv - x_test))
print('Accuracy on adversarial test data: {:4.2f}%'.format(accuracy_test * 100))
print('Average perturbation: {:4.2f}'.format(perturbation))
Accuracy on adversarial test data: 10.00% Average perturbation: 0.03
Visualise the first adversarial test sample.
plt.matshow(x_test_adv[0, :, :])
plt.show()
Prophet

Time series forecasting using Prophet. Official documentation: https://facebook.github.io/prophet/docs/quick_start.html

Prophet is a procedure for forecasting time series data based on an additive model in which non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects. It is released by Facebook's Core Data Science team.

An additive model has the form: $Data = seasonal\space effect + trend + residual$

while a multiplicative model has the form: $Data = seasonal\space effect * trend * residual$

The algorithm provides useful statistics that help visualize the tuning process, e.g. the overall trend, weekly trend, and yearly trend with their max and min errors.

Data

The data on which the algorithm will be trained and tested comes from the Kaggle Hourly Energy Consumption database. It was collected by PJM Interconnection, a company coordinating the continuous buying, selling, and delivery of wholesale electricity through the Energy Market from suppliers to customers in the region of South Carolina, USA. Each .csv file contains rows with a timestamp and a value. The name of the value column corresponds to the name of the contractor; the timestamp represents a single hour, and the value represents the total energy consumed during that hour.

We will use the hourly power consumption data from PJM. Energy consumption has some unique characteristics, and it will be interesting to see how Prophet picks them up.

https://www.kaggle.com/robikscube/hourly-energy-consumption

We pull PJM East, which has data from 2002-2018 for the entire east region.
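The additive decomposition described above can be made concrete with synthetic data; a small numpy sketch (made-up trend and weekly seasonality, not the PJM data):

```python
import numpy as np

t = np.arange(28, dtype=float)
trend = 0.05 * t                                   # slow linear growth
seasonal = 10 * np.sin(2 * np.pi * t / 7)          # weekly cycle
residual = np.random.default_rng(0).normal(0, 0.1, t.size)  # unexplained noise

data = trend + seasonal + residual  # additive model: Data = seasonal effect + trend + residual
```

In a multiplicative model the components would be multiplied instead, so seasonal swings scale with the level of the trend.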
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

from fbprophet import Prophet
from sklearn.metrics import mean_squared_error, mean_absolute_error

plt.style.use('fivethirtyeight')  # For plots

dataset_path = './data/hourly-energy-consumption/PJME_hourly.csv'
df = pd.read_csv(dataset_path, index_col=[0], parse_dates=[0])
print("Dataset shape:", df.shape)
df.head(10)

# VISUALIZE DATA
# Color palette for plotting
color_pal = ["#F8766D", "#D39200", "#93AA00", "#00BA38",
             "#00C19F", "#00B9E3", "#619CFF", "#DB72FB"]
df.plot(style='.', figsize=(20, 10), color=color_pal[0], title='PJM East Dataset TS')
plt.show()

# Decompose the seasonal data
def create_features(df, label=None):
    """
    Creates time series features from the datetime index.
    """
    df = df.copy()
    df['date'] = df.index
    df['hour'] = df['date'].dt.hour
    df['dayofweek'] = df['date'].dt.dayofweek
    df['quarter'] = df['date'].dt.quarter
    df['month'] = df['date'].dt.month
    df['year'] = df['date'].dt.year
    df['dayofyear'] = df['date'].dt.dayofyear
    df['dayofmonth'] = df['date'].dt.day
    df['weekofyear'] = df['date'].dt.weekofyear
    X = df[['hour', 'dayofweek', 'quarter', 'month', 'year',
            'dayofyear', 'dayofmonth', 'weekofyear']]
    if label:
        y = df[label]
        return X, y
    return X

df.columns
X, y = create_features(df, label='PJME_MW')
features_and_target = pd.concat([X, y], axis=1)
print("Shape", features_and_target.shape)
features_and_target.head(10)

sns.pairplot(features_and_target.dropna(),
             hue='hour',
             x_vars=['hour', 'dayofweek', 'year', 'weekofyear'],
             y_vars='PJME_MW',
             height=5,
             plot_kws={'alpha': 0.15, 'linewidth': 0})
plt.suptitle('Power Use MW by Hour, Day of Week, Year and Week of Year')
plt.show()
MIT
English/9_time_series_prediction/Prophet.ipynb
JeyDi/DataScienceCourse
Train and Test Split

We use a temporal split: the model is trained on the older data and evaluated only on the more recent period.
split_date = '01-Jan-2015' pjme_train = df.loc[df.index <= split_date].copy() pjme_test = df.loc[df.index > split_date].copy() # Plot train and test so you can see where we have split pjme_test \ .rename(columns={'PJME_MW': 'TEST SET'}) \ .join(pjme_train.rename(columns={'PJME_MW': 'TRAINING SET'}), how='outer') \ .plot(figsize=(15,5), title='PJM East', style='.') plt.show()
To use Prophet, you need to rename the feature and label columns so the input is passed to the engine in the expected `ds`/`y` format.
# Format data for the Prophet model using ds and y.
# Note: the rename below is not assigned back, so pjme_train itself is unchanged;
# the renamed frame is rebuilt inline when fitting the model.
pjme_train.reset_index() \
    .rename(columns={'Datetime': 'ds', 'PJME_MW': 'y'})
print(pjme_train.columns)
pjme_train.head(5)
Index(['PJME_MW'], dtype='object')
Create and train the model
# Set up and fit the model
model = Prophet()
model.fit(pjme_train.reset_index()
          .rename(columns={'Datetime': 'ds', 'PJME_MW': 'y'}))

# Predict on the test set with the model
pjme_test_fcst = model.predict(df=pjme_test.reset_index()
                               .rename(columns={'Datetime': 'ds'}))
pjme_test_fcst.head()
Plot the results and forecast
# Plot the forecast
f, ax = plt.subplots(1)
f.set_figheight(5)
f.set_figwidth(15)
fig = model.plot(pjme_test_fcst, ax=ax)
plt.show()

# Plot the components of the model
fig = model.plot_components(pjme_test_fcst)
Frequency range

The first step needed to simulate an electrochemical impedance spectrum is to generate a frequency domain. To do so, use the built-in freq_gen() function, as follows.
f_range = freq_gen(f_start=10**10, f_stop=0.1, pts_decade=7)
# print(f_range[0])  # First 5 points in the frequency array
# print(f_range[1])  # First 5 points in the angular frequency array
MIT
examples/nyquist_plots_examples.ipynb
EISy-as-Py/EISy-as-Py
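freq_gen() is provided by the package; a hypothetical numpy reimplementation of the same idea (log-spaced frequencies at a fixed number of points per decade, plus the matching angular frequencies) could look like this:

```python
import numpy as np

def freq_gen_sketch(f_start, f_stop, pts_decade=7):
    """Log-spaced frequency array from f_start down to f_stop, and its angular counterpart."""
    decades = np.log10(f_start) - np.log10(f_stop)
    n_pts = int(round(decades * pts_decade)) + 1
    f = np.logspace(np.log10(f_start), np.log10(f_stop), n_pts)
    return f, 2 * np.pi * f   # omega = 2*pi*f

f, w = freq_gen_sketch(10**10, 0.1)
```

This is a sketch of the behavior only; in the notebook itself, use the package's freq_gen().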
Note that all the functions included are documented; to access these descriptions, place the cursor within the parentheses and press Shift+Tab. freq_gen() returns both the frequency array, log-separated at the given number of points per decade between f_start and f_stop, and the angular frequency array. This function is quite useful and will be used throughout this tutorial.

The Equivalent Circuits

A number of equivalent circuits can be simulated and fitted. These functions are defined and can be called at any time; to find them, type "cir_" and hit Tab. All functions are outlined in the next cell and can also be viewed in the equivalent-circuit overview:
cir_RC
cir_RQ
cir_RsRQ
cir_RsRQRQ
cir_Randles
cir_Randles_simplified
cir_C_RC_C
cir_Q_RQ_Q
cir_RCRCZD
cir_RsTLsQ
cir_RsRQTLsQ
cir_RsTLs
cir_RsRQTLs
cir_RsTLQ
cir_RsRQTLQ
cir_RsTL
cir_RsRQTL
cir_RsTL_1Dsolid
cir_RsRQTL_1Dsolid
Simulation of -(RC)-

Input parameters:
- w = Angular frequency [1/s]
- R = Resistance [Ohm]
- C = Capacitance [F]
- fs = Summit frequency of the RC circuit [Hz]
RC_example = EIS_sim(frange=f_range[0],
                     circuit=cir_RC(w=f_range[1], R=70, C=10**-6),
                     legend='on')
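For reference, the parallel RC element simulated above has the standard impedance

$$Z_{RC}(\omega) = \frac{R}{1 + j\omega R C},$$

and the summit frequency at the apex of the Nyquist semicircle is $f_s = \frac{1}{2\pi R C}$, which relates the fs input listed above to R and C.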
Simulation of -Rs-(RQ)-

Input parameters:
- w = Angular frequency [1/s]
- Rs = Series resistance [Ohm]
- R = Resistance [Ohm]
- Q = Constant phase element [s^n/ohm]
- n = Constant phase element exponent [-]
- fs = Summit frequency of the RQ circuit [Hz]
RsRQ_example = EIS_sim(frange=f_range[0],
                       circuit=cir_RsRQ(w=f_range[1], Rs=70, R=200, n=.8, Q=10**-5),
                       legend='on')

RsRC_example = EIS_sim(frange=f_range[0],
                       circuit=cir_RsRC(w=f_range[1], Rs=80, R=100, C=10**-5),
                       legend='on')
Simulation of -Rs-(RQ)-(RQ)-

Input parameters:
- w = Angular frequency [1/s]
- Rs = Series resistance [Ohm]
- R = Resistance [Ohm]
- Q = Constant phase element [s^n/ohm]
- n = Constant phase element exponent [-]
- fs = Summit frequency of the first RQ circuit [Hz]
- R2 = Resistance [Ohm]
- Q2 = Constant phase element [s^n/ohm]
- n2 = Constant phase element exponent [-]
- fs2 = Summit frequency of the second RQ circuit [Hz]
RsRQRQ_example = EIS_sim(frange=f_range[0],
                         circuit=cir_RsRQRQ(w=f_range[1], Rs=200, R=150, n=.872, Q=10**-4,
                                            R2=50, n2=.853, Q2=10**-6),
                         legend='on')
Simulation of -Rs-(Q(RW))- (Randles circuit)

This circuit is often used for an experimental setup with a macrodisk working electrode with an outer-sphere heterogeneous charge transfer. The classical Warburg element is controlled by semi-infinite linear diffusion, which is given by the geometry of the working electrode. Two Randles functions are available for simulations: cir_Randles_simplified() and cir_Randles(). The former contains the Warburg constant (sigma), which sums up all mass-transport constants (Dox/Dred, Cred/Cox, number of electrons (n_electron), Faraday's constant (F), T, and E0) into the single constant sigma, while the latter contains all of these constants explicitly. Only cir_Randles_simplified() is available for fitting, as either D$_{ox}$ or D$_{red}$ and C$_{red}$ or C$_{ox}$ are needed.

Input parameters:
- Rs = Series resistance [ohm]
- Rct = Charge-transfer resistance [ohm]
- Q = Constant phase element used to model the double-layer capacitance [F]
- n = Exponent of the CPE [-]
- sigma = Warburg constant [ohm/s^1/2]
Randles = cir_Randles_simplified(w=f_range[1], Rs=100, R=1000, n=1, sigma=300, Q=10**-5)
Randles_example = EIS_sim(frange=f_range[0], circuit=Randles, legend='off')

Randles_example = EIS_sim(frange=f_range[0],
                          circuit=cir_Randles_simplified(w=f_range[1], Rs=100, R=1000,
                                                         n=1, sigma=300, Q='none', fs=10**3.3),
                          legend='off')
In the following, the Randles circuit is simulated with the Warburg constant (sigma) defined through:
- D$_{red}$/D$_{ox}$ = 10$^{-6}$ cm$^2$/s
- C$_{red}$/C$_{ox}$ = 10 mM
- n_electron = 1
- T = 25 $^o$C

This function is a great tool for simulating expected impedance responses prior to starting experiments, as it allows evaluation of concentrations, diffusion constants, number of electrons, and temperature to assess the feasibility of obtaining information on kinetics, mass transport, or both.
Randles_example = EIS_sim(frange=f_range[0],
                          circuit=cir_Randles(w=f_range[1], Rs=100, Rct=1000, Q=10**-7, n=1,
                                              T=298.15, D_ox=10**-9, D_red=10**-9,
                                              C_ox=10**-5, C_red=10**-5, n_electron=1,
                                              E=0, A=1),
                          legend='off')
Complex Dummy Experiment Manager

> Dummy experiment manager with features that allow additional functionality
#export
from hpsearch.examples.dummy_experiment_manager import DummyExperimentManager, FakeModel
import hpsearch
import os
import shutil

import hpsearch.examples.dummy_experiment_manager as dummy_em
from hpsearch.visualization import plot_utils

# for tests
import pytest
from block_types.utils.nbdev_utils import md
MIT
nbs/examples/complex_dummy_experiment_manager.ipynb
Jaume-JCI/hpsearch
ComplexDummyExperimentManager
#export
class ComplexDummyExperimentManager (DummyExperimentManager):

    def __init__ (self, model_file_name='model_weights.pk', **kwargs):
        super().__init__ (model_file_name=model_file_name, **kwargs)
        self.raise_error_if_run = False

    def run_experiment (self, parameters={}, path_results='./results'):
        # useful for testing: in some cases the experiment manager should not call run_experiment
        if self.raise_error_if_run:
            raise RuntimeError ('run_experiment should not be called')

        # extract hyper-parameters used by our model; all of them have default values
        offset = parameters.get('offset', 0.5)   # default value: 0.5
        rate = parameters.get('rate', 0.01)      # default value: 0.01
        epochs = parameters.get('epochs', 10)    # default value: 10
        noise = parameters.get('noise', 0.0)
        if parameters.get('actual_epochs') is not None:
            epochs = parameters.get('actual_epochs')

        # other parameters that do not form part of the experiment definition;
        # changing their values does not change the ID of the experiment
        verbose = parameters.get('verbose', True)

        # build model with given hyper-parameters
        model = FakeModel (offset=offset, rate=rate, epochs=epochs, noise=noise, verbose=verbose)

        # load training, validation and test data (fake step)
        model.load_data()

        # start from a previous experiment if indicated by parameters
        path_results_previous_experiment = parameters.get('prev_path_results')
        if path_results_previous_experiment is not None:
            model.load_model_and_history (path_results_previous_experiment)

        # fit model with training data
        model.fit ()

        # save model weights and the evolution of the accuracy metric across epochs
        model.save_model_and_history(path_results)

        # simulate ctrl-c
        if parameters.get ('halt', False):
            raise KeyboardInterrupt ('stopped')

        # evaluate model with validation and test data
        validation_accuracy, test_accuracy = model.score()

        # store model
        self.model = model

        # the function returns a dictionary keyed by metric name;
        # we return the results on the validation and test sets in this example
        dict_results = dict (validation_accuracy=validation_accuracy,
                             test_accuracy=test_accuracy)
        return dict_results
Usage
#exports tests.examples.test_complex_dummy_experiment_manager
def test_complex_dummy_experiment_manager ():
    #em = generate_data ('complex_dummy_experiment_manager')
    md ('''Extend previous experiment by using a larger number of epochs

    We see how to create an experiment that is the same as a previous experiment,
    only increasing the number of epochs.

    1.a. For test purposes, we first run the full number of epochs, 30,
    take note of the accuracy, and remove the experiment.''')
    em = ComplexDummyExperimentManager (path_experiments='test_complex_dummy_experiment_manager',
                                        verbose=0)
    em.create_experiment_and_run (parameters={'epochs': 30});
    reference_accuracy = em.model.accuracy
    reference_weight = em.model.weight

    from hpsearch.config.hpconfig import get_path_experiments
    import os
    import pandas as pd

    path_experiments = get_path_experiments ()
    print (f'experiments folders: {os.listdir(f"{path_experiments}/experiments")}\n')
    experiments_data = pd.read_pickle (f'{path_experiments}/experiments_data.pk')
    print ('csv data')
    display (experiments_data)

    md ('we plot the history')
    from hpsearch.visualization.experiment_visualization import plot_multiple_histories
    plot_multiple_histories ([0], run_number=0, op='max', backend='matplotlib',
                             metrics='validation_accuracy')

    md ('1.b. Now we run two experiments:')
    md ('We run the first experiment with 20 epochs:')
    # a.- remove previous experiment
    em.remove_previous_experiments()
    # b.- create first experiment with epochs=20
    em.create_experiment_and_run (parameters={'epochs': 20});
    print (f'experiments folders: {os.listdir(f"{path_experiments}/experiments")}\n')
    experiments_data = pd.read_pickle (f'{path_experiments}/experiments_data.pk')
    print ('csv data')
    display (experiments_data)
    print (f'weight: {em.model.weight}, accuracy: {em.model.accuracy}')

    md ('The second experiment resumes from the previous one and increases the epochs to 30')
    # c.- create second experiment with epochs=30, resuming from the first
    em.create_experiment_and_run (parameters={'epochs': 30},
                                  other_parameters={'prev_epoch': True,
                                                    'name_epoch': 'epochs',
                                                    'previous_model_file_name': 'model_weights.pk'});
    experiments_data = pd.read_pickle (f'{path_experiments}/experiments_data.pk')
    print ('csv data')
    display (experiments_data)

    new_accuracy = em.model.accuracy
    new_weight = em.model.weight
    assert new_weight == reference_weight
    assert new_accuracy == reference_accuracy
    print (f'weight: {new_weight}, accuracy: {new_accuracy}')

    md ('We plot the history')
    plot_multiple_histories ([1], run_number=0, op='max', backend='matplotlib',
                             metrics='validation_accuracy')
    em.remove_previous_experiments()

tst.run (test_complex_dummy_experiment_manager, tag='dummy')
running test_complex_dummy_experiment_manager
MIT
nbs/examples/complex_dummy_experiment_manager.ipynb
Jaume-JCI/hpsearch
Running experiments and removing experiments
# export def run_multiple_experiments (**kwargs): dummy_em.run_multiple_experiments (EM=ComplexDummyExperimentManager, **kwargs) def remove_previous_experiments (): dummy_em.remove_previous_experiments (EM=ComplexDummyExperimentManager) #export def generate_data (name_folder): em = ComplexDummyExperimentManager (path_experiments=f'test_{name_folder}', verbose=0) em.remove_previous_experiments () run_multiple_experiments (em=em, nruns=5, noise=0.1, verbose=False) return em
_____no_output_____
MIT
nbs/examples/complex_dummy_experiment_manager.ipynb
Jaume-JCI/hpsearch
Workshop 13 _Object-oriented programming._ Classes and Objects
class MyClass: pass obj1 = MyClass() obj2 = MyClass() print(obj1) print(type(obj1)) print(obj2) print(type(obj2))
_____no_output_____
Apache-2.0
lessons/Workshop_13_OOP.ipynb
andrewt0301/python-problems
Constructor and destructor
class Employee: def __init__(self): print('Employee created.') def __del__(self): print('Destructor called, Employee deleted.') obj = Employee() del obj
_____no_output_____
Apache-2.0
lessons/Workshop_13_OOP.ipynb
andrewt0301/python-problems
Attributes and methods
class Student: def __init__(self, name, grade): self.name = name self.grade = grade def __str__(self): return '{' + self.name + ': ' + str(self.grade) + '}' def learn(self): print('My name is %s. I am learning Python! My grade is %d.' % (self.name, self.grade)) students = [Student('Steve', 9), Student('Oleg', 10)] for student in students: print() print('student.name = ' + student.name) print('student.grade = ' + str(student.grade)) print('student = ' + str(student)) student.learn()
_____no_output_____
Apache-2.0
lessons/Workshop_13_OOP.ipynb
andrewt0301/python-problems
Class and instance attributes
class Person: # class variable shared by all instances status = 'student' def __init__(self, name): # instance variable unique to each instance self.name = name a = Person('Steve') b = Person('Mark') print('') print(a.name + ' : ' + a.status) print(b.name + ' : ' + b.status) Person.status = 'graduate' print('') print(a.name + ' : ' + a.status) print(b.name + ' : ' + b.status) Person.status = 'student' print('') print(a.name + ' : ' + a.status) print(b.name + ' : ' + b.status)
_____no_output_____
Apache-2.0
lessons/Workshop_13_OOP.ipynb
andrewt0301/python-problems
Class and static methods
class Env: os = 'Windows' @classmethod def print_os(cls): print(cls.os) @staticmethod def print_user(): print('guest') Env.print_os() Env.print_user()
_____no_output_____
Apache-2.0
lessons/Workshop_13_OOP.ipynb
andrewt0301/python-problems
Encapsulation
class Person: def __init__(self, name): self.name = name def __str__(self): return 'My name is ' + self.name person = Person('Steve') print(person.name) person.name = 'Said' print(person.name) class Identity: def __init__(self, name): self.__name = name def __str__(self): return 'My name is ' + self.__name person = Identity('Steve') # print(person.__name) # AttributeError: name mangling stores the attribute as _Identity__name person.__name = 'Said' # creates a *new* attribute; the mangled one is untouched print(person) # still prints 'My name is Steve'
_____no_output_____
Apache-2.0
lessons/Workshop_13_OOP.ipynb
andrewt0301/python-problems
Operator overloading
class Number: def __init__(self, value): self.__value = value def __del__(self): pass def __str__(self): return str(self.__value) def __int__(self): return self.__value def __eq__(self, other): return self.__value == other.__value def __ne__(self, other): return self.__value != other.__value def __lt__(self, other): return self.__value < other.__value def __gt__(self, other): return self.__value > other.__value def __add__(self, other): return Number(self.__value + other.__value) def __mul__(self, other): return Number(self.__value * other.__value) def __neg__(self): return Number(-self.__value) a = Number(10) b = Number(20) c = Number(5) # Overloaded operators x = -a + b * c print(x) print(a < b) print(b > c) # Unsupported operators: these are not defined for Number and raise TypeError # print(a <= b) # print(b >= c) # print(a // c)
_____no_output_____
Apache-2.0
lessons/Workshop_13_OOP.ipynb
andrewt0301/python-problems
Inheritance and polymorphism
class Creature: def say(self): pass class Dog(Creature): def say(self): print('Woof!') class Cat(Creature): def say(self): print("Meow!") class Lion(Creature): def say(self): print("Roar!") animals = [Creature(), Dog(), Cat(), Lion()] for animal in animals: print(type(animal)) animal.say()
_____no_output_____
Apache-2.0
lessons/Workshop_13_OOP.ipynb
andrewt0301/python-problems
Multiple inheritance
class Person: def __init__(self, name): self.name = name class Student(Person): def __init__(self, name, grade): super().__init__(name) self.grade = grade class Employee: def __init__(self, salary): self.salary = salary class Teacher(Person, Employee): def __init__(self, name, salary): Person.__init__(self, name) Employee.__init__(self, salary) class TA(Student, Employee): def __init__(self, name, grade, salary): Student.__init__(self, name, grade) Employee.__init__(self, salary) x = Student('Oleg', 9) y = TA('Sergei', 10, 1000) z = Teacher('Andrei', 2000) for person in [x, y, z]: print(person.name) if isinstance(person, Employee): print(person.salary) if isinstance(person, Student): print(person.grade)
_____no_output_____
Apache-2.0
lessons/Workshop_13_OOP.ipynb
andrewt0301/python-problems
Function _isinstance_
x = 10 print('') print(isinstance(x, int)) print(isinstance(x, float)) print(isinstance(x, str)) y = 3.14 print('') print(isinstance(y, int)) print(isinstance(y, float)) print(isinstance(y, str)) z = 'Hello world' print('') print(isinstance(z, int)) print(isinstance(z, float)) print(isinstance(z, str)) class A: pass class B: pass class C(A): pass class D(A, B): pass a = A() b = B() c = C() d = D() print('') print(isinstance(a, object)) print(isinstance(a, A)) print(isinstance(b, B)) print('') print(isinstance(b, object)) print(isinstance(b, A)) print(isinstance(b, B)) print(isinstance(b, C)) print('') print(isinstance(c, object)) print(isinstance(c, A)) print(isinstance(c, B)) print(isinstance(c, D)) print('') print(isinstance(d, object)) print(isinstance(d, A)) print(isinstance(d, B)) print(isinstance(d, C)) print(isinstance(d, D))
_____no_output_____
Apache-2.0
lessons/Workshop_13_OOP.ipynb
andrewt0301/python-problems
Composition
class Teacher: pass class Student: pass class ClassRoom: def __init__(self, teacher, students): self.teacher = teacher self.students = students cl = ClassRoom(Teacher(), [Student(), Student(), Student()]) class Set: def __init__(self, values=None): self.dict = {} if values is not None: for value in values: self.add(value) def __repr__(self): return "Set: " + str(self.dict.keys()) def add(self, value): self.dict[value] = True def contains(self, value): return value in self.dict def remove(self, value): del self.dict[value] s = Set([1,2,3]) s.add(4) print(s.contains(4)) s.remove(3) print(s.contains(3))
_____no_output_____
Apache-2.0
lessons/Workshop_13_OOP.ipynb
andrewt0301/python-problems
Scalable GP Classification in 1D (w/ KISS-GP) This example shows how to use grid interpolation based variational classification with an `ApproximateGP` using a `GridInterpolationVariationalStrategy` module. This classification module is designed for when the inputs of the function you're modeling are one-dimensional. The use of inducing points allows for scaling up the training data by making computational complexity linear instead of cubic. In this example, we're modeling a function whose labels alternate periodically, switching every 1/8 (think of a square wave with period 1/4). This notebook doesn't use CUDA; in general we recommend GPU use if possible, and most of our notebooks utilize CUDA as well. Kernel interpolation for scalable structured Gaussian processes (KISS-GP) was introduced in this paper: http://proceedings.mlr.press/v37/wilson15.pdf KISS-GP with SVI for classification was introduced in this paper: https://papers.nips.cc/paper/6426-stochastic-variational-deep-kernel-learning.pdf
import math import torch import gpytorch from matplotlib import pyplot as plt from math import exp %matplotlib inline %load_ext autoreload %autoreload 2 train_x = torch.linspace(0, 1, 26) train_y = torch.sign(torch.cos(train_x * (2 * math.pi))).add(1).div(2) from gpytorch.models import ApproximateGP from gpytorch.variational import CholeskyVariationalDistribution from gpytorch.variational import GridInterpolationVariationalStrategy class GPClassificationModel(ApproximateGP): def __init__(self, grid_size=128, grid_bounds=[(0, 1)]): variational_distribution = CholeskyVariationalDistribution(grid_size) variational_strategy = GridInterpolationVariationalStrategy(self, grid_size, grid_bounds, variational_distribution) super(GPClassificationModel, self).__init__(variational_strategy) self.mean_module = gpytorch.means.ConstantMean() self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel()) def forward(self,x): mean_x = self.mean_module(x) covar_x = self.covar_module(x) latent_pred = gpytorch.distributions.MultivariateNormal(mean_x, covar_x) return latent_pred model = GPClassificationModel() likelihood = gpytorch.likelihoods.BernoulliLikelihood() from gpytorch.mlls.variational_elbo import VariationalELBO # Find optimal model hyperparameters model.train() likelihood.train() # Use the adam optimizer optimizer = torch.optim.Adam(model.parameters(), lr=0.01) # "Loss" for GPs - the marginal log likelihood # n_data refers to the number of training datapoints mll = VariationalELBO(likelihood, model, num_data=train_y.numel()) def train(): num_iter = 100 for i in range(num_iter): optimizer.zero_grad() output = model(train_x) # Calc loss and backprop gradients loss = -mll(output, train_y) loss.backward() print('Iter %d/%d - Loss: %.3f' % (i + 1, num_iter, loss.item())) optimizer.step() # Get clock time %time train() # Set model and likelihood into eval mode model.eval() likelihood.eval() # Initialize axes f, ax = plt.subplots(1, 1, figsize=(4, 3)) with 
torch.no_grad(): test_x = torch.linspace(0, 1, 101) predictions = likelihood(model(test_x)) ax.plot(train_x.numpy(), train_y.numpy(), 'k*') pred_labels = predictions.mean.ge(0.5).float() ax.plot(test_x.data.numpy(), pred_labels.numpy(), 'b') ax.set_ylim([-1, 2]) ax.legend(['Observed Data', 'Mean', 'Confidence'])
_____no_output_____
MIT
examples/06_Scalable_GP_Classification_1D/KISSGP_Classification_1D.ipynb
phumm/gpytorch
np.savetxt('/nfs/slac/g/ki/ki18/des/swmclau2/AB_tests/sham_vpeak_wp.npy', sham_wp) np.savetxt('/nfs/slac/g/ki/ki18/des/swmclau2/AB_tests/sham_vpeak_nd.npy', np.array([len(galaxy_catalog)/((cat.Lbox*cat.h)**3)])) np.savetxt('/nfs/slac/g/ki/ki18/des/swmclau2/AB_tests/rp_bins_split.npy', rp_bins)
plt.figure(figsize=(10,8)) for p, mock_wp in zip(split, mock_wps): plt.plot(bin_centers, mock_wp, label = p) #plt.plot(bin_centers, sham_wp, ls='--', label = 'SHAM') plt.plot(bin_centers, noab_wp, ls=':', label = 'No AB') plt.loglog() plt.legend(loc='best',fontsize = 15) plt.xlim([1e-1, 30e0]); #plt.ylim([1,15000]) plt.xlabel(r'$r$',fontsize = 15) plt.ylabel(r'$\xi(r)$',fontsize = 15) plt.show() np.log10(mock_wps[-1]) plt.figure(figsize=(10,8)) for p, mock_wp in zip(split, mock_wps): plt.plot(bin_centers, mock_wp/sham_wp, label = p) #plt.plot(bin_centers, sham_wp, ls='--', label = 'SHAM') #plt.plot(bin_centers, noab_wp, ls=':', label = 'No AB') #plt.loglog() plt.xscale('log') plt.legend(loc='best',fontsize = 15) plt.xlim([1e-1, 15e0]); plt.ylim([0.8,1.2]) plt.xlabel(r'$r$',fontsize = 15) plt.ylabel(r'$\xi(r)$',fontsize = 15) plt.show() plt.figure(figsize=(10,8)) #for p, mock_wp in zip(split, mock_wps): # plt.plot(bin_centers, mock_wp/sham_wp, label = p) #plt.plot(bin_centers, sham_wp, ls='--', label = 'SHAM') plt.plot(bin_centers, noab_wp/sham_wp, label = 'No AB') #plt.loglog() plt.xscale('log') plt.legend(loc='best',fontsize = 15) plt.xlim([1e-1, 15e0]); #plt.ylim([1,15000]) plt.xlabel(r'$r$',fontsize = 15) plt.ylabel(r'$\xi(r)$',fontsize = 15) plt.show() plt.figure(figsize=(10,8)) for p, mock_wp in zip(split, mock_wps_1h): plt.plot(bin_centers, mock_wp, label = p) #plt.plot(bin_centers, sham_wp, ls='--', label = 'SHAM') #plt.plot(bin_centers, noab_wp, ls=':', label = 'No AB') plt.loglog() plt.legend(loc='best',fontsize = 15) plt.xlim([1e-1, 30e0]); #plt.ylim([1,15000]) plt.xlabel(r'$r$',fontsize = 15) plt.ylabel(r'$\xi(r)$',fontsize = 15) plt.show() plt.figure(figsize=(10,8)) for p, mock_wp in zip(split, mock_wps_2h): plt.plot(bin_centers, mock_wp, label = p) #plt.plot(bin_centers, sham_wp, ls='--', label = 'SHAM') plt.plot(bin_centers, noab_wp, ls=':', label = 'No AB') plt.loglog() plt.legend(loc='best',fontsize = 15) plt.xlim([1e-1, 30e0]); #plt.ylim([1,15000]) 
plt.xlabel(r'$r$',fontsize = 15) plt.ylabel(r'$\xi(r)$',fontsize = 15) plt.show() plt.figure(figsize=(10,8)) for p, mock_wp in zip(split, mock_wps_2h): plt.plot(bin_centers, mock_wp/noab_wp, label = p) #plt.plot(bin_centers, sham_wp, ls='--', label = 'SHAM') #plt.plot(bin_centers, noab_wp, ls=':', label = 'No AB') plt.loglog() plt.legend(loc='best',fontsize = 15) plt.xlim([1e-1, 30e0]); #plt.ylim([1,15000]) plt.xlabel(r'$r$',fontsize = 15) plt.ylabel(r'$\xi(r)$',fontsize = 15) plt.show() plt.plot(bin_centers, mock_wps[0, :]) plt.plot(bin_centers, mock_wps_1h[0, :]) plt.plot(bin_centers, mock_wps_2h[0, :]) plt.loglog() plt.legend(loc='best',fontsize = 15) plt.xlim([1e-1, 30e0]); #plt.ylim([1,15000]) plt.xlabel(r'$r$',fontsize = 15) plt.ylabel(r'$\xi(r)$',fontsize = 15) plt.show() plt.figure(figsize=(10,8)) #avg = mock_wps.mean(axis = 0) for p, mock_wp in zip(split, mock_wps): plt.plot(bin_centers, mock_wp/sham_wp, label = 'p = %.2f'%p) plt.plot(bin_centers, noab_wp/sham_wp, label = 'No AB', ls = ':') #plt.loglog() plt.xscale('log') plt.legend(loc='best',fontsize = 15) plt.xlim([1e-1, 5e0]); plt.ylim([0.75,1.25]); plt.xlabel(r'$r$',fontsize = 15) plt.ylabel(r'$\xi(r)/\xi_{SHAM}(r)$',fontsize = 15) plt.show() sats_occ = cat.model._input_model_dictionary['satellites_occupation'] sats_occ._split_ordinates = [0.99]
_____no_output_____
MIT
notebooks/AB_tests/Understand Splitting Fraction.ipynb
mclaughlin6464/pearce
cens_occ = cat.model._input_model_dictionary['centrals_occupation'] cens_occ._split_ordinates = [0.1]
print sats_occ baseline_lower_bound, baseline_upper_bound = 0,np.inf prim_haloprop = cat.model.mock.halo_table['halo_mvir'] sec_haloprop = cat.model.mock.halo_table['halo_nfw_conc'] from halotools.utils.table_utils import compute_conditional_percentile_values split = sats_occ.percentile_splitting_function(prim_haloprop) # Compute the baseline, undecorated result result = sats_occ.baseline_mean_occupation(prim_haloprop=prim_haloprop) # We will only decorate values that are not edge cases, # so first compute the mask for non-edge cases no_edge_mask = ( (split > 0) & (split < 1) & (result > baseline_lower_bound) & (result < baseline_upper_bound) ) # Now create convenient references to the non-edge-case sub-arrays no_edge_result = result[no_edge_mask] no_edge_split = split[no_edge_mask]
_____no_output_____
MIT
notebooks/AB_tests/Understand Splitting Fraction.ipynb
mclaughlin6464/pearce
percentiles = compute_conditional_percentiles( prim_haloprop=prim_haloprop, sec_haloprop=sec_haloprop ) no_edge_percentiles = percentiles[no_edge_mask] type1_mask = no_edge_percentiles > no_edge_split perturbation = sats_occ._galprop_perturbation(prim_haloprop=prim_haloprop[no_edge_mask], baseline_result=no_edge_result, splitting_result=no_edge_split) frac_type1 = 1 - no_edge_split frac_type2 = 1 - frac_type1 perturbation[~type1_mask] *= (-frac_type1[~type1_mask] /(frac_type2[~type1_mask])) Retrieve percentile values (medians) if they've been precomputed. Else, compute them. no_edge_percentile_values = compute_conditional_percentile_values(p=no_edge_split, prim_haloprop=prim_haloprop[no_edge_mask], sec_haloprop=sec_haloprop[no_edge_mask]) pv_sub_sec_haloprop = sec_haloprop[no_edge_mask] - no_edge_percentile_values perturbation = sats_occ._galprop_perturbation( prim_haloprop=prim_haloprop[no_edge_mask], sec_haloprop=pv_sub_sec_haloprop/np.max(np.abs(pv_sub_sec_haloprop)), baseline_result=no_edge_result)
from halotools.utils.table_utils import compute_conditional_averages strength = sats_occ.assembias_strength(prim_haloprop[no_edge_mask]) slope = sats_occ.assembias_slope(prim_haloprop[no_edge_mask]) # the average displacement acts as a normalization we need. max_displacement = sats_occ._disp_func(sec_haloprop=pv_sub_sec_haloprop/np.max(np.abs(pv_sub_sec_haloprop)), slope=slope) disp_average = compute_conditional_averages(vals=max_displacement,prim_haloprop=prim_haloprop[no_edge_mask]) #disp_average = np.ones((prim_haloprop.shape[0], ))*0.5 perturbation2 = np.zeros(len(prim_haloprop[no_edge_mask])) greater_than_half_avg_idx = disp_average > 0.5 less_than_half_avg_idx = disp_average <= 0.5 if len(max_displacement[greater_than_half_avg_idx]) > 0: base_pos = result[no_edge_mask][greater_than_half_avg_idx] strength_pos = strength[greater_than_half_avg_idx] avg_pos = disp_average[greater_than_half_avg_idx] upper_bound1 = (base_pos - baseline_lower_bound)/avg_pos upper_bound2 = (baseline_upper_bound - base_pos)/(1-avg_pos) upper_bound = np.minimum(upper_bound1, upper_bound2) print upper_bound1, upper_bound2 perturbation2[greater_than_half_avg_idx] = strength_pos*upper_bound*(max_displacement[greater_than_half_avg_idx]-avg_pos) if len(max_displacement[less_than_half_avg_idx]) > 0: base_neg = result[no_edge_mask][less_than_half_avg_idx] strength_neg = strength[less_than_half_avg_idx] avg_neg = disp_average[less_than_half_avg_idx] lower_bound1 = (base_neg-baseline_lower_bound)/avg_neg#/(1- avg_neg) lower_bound2 = (baseline_upper_bound - base_neg)/(1-avg_neg)#(avg_neg) lower_bound = np.minimum(lower_bound1, lower_bound2) perturbation2[less_than_half_avg_idx] = strength_neg*lower_bound*(max_displacement[less_than_half_avg_idx]-avg_neg) print np.unique(max_displacement[indices_of_mb]) print np.unique(disp_average[indices_of_mb]) perturbation mass_bins = compute_mass_bins(prim_haloprop) mass_bin_idxs = compute_prim_haloprop_bins(prim_haloprop_bin_boundaries=mass_bins, 
prim_haloprop = prim_haloprop[no_edge_mask]) mb = 87 indices_of_mb = np.where(mass_bin_idxs == mb)[0] plt.hist(perturbation[indices_of_mb], bins =100); plt.yscale('log'); #plt.loglog(); print max(perturbation) print min(perturbation) print max(perturbation[indices_of_mb]) print min(perturbation[indices_of_mb]) idxs = np.argsort(perturbation) print mass_bin_idxs[idxs[-10:]] plt.hist(perturbation2[indices_of_mb], bins =100); plt.yscale('log'); #plt.loglog(); print perturbation2
_____no_output_____
MIT
notebooks/AB_tests/Understand Splitting Fraction.ipynb
mclaughlin6464/pearce
Showing uncertainty> Uncertainty occurs everywhere in data science, but it's frequently left out of visualizations where it should be included. Here, we review what a confidence interval is and how to visualize them for both single estimates and continuous functions. Additionally, we discuss the bootstrap resampling technique for assessing uncertainty and how to visualize it properly. This is the Summary of lecture "Improving Your Data Visualizations in Python", via datacamp.- toc: true - badges: true- comments: true- author: Chanseok Kang- categories: [Python, Datacamp, Visualization]- image: images/so2_compare.png
import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns plt.rcParams['figure.figsize'] = (10, 5)
_____no_output_____
Apache-2.0
_notebooks/2020-06-29-01-Showing-uncertainty.ipynb
AntonovMikhail/chans_jupyter
Point estimate intervals- When is uncertainty important? - Estimates from sample - Average of a subset - Linear model coefficients- Why is uncertainty important? - Helps inform confidence in estimate - Necessary for decision making - Acknowledges limitations of data Basic confidence intervals You are a data scientist for a fireworks manufacturer in Des Moines, Iowa. You need to make a case to the city that your company's large fireworks show has not caused any harm to the city's air. To do this, you look at the average levels for pollutants in the week after the Fourth of July and how they compare to readings taken after your last show. By showing confidence intervals around the averages, you can make a case that the recent readings were well within the normal range.
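Before plotting, it can help to see the interval arithmetic on its own. The sketch below builds a 95% interval for a sample mean from scratch; the readings are made-up numbers, not the course dataset:

```python
import numpy as np

# Hypothetical daily pollutant readings (illustrative values only)
readings = np.array([4.2, 3.9, 4.5, 4.1, 4.3, 3.8, 4.4])

mean = readings.mean()
# Standard error of the mean: sample std / sqrt(n)
std_err = readings.std(ddof=1) / np.sqrt(len(readings))

# 95% CI uses the normal critical value 1.96
lower, upper = mean - 1.96 * std_err, mean + 1.96 * std_err
print(f"mean={mean:.2f}, 95% CI=({lower:.2f}, {upper:.2f})")
```

The `average_ests` table below already carries `mean` and `std_err` columns, so the same two lines of arithmetic produce its `lower` and `upper` bounds.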
average_ests = pd.read_csv('./dataset/average_ests.csv', index_col=0) average_ests # Construct CI bounds for averages average_ests['lower'] = average_ests['mean'] - 1.96 * average_ests['std_err'] average_ests['upper'] = average_ests['mean'] + 1.96 * average_ests['std_err'] # Setup a grid of plots, with non-shared x axes limits g = sns.FacetGrid(average_ests, row='pollutant', sharex=False, aspect=2); # Plot CI for average estimate g.map(plt.hlines, 'y', 'lower', 'upper'); # Plot observed values for comparison and remove axes labels g.map(plt.scatter, 'seen', 'y', color='orangered').set_ylabels('').set_xlabels('');
_____no_output_____
Apache-2.0
_notebooks/2020-06-29-01-Showing-uncertainty.ipynb
AntonovMikhail/chans_jupyter
This simple visualization shows that all the observed values fall well within the confidence intervals for all the pollutants except for $O_3$. Annotating confidence intervals Your data science work with pollution data is legendary, and you are now weighing job offers in both Cincinnati, Ohio and Indianapolis, Indiana. You want to see if the SO2 levels are significantly different in the two cities, and more specifically, which city has lower levels. To test this, you decide to look at the differences in the cities' SO2 values (Indianapolis' - Cincinnati's) over multiple years. Instead of just displaying a p-value for a significant difference between the cities, you decide to look at the 95% confidence intervals (columns `lower` and `upper`) of the differences. This allows you to see the magnitude of the differences along with any trends over the years.
diffs_by_year = pd.read_csv('./dataset/diffs_by_year.csv', index_col=0) diffs_by_year # Set start and ends according to intervals # Make intervals thicker plt.hlines(y='year', xmin='lower', xmax='upper', linewidth=5, color='steelblue', alpha=0.7, data=diffs_by_year); # Point estimates plt.plot('mean', 'year', 'k|', data=diffs_by_year); # Add a 'null' reference line at 0 and color orangered plt.axvline(x=0, color='orangered', linestyle='--'); # Set descriptive axis labels and title plt.xlabel('95% CI'); plt.title('Avg SO2 differences between Cincinnati and Indianapolis');
_____no_output_____
Apache-2.0
_notebooks/2020-06-29-01-Showing-uncertainty.ipynb
AntonovMikhail/chans_jupyter
By looking at the confidence intervals you can see that the difference flipped from generally positive (more pollution in Cincinnati) in 2013 to negative (more pollution in Indianapolis) in 2014 and 2015. Given that every year's confidence interval contains the null value of zero, no p-value would be significant, and a plot that only showed significance would have entirely hidden this trend. Confidence bands Making a confidence band Vandenberg Air Force Base is often used as a location to launch rockets into space. You have a theory that a recent increase in the pace of rocket launches could be harming the air quality in the surrounding region. To explore this, you plotted a 25-day rolling average line of the measurements of atmospheric $NO_2$. To help decide if any pattern observed is random noise or not, you decide to add a 99% confidence band around your rolling mean. Adding a confidence band to a trend line can help shed light on the stability of the trend seen. This can either increase or decrease the confidence in the discovered trend.
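As a standalone sketch of the band construction (rolling mean ± z × rolling standard error), here is the same idea on synthetic data; the series and window below are illustrative stand-ins, not the Vandenberg readings:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic daily pollutant series (made-up data)
daily = pd.Series(10 + rng.normal(0, 2, 365))

window = 25
rolling_mean = daily.rolling(window).mean()
# Standard error of each window's mean: window std / sqrt(window size)
rolling_se = daily.rolling(window).std() / np.sqrt(window)

# 99% band: mean +/- 2.58 standard errors, as in the exercise below
lower = rolling_mean - 2.58 * rolling_se
upper = rolling_mean + 2.58 * rolling_se
```

The exercise's `vandenberg_NO2` frame arrives with `mean` and `std_err` already computed per day, so only the last two lines are needed there.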
vandenberg_NO2 = pd.read_csv('./dataset/vandenberg_NO2.csv', index_col=0) vandenberg_NO2.head() # Draw 99% interval bands for average NO2 vandenberg_NO2['lower'] = vandenberg_NO2['mean'] - 2.58 * vandenberg_NO2['std_err'] vandenberg_NO2['upper'] = vandenberg_NO2['mean'] + 2.58 * vandenberg_NO2['std_err'] # Plot mean estimate as a white semi-transparent line plt.plot('day', 'mean', data=vandenberg_NO2, color='white', alpha=0.4); # Fill between the upper and lower confidence band values plt.fill_between(x='day', y1='lower', y2='upper', data=vandenberg_NO2);
_____no_output_____
Apache-2.0
_notebooks/2020-06-29-01-Showing-uncertainty.ipynb
AntonovMikhail/chans_jupyter
This plot shows that the middle of the year's $NO_2$ values are not only lower than the beginning and end of the year but also are less noisy. If just the moving average line were plotted, then this potentially interesting observation would be completely missed. (Can you think of what may cause reduced variance at the lower values of the pollutant?) Separating a lot of bands It is relatively simple to plot a bunch of trend lines on top of each other for rapid and precise comparisons. Unfortunately, if you need to add uncertainty bands around those lines, the plot becomes very difficult to read. Figuring out whether a line corresponds to the top of one class' band or the bottom of another's can be hard due to band overlap. Luckily in Seaborn, it's not difficult to break up the overlapping bands into separate faceted plots. To see this, explore trends in SO2 levels for a few cities in the eastern half of the US. If you plot the trends and their confidence bands on a single plot - it's a mess. To fix this, use Seaborn's `FacetGrid()` function to spread out the confidence intervals to multiple panes to ease your inspection.
eastern_SO2 = pd.read_csv('./dataset/eastern_SO2.csv', index_col=0) eastern_SO2.head() # setup a grid of plots with columns divided by location g = sns.FacetGrid(eastern_SO2, col='city', col_wrap=2); # Map interval plots to each cities data with coral colored ribbons g.map(plt.fill_between, 'day', 'lower', 'upper', color='coral'); # Map overlaid mean plots with white line g.map(plt.plot, 'day', 'mean', color='white');
_____no_output_____
Apache-2.0
_notebooks/2020-06-29-01-Showing-uncertainty.ipynb
AntonovMikhail/chans_jupyter
By separating each band into its own plot you can investigate each city with ease. Here, you see that Des Moines and Houston on average have lower SO2 values for the entire year than the two cities in the Midwest. Cincinnati has a high and variable peak near the beginning of the year but is generally more stable and lower than Indianapolis. Cleaning up bands for overlaps You are working for the city of Denver, Colorado and want to run an ad campaign about how much cleaner Denver's air is than Long Beach, California's air. To investigate this claim, you will compare the SO2 levels of both cities for the year 2014. Since you are solely interested in how the cities compare, you want to keep the bands on the same plot. To make the bands easier to compare, decrease the opacity of the confidence bands and set a clear legend.
SO2_compare = pd.read_csv('./dataset/SO2_compare.csv', index_col=0) SO2_compare.head() for city, color in [('Denver', '#66c2a5'), ('Long Beach', '#fc8d62')]: # Filter data to desired city city_data = SO2_compare[SO2_compare.city == city] # Set city interval color to desired and lower opacity plt.fill_between(x='day', y1='lower', y2='upper', data=city_data, color=color, alpha=0.4); # Draw a faint mean line for reference and give a label for legend plt.plot('day', 'mean', data=city_data, label=city, color=color, alpha=0.25); plt.legend();
_____no_output_____
Apache-2.0
_notebooks/2020-06-29-01-Showing-uncertainty.ipynb
AntonovMikhail/chans_jupyter
From these two curves you can see that during the first half of the year Long Beach generally has a higher average SO2 value than Denver, in the middle of the year they are very close, and at the end of the year Denver seems to have higher averages. However, by showing the confidence intervals, you can see that almost none of the year shows a statistically meaningful difference in average values between the two cities. Beyond 95% 90, 95, and 99% intervals You are a data scientist for an outdoor adventure company in Fairbanks, Alaska. Recently, customers have been having issues with SO2 pollution, leading to costly cancellations. The company has sensors for CO, NO2, and O3 but not SO2 levels. You've built a model that predicts SO2 values based on the values of pollutants with sensors (loaded as `pollution_model`, a `statsmodels` object). You want to investigate which pollutant's value has the largest effect on your model's SO2 prediction. This will help you know which pollutant's values to pay most attention to when planning outdoor tours. To maximize the amount of information in your report, show multiple levels of uncertainty for the model estimates.
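The multipliers used for these interval widths come from the normal distribution's two-sided critical values. A quick check of where they come from (assuming `scipy` is available, which `statsmodels` already requires):

```python
from scipy.stats import norm

# Two-sided critical value: split alpha = 1 - level between the two tails
for level in (0.90, 0.95, 0.99):
    z = norm.ppf(1 - (1 - level) / 2)
    print(f"{level:.0%} interval -> z = {z:.2f}")  # prints 1.64, 1.96, 2.58
```

This is why the exercises pair alphas of 0.1, 0.05, and 0.01 with multipliers of roughly 1.64, 1.96, and 2.58.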
from statsmodels.formula.api import ols pollution = pd.read_csv('./dataset/pollution_wide.csv') pollution = pollution.query("city == 'Fairbanks' & year == 2014 & month == 11") pollution_model = ols(formula='SO2 ~ CO + NO2 + O3 + day', data=pollution) res = pollution_model.fit() # Add interval percent widths alphas = [ 0.01, 0.05, 0.1] widths = [ '99% CI', '95%', '90%'] colors = ['#fee08b','#fc8d59','#d53e4f'] for alpha, color, width in zip(alphas, colors, widths): # Grab confidence interval conf_ints = res.conf_int(alpha) # Pass current interval color and legend label to plot plt.hlines(y = conf_ints.index, xmin = conf_ints[0], xmax = conf_ints[1], colors = color, label = width, linewidth = 10) # Draw point estimates plt.plot(res.params, res.params.index, 'wo', label = 'Point Estimate') plt.legend(loc = 'upper right')
_____no_output_____
Apache-2.0
_notebooks/2020-06-29-01-Showing-uncertainty.ipynb
AntonovMikhail/chans_jupyter
90 and 99% bands You are looking at a 40-day rolling average of the $NO_2$ pollution levels for the city of Cincinnati in 2013. To provide as detailed a picture of the uncertainty in the trend as possible, you want to look at both the 90 and 99% intervals around this rolling estimate. To do this, set up your two interval sizes and an orange ordinal color palette. Additionally, to enable precise readings of the bands, make them semi-transparent, so the Seaborn background grids show through.
cinci_13_no2 = pd.read_csv('./dataset/cinci_13_no2.csv', index_col=0); cinci_13_no2.head() int_widths = ['90%', '99%'] z_scores = [1.67, 2.58] colors = ['#fc8d59', '#fee08b'] for percent, Z, color in zip(int_widths, z_scores, colors): # Pass lower and upper confidence bounds and lower opacity plt.fill_between( x = cinci_13_no2.day, alpha = 0.4, color = color, y1 = cinci_13_no2['mean'] - Z * cinci_13_no2['std_err'], y2 = cinci_13_no2['mean'] + Z * cinci_13_no2['std_err'], label = percent); plt.legend();
_____no_output_____
Apache-2.0
_notebooks/2020-06-29-01-Showing-uncertainty.ipynb
AntonovMikhail/chans_jupyter
This plot shows us that throughout 2013, the average NO2 values in Cincinnati followed a cyclical pattern with the seasons. However, the uncertainty bands show that for most of the year you can't be sure this pattern is not noise at either the 90 or the 99% confidence level. Using band thickness instead of coloring. You are a researcher investigating the relationship between the elevation a rocket reaches before visual is lost and pollutant levels at Vandenberg Air Force Base. You've built a model to predict this relationship, and since you are working independently, you don't have the money to pay for color figures in your journal article. You need to make your model results plot work in black and white. To do this, you will plot the 90, 95, and 99% intervals of the effect of each pollutant as successively smaller bars.
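As an aside, the z critical values used in these interval exercises (roughly 1.67, 1.96, and 2.58) come from the normal quantile function. A quick sketch using only Python's standard library (`statistics.NormalDist`) shows where they come from; note the exercises round the 90% value up slightly:

```python
from statistics import NormalDist

# For a two-sided (1 - alpha) interval, the critical value is inv_cdf(1 - alpha / 2)
z90 = NormalDist().inv_cdf(0.95)   # 90% interval -> ~1.645 (rounded to 1.67 in the exercises)
z95 = NormalDist().inv_cdf(0.975)  # 95% interval -> ~1.960
z99 = NormalDist().inv_cdf(0.995)  # 99% interval -> ~2.576 (~2.58)
print(round(z90, 2), round(z95, 2), round(z99, 2))  # 1.64 1.96 2.58
```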
rocket_model = pd.read_csv('./dataset/rocket_model.csv', index_col=0) rocket_model # Decrease interval thickness as interval widens sizes = [ 15, 10, 5] int_widths = ['90% CI', '95%', '99%'] z_scores = [ 1.67, 1.96, 2.58] for percent, Z, size in zip(int_widths, z_scores, sizes): plt.hlines(y = rocket_model.pollutant, xmin = rocket_model['est'] - Z * rocket_model['std_err'], xmax = rocket_model['est'] + Z * rocket_model['std_err'], label = percent, # Resize lines and color them gray linewidth = size, color = 'gray'); # Add point estimate plt.plot('est', 'pollutant', 'wo', data = rocket_model, label = 'Point Estimate'); plt.legend(loc = 'center left', bbox_to_anchor = (1, 0.5));
_____no_output_____
Apache-2.0
_notebooks/2020-06-29-01-Showing-uncertainty.ipynb
AntonovMikhail/chans_jupyter
While less elegant than using color to differentiate interval sizes, this plot still clearly allows the reader to assess the effect each pollutant has on rocket visibility. You can see that of all the pollutants, O3 has the largest effect and also the tightest confidence bounds. Visualizing the bootstrap. The bootstrap histogram. You are considering a vacation to Cincinnati in May, but you have a severe sensitivity to NO2. You pull a few years of pollution data from Cincinnati in May and look at a bootstrap estimate of the average $NO_2$ levels. You only have one estimate to look at, so the best way to visualize the results of your bootstrap estimates is with a histogram. While you like the intuition of the bootstrap histogram by itself, your partner, who will be going on the vacation with you, likes seeing percent intervals. To accommodate them, you decide to highlight the 95% interval by shading the region.
# Perform bootstrapped mean on a vector def bootstrap(data, n_boots): return [np.mean(np.random.choice(data,len(data))) for _ in range(n_boots) ] pollution = pd.read_csv('./dataset/pollution_wide.csv') cinci_may_NO2 = pollution.query("city == 'Cincinnati' & month == 5").NO2 # Generate bootstrap samples boot_means = bootstrap(cinci_may_NO2, 1000) # Get lower and upper 95% interval bounds lower, upper = np.percentile(boot_means, [2.5, 97.5]) # Plot shaded area for interval plt.axvspan(lower, upper, color = 'gray', alpha = 0.2); # Draw histogram of bootstrap samples sns.distplot(boot_means, bins = 100, kde = False);
_____no_output_____
Apache-2.0
_notebooks/2020-06-29-01-Showing-uncertainty.ipynb
AntonovMikhail/chans_jupyter
Your bootstrap histogram looks stable and uniform. You're now confident that the average NO2 levels in Cincinnati during your vacation should be in the range of 16 to 23. Bootstrapped regressions. While working for the Long Beach parks and recreation department investigating the relationship between $NO_2$ and $SO_2$, you noticed a cluster of potential outliers that you suspect might be throwing off the correlations. Investigate the uncertainty of your correlations through bootstrap resampling to see how stable your fits are. For convenience, the bootstrap sampling is complete and is provided as `no2_so2_boot`, along with `no2_so2` for the non-resampled data.
no2_so2 = pd.read_csv('./dataset/no2_so2.csv', index_col=0) no2_so2_boot = pd.read_csv('./dataset/no2_so2_boot.csv', index_col=0) sns.lmplot('NO2', 'SO2', data = no2_so2_boot, # Tell seaborn to draw a regression line for each sample hue = 'sample', # Make lines blue and transparent line_kws = {'color': 'steelblue', 'alpha': 0.2}, # Disable built-in confidence intervals ci = None, legend = False, scatter = False); # Draw scatter of all points plt.scatter('NO2', 'SO2', data = no2_so2);
_____no_output_____
Apache-2.0
_notebooks/2020-06-29-01-Showing-uncertainty.ipynb
AntonovMikhail/chans_jupyter
The outliers appear to drag down the regression lines, as evidenced by the cluster of lines with more severe slopes than average. In a single plot, you have not only gotten a good idea of the variability of your correlation estimate but also the potential effects of outliers. Lots of bootstraps with beeswarms. As a current resident of Cincinnati, you're curious to see how the average NO2 values compare to Des Moines, Indianapolis, and Houston: a few other cities you've lived in. To look at this, you decide to use bootstrap estimation to look at the mean NO2 values for each city. Because the comparisons are of primary interest, you will use a swarm plot to compare the estimates.
pollution_may = pollution.query("month == 5") pollution_may # Initialize a holder DataFrame for bootstrap results city_boots = pd.DataFrame() for city in ['Cincinnati', 'Des Moines', 'Indianapolis', 'Houston']: # Filter to city city_NO2 = pollution_may[pollution_may.city == city].NO2 # Bootstrap city data & put in DataFrame cur_boot = pd.DataFrame({'NO2_avg': bootstrap(city_NO2, 100), 'city': city}) # Append to other cities' bootstraps city_boots = pd.concat([city_boots, cur_boot]) # Beeswarm plot of averages with cities on y axis sns.swarmplot(y = "city", x = "NO2_avg", data = city_boots, color = 'coral');
_____no_output_____
Apache-2.0
_notebooks/2020-06-29-01-Showing-uncertainty.ipynb
AntonovMikhail/chans_jupyter
Define our search parameters and send to the USGS Use a dict with the same names as used by the USGS web call. Send a query to the web server. The result is a list of events, also in dict format.
search_params = { 'starttime': "2018-05-01", 'endtime': "2018-05-17", 'minmagnitude': 6.8, 'maxmagnitude': 10.0, 'mindepth': 0.0, 'maxdepth': 50.0, 'minlongitude': -180.0, 'maxlongitude': -97.0, 'minlatitude': 0.0, 'maxlatitude': 45.0, 'limit': 50, 'producttype': 'shakemap' } events = usgs_web.search_usgsevents(search_params)
Sending query to get events... Parsing... ...1 events returned (limit of 50) 70116556 : M 6.9 - 19km SSW of Leilani Estates, Hawaii
MIT
notebooks/find_a_shakemap.ipynb
iwbailey/shakemap_lookup
Check the metadata Display metadata including number of earthquakes returned and what url was used for the query
for k, v in events['metadata'].items(): print(k,":", v)
generated : 1575582197000 url : https://earthquake.usgs.gov/fdsnws/event/1/query?starttime=2018-05-01&endtime=2018-05-17&minmagnitude=6.8&maxmagnitude=10.0&mindepth=0.0&maxdepth=50.0&minlongitude=-180.0&maxlongitude=-97.0&minlatitude=0.0&maxlatitude=45.0&limit=50&producttype=shakemap&format=geojson&jsonerror=true title : USGS Earthquakes status : 200 api : 1.8.1 limit : 50 offset : 1 count : 1
MIT
notebooks/find_a_shakemap.ipynb
iwbailey/shakemap_lookup
Selection of event from candidates
my_event = usgs_web.choose_event(events) my_event
USER SELECTION OF EVENT: ======================== 0: M 6.9 - 19km SSW of Leilani Estates, Hawaii (70116556) None: First on list -1: Exit Choice: ... selected M 6.9 - 19km SSW of Leilani Estates, Hawaii (70116556)
MIT
notebooks/find_a_shakemap.ipynb
iwbailey/shakemap_lookup
Select which ShakeMap for the selected event
smDetail = usgs_web.query_shakemapdetail(my_event['properties'])
Querying detailed event info for eventId=70116556... ...2 shakemaps found USER SELECTION OF SHAKEMAP: =========================== Option 0: eventsourcecode: 70116556 version: 1 process-timestamp: 2018-09-08T02:52:24Z Option 1: eventsourcecode: 1000dyad version: 11 process-timestamp: 2018-06-15T23:02:03Z Choice [default 0]: ... selected 0
MIT
notebooks/find_a_shakemap.ipynb
iwbailey/shakemap_lookup
Display available content for the ShakeMap
print("Available Content\n=================") for k, v in smDetail['contents'].items(): print("{:32s}: {} [{}]".format(k, v['contentType'], v['length']))
Available Content ================= about_formats.html : text/html [28820] contents.xml : application/xml [9187] download/70116556.kml : application/vnd.google-earth.kml+xml [1032] download/cont_mi.json : application/json [79388] download/cont_mi.kmz : application/vnd.google-earth.kmz [17896] download/cont_pga.json : application/json [17499] download/cont_pga.kmz : application/vnd.google-earth.kmz [4362] download/cont_pgv.json : application/json [12352] download/cont_pgv.kmz : application/vnd.google-earth.kmz [3309] download/cont_psa03.json : application/json [24669] download/cont_psa03.kmz : application/vnd.google-earth.kmz [5843] download/cont_psa10.json : application/json [15028] download/cont_psa10.kmz : application/vnd.google-earth.kmz [3843] download/cont_psa30.json : application/json [7537] download/cont_psa30.kmz : application/vnd.google-earth.kmz [2254] download/epicenter.kmz : application/vnd.google-earth.kmz [1299] download/event.txt : text/plain [125] download/grid.xml : application/xml [3423219] download/grid.xml.zip : application/zip [493382] download/grid.xyz.zip : application/zip [428668] download/hazus.zip : application/zip [329755] download/hv70116556.kml : application/vnd.google-earth.kml+xml [1032] download/hv70116556.kmz : application/vnd.google-earth.kmz [127511] download/ii_overlay.png : image/png [25259] download/ii_thumbnail.jpg : image/jpeg [3530] download/info.json : application/json [2237] download/intensity.jpg : image/jpeg [60761] download/intensity.ps.zip : application/zip [139098] download/metadata.txt : text/plain [33137] download/mi_regr.png : image/png [35160] download/overlay.kmz : application/vnd.google-earth.kmz [25245] download/pga.jpg : image/jpeg [49594] download/pga.ps.zip : application/zip [89668] download/pga_regr.png : image/png [33466] download/pgv.jpg : image/jpeg [49781] download/pgv.ps.zip : application/zip [89389] download/pgv_regr.png : image/png [17605] download/polygons_mi.kmz : application/vnd.google-earth.kmz 
[43271] download/psa03.jpg : image/jpeg [49354] download/psa03.ps.zip : application/zip [90027] download/psa03_regr.png : image/png [18371] download/psa10.jpg : image/jpeg [49003] download/psa10.ps.zip : application/zip [89513] download/psa10_regr.png : image/png [31310] download/psa30.jpg : image/jpeg [48956] download/psa30.ps.zip : application/zip [89113] download/psa30_regr.png : image/png [18055] download/raster.zip : application/zip [1940448] download/rock_grid.xml.zip : application/zip [403486] download/sd.jpg : image/jpeg [45869] download/shape.zip : application/zip [1029832] download/stationlist.json : application/json [55083] download/stationlist.txt : text/plain [6737] download/stationlist.xml : application/xml [32441] download/stations.kmz : application/vnd.google-earth.kmz [7343] download/tvguide.txt : text/plain [8765] download/tvmap.jpg : image/jpeg [44223] download/tvmap.ps.zip : application/zip [273000] download/tvmap_bare.jpg : image/jpeg [48640] download/tvmap_bare.ps.zip : application/zip [273146] download/uncertainty.xml.zip : application/zip [211743] download/urat_pga.jpg : image/jpeg [45869] download/urat_pga.ps.zip : application/zip [51741] intensity.html : text/html [19291] pga.html : text/html [19083] pgv.html : text/html [19083] products.html : text/html [18584] psa03.html : text/html [20250] psa10.html : text/html [20249] psa30.html : text/html [20249] stationlist.html : text/html [127947]
MIT
notebooks/find_a_shakemap.ipynb
iwbailey/shakemap_lookup
Get download links. Click on the link to download.
# Extract the shakemap grid urls and version from the detail grid = smDetail['contents']['download/grid.xml.zip'] print(grid['url']) grid = smDetail['contents']['download/uncertainty.xml.zip'] print(grid['url'])
https://earthquake.usgs.gov/archive/product/shakemap/hv70116556/us/1536375199192/download/uncertainty.xml.zip
MIT
notebooks/find_a_shakemap.ipynb
iwbailey/shakemap_lookup
Lambda School Data Science *Unit 2, Sprint 3, Module 3* --- Permutation & Boosting - Get **permutation importances** for model interpretation and feature selection - Use xgboost for **gradient boosting** Setup Run the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab. Libraries: - category_encoders - [**eli5**](https://eli5.readthedocs.io/en/latest/) - matplotlib - numpy - pandas - scikit-learn - [**xgboost**](https://xgboost.readthedocs.io/en/latest/)
%%capture import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/' !pip install category_encoders==2.* !pip install eli5 # If you're working locally: else: DATA_PATH = '../data/'
_____no_output_____
MIT
module3-permutation-boosting/LS_DS_233.ipynb
mariokart345/DS-Unit-2-Applied-Modeling
We'll go back to Tanzania Waterpumps for this lesson.
import numpy as np import pandas as pd from sklearn.model_selection import train_test_split # Merge train_features.csv & train_labels.csv train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')) # Read test_features.csv & sample_submission.csv test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv') sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') # Split train into train & val train, val = train_test_split(train, train_size=0.80, test_size=0.20, stratify=train['status_group'], random_state=42) def wrangle(X): """Wrangle train, validate, and test sets in the same way""" # Prevent SettingWithCopyWarning X = X.copy() # About 3% of the time, latitude has small values near zero, # outside Tanzania, so we'll treat these values like zero. X['latitude'] = X['latitude'].replace(-2e-08, 0) # When columns have zeros and shouldn't, they are like null values. # So we will replace the zeros with nulls, and impute missing values later. # Also create a "missing indicator" column, because the fact that # values are missing may be a predictive signal. 
cols_with_zeros = ['longitude', 'latitude', 'construction_year', 'gps_height', 'population'] for col in cols_with_zeros: X[col] = X[col].replace(0, np.nan) X[col+'_MISSING'] = X[col].isnull() # Drop duplicate columns duplicates = ['quantity_group', 'payment_type'] X = X.drop(columns=duplicates) # Drop recorded_by (never varies) and id (always varies, random) unusable_variance = ['recorded_by', 'id'] X = X.drop(columns=unusable_variance) # Convert date_recorded to datetime X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True) # Extract components from date_recorded, then drop the original column X['year_recorded'] = X['date_recorded'].dt.year X['month_recorded'] = X['date_recorded'].dt.month X['day_recorded'] = X['date_recorded'].dt.day X = X.drop(columns='date_recorded') # Engineer feature: how many years from construction_year to date_recorded X['years'] = X['year_recorded'] - X['construction_year'] X['years_MISSING'] = X['years'].isnull() # return the wrangled dataframe return X train = wrangle(train) val = wrangle(val) test = wrangle(test) # Arrange data into X features matrix and y target vector target = 'status_group' X_train = train.drop(columns=target) y_train = train[target] X_val = val.drop(columns=target) y_val = val[target] X_test = test import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split from sklearn.pipeline import make_pipeline pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='median'), RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1) ) # Fit on train, score on val pipeline.fit(X_train, y_train) print('Validation Accuracy', pipeline.score(X_val, y_val))
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead. import pandas.util.testing as tm
MIT
module3-permutation-boosting/LS_DS_233.ipynb
mariokart345/DS-Unit-2-Applied-Modeling
Get permutation importances for model interpretation and feature selection Overview Default Feature Importances are fast, but Permutation Importances may be more accurate. These links go deeper with explanations and examples: - Permutation Importances - [Kaggle / Dan Becker: Machine Learning Explainability](https://www.kaggle.com/dansbecker/permutation-importance) - [Christoph Molnar: Interpretable Machine Learning](https://christophm.github.io/interpretable-ml-book/feature-importance.html) - (Default) Feature Importances - [Ando Saabas: Selecting good features, Part 3, Random Forests](https://blog.datadive.net/selecting-good-features-part-iii-random-forests/) - [Terence Parr, et al: Beware Default Random Forest Importances](https://explained.ai/rf-importance/index.html) There are three types of feature importances: 1. (Default) Feature Importances. Fastest, good for first estimates, but be aware: > **When the dataset has two (or more) correlated features, then from the point of view of the model, any of these correlated features can be used as the predictor, with no concrete preference of one over the others.** But once one of them is used, the importance of others is significantly reduced since effectively the impurity they can remove is already removed by the first feature. As a consequence, they will have a lower reported importance. This is not an issue when we want to use feature selection to reduce overfitting, since it makes sense to remove features that are mostly duplicated by other features. But when interpreting the data, it can lead to the incorrect conclusion that one of the variables is a strong predictor while the others in the same group are unimportant, while actually they are very close in terms of their relationship with the response variable. — [Selecting good features – Part III: random forests](https://blog.datadive.net/selecting-good-features-part-iii-random-forests/) > **The scikit-learn Random Forest feature importance ... 
tends to inflate the importance of continuous or high-cardinality categorical variables.** ... Breiman and Cutler, the inventors of Random Forests, indicate that this method of “adding up the gini decreases for each individual variable over all trees in the forest gives a **fast** variable importance that is often very consistent with the permutation importance measure.” — [Beware Default Random Forest Importances](https://explained.ai/rf-importance/index.html)
# Get feature importances rf = pipeline.named_steps['randomforestclassifier'] importances = pd.Series(rf.feature_importances_, X_train.columns) # Plot feature importances %matplotlib inline import matplotlib.pyplot as plt n = 20 plt.figure(figsize=(10,n/2)) plt.title(f'Top {n} features') importances.sort_values()[-n:].plot.barh(color='grey');
_____no_output_____
MIT
module3-permutation-boosting/LS_DS_233.ipynb
mariokart345/DS-Unit-2-Applied-Modeling
2. Drop-Column Importance. The best in theory, but too slow in practice.
column = 'wpt_name' # Fit without column pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='median'), RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1) ) pipeline.fit(X_train.drop(columns=column), y_train) score_without = pipeline.score(X_val.drop(columns=column), y_val) print(f'Validation Accuracy without {column}: {score_without}') # Fit with column pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='median'), RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1) ) pipeline.fit(X_train, y_train) score_with = pipeline.score(X_val, y_val) print(f'Validation Accuracy with {column}: {score_with}') # Compare the error with & without column print(f'Drop-Column Importance for {column}: {score_with - score_without}')
Validation Accuracy without wpt_name: 0.8087542087542088 Validation Accuracy with wpt_name: 0.8135521885521886 Drop-Column Importance for wpt_name: 0.004797979797979801
MIT
module3-permutation-boosting/LS_DS_233.ipynb
mariokart345/DS-Unit-2-Applied-Modeling
3. Permutation Importance. Permutation Importance is a good compromise between Feature Importance based on impurity reduction (which is the fastest) and Drop-Column Importance (which is the "best"). [The ELI5 library documentation explains,](https://eli5.readthedocs.io/en/latest/blackbox/permutation_importance.html) > Importance can be measured by looking at how much the score (accuracy, F1, R^2, etc. - any score we’re interested in) decreases when a feature is not available. >> To do that one can remove feature from the dataset, re-train the estimator and check the score. But it requires re-training an estimator for each feature, which can be computationally intensive. ... >> To avoid re-training the estimator we can remove a feature only from the test part of the dataset, and compute score without using this feature. It doesn’t work as-is, because estimators expect feature to be present. So instead of removing a feature we can replace it with random noise - feature column is still there, but it no longer contains useful information. This method works if noise is drawn from the same distribution as original feature values (as otherwise estimator may fail). The simplest way to get such noise is to shuffle values for a feature, i.e. use other examples’ feature values - this is how permutation importance is computed. >> The method is most suitable for computing feature importances when a number of columns (features) is not huge; it can be resource-intensive otherwise. Do-It-Yourself way, for intuition
# Let's see how permutation works first nevi_array = [1,2,3,4,5] nevi_permuted = np.random.permutation(nevi_array) nevi_permuted # BEFORE: sequence of the feature to be permuted feature = 'quantity' X_val[feature].head() # BEFORE: distribution X_val[feature].value_counts() # PERMUTE X_val_permuted = X_val.copy() X_val_permuted[feature] = np.random.permutation(X_val[feature]) # AFTER: sequence of the feature to be permuted feature = 'quantity' X_val_permuted[feature].head() # AFTER: distribution X_val_permuted[feature].value_counts() # Get the permutation importance X_val_permuted[feature] = np.random.permutation(X_val[feature]) score_permuted = pipeline.score(X_val_permuted, y_val) print(f'Validation Accuracy with {feature}: {score_with}') print(f'Validation Accuracy with {feature} permuted: {score_permuted}') print(f'Permutation Importance: {score_with - score_permuted}') feature = 'wpt_name' X_val_permuted = X_val.copy() X_val_permuted[feature] = np.random.permutation(X_val[feature]) score_permuted = pipeline.score(X_val_permuted, y_val) print(f'Validation Accuracy with {feature}: {score_with}') print(f'Validation Accuracy with {feature} permuted: {score_permuted}') print(f'Permutation Importance: {score_with - score_permuted}') X_val[feature]
_____no_output_____
MIT
module3-permutation-boosting/LS_DS_233.ipynb
mariokart345/DS-Unit-2-Applied-Modeling
With the eli5 library. For more documentation on using this library, see: - [eli5.sklearn.PermutationImportance](https://eli5.readthedocs.io/en/latest/autodocs/sklearn.html#eli5.sklearn.permutation_importance.PermutationImportance) - [eli5.show_weights](https://eli5.readthedocs.io/en/latest/autodocs/eli5.html#eli5.show_weights) - [scikit-learn user guide, `scoring` parameter](https://scikit-learn.org/stable/modules/model_evaluation.html#the-scoring-parameter-defining-model-evaluation-rules) eli5 doesn't work with pipelines.
# Ignore warnings transformers = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='median') ) X_train_transformed = transformers.fit_transform(X_train) X_val_transformed = transformers.transform(X_val) model = RandomForestClassifier(n_estimators=50, random_state=42, n_jobs=-1) model.fit(X_train_transformed, y_train) import eli5 from eli5.sklearn import PermutationImportance permuter = PermutationImportance( model, scoring='accuracy', n_iter=5, random_state=42 ) permuter.fit(X_val_transformed,y_val) feature_names = X_val.columns.to_list() pd.Series(permuter.feature_importances_, feature_names).sort_values(ascending=False) eli5.show_weights( permuter, top=None, feature_names=feature_names )
_____no_output_____
MIT
module3-permutation-boosting/LS_DS_233.ipynb
mariokart345/DS-Unit-2-Applied-Modeling
We can use importances for feature selection. For example, we can remove features with zero importance. The model trains faster and the score does not decrease.
print('Shape before removing features:', X_train.shape) # Remove features with zero or negative permutation importance minimum_importance = 0 mask = permuter.feature_importances_ > minimum_importance features = X_train.columns[mask] X_train = X_train[features] print('Shape after removing features:', X_train.shape) X_val = X_val[features] pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='mean'), RandomForestClassifier(n_estimators=50, random_state=42, n_jobs=-1) ) # Fit on train, score on val pipeline.fit(X_train, y_train) print('Validation accuracy', pipeline.score(X_val, y_val))
Validation accuracy 0.8066498316498316
MIT
module3-permutation-boosting/LS_DS_233.ipynb
mariokart345/DS-Unit-2-Applied-Modeling
Use xgboost for gradient boosting Overview In the Random Forest lesson, you learned this advice: Try Tree Ensembles when you do machine learning with labeled, tabular data: - "Tree Ensembles" means Random Forest or **Gradient Boosting** models. - [Tree Ensembles often have the best predictive accuracy](https://arxiv.org/abs/1708.05070) with labeled, tabular data. - Why? Because trees can fit non-linear, non-[monotonic](https://en.wikipedia.org/wiki/Monotonic_function) relationships, and [interactions](https://christophm.github.io/interpretable-ml-book/interaction.html) between features. - A single decision tree, grown to unlimited depth, will [overfit](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/). We solve this problem by ensembling trees, with bagging (Random Forest) or **[boosting](https://www.youtube.com/watch?v=GM3CDQfQ4sw)** (Gradient Boosting). - Random Forest's advantage: may be less sensitive to hyperparameters. **Gradient Boosting's advantage:** may get better predictive accuracy. Like Random Forest, Gradient Boosting uses ensembles of trees. But the details of the ensembling technique are different: Understand the difference between boosting & bagging Boosting (used by Gradient Boosting) is different from Bagging (used by Random Forests).
Here's an excerpt from [_An Introduction to Statistical Learning_](http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Seventh%20Printing.pdf) Chapter 8.2.3, Boosting:>Recall that bagging involves creating multiple copies of the original training data set using the bootstrap, fitting a separate decision tree to each copy, and then combining all of the trees in order to create a single predictive model.>>**Boosting works in a similar way, except that the trees are grown _sequentially_: each tree is grown using information from previously grown trees.**>>Unlike fitting a single large decision tree to the data, which amounts to _fitting the data hard_ and potentially overfitting, the boosting approach instead _learns slowly._ Given the current model, we fit a decision tree to the residuals from the model.>>We then add this new decision tree into the fitted function in order to update the residuals. Each of these trees can be rather small, with just a few terminal nodes. **By fitting small trees to the residuals, we slowly improve fˆ in areas where it does not perform well.**>>Note that in boosting, unlike in bagging, the construction of each tree depends strongly on the trees that have already been grown.This high-level overview is all you need to know for now. If you want to go deeper, we recommend you watch the StatQuest videos on gradient boosting! Let's write some code. 
We have lots of options for which libraries to use: Python libraries for Gradient Boosting- [scikit-learn Gradient Tree Boosting](https://scikit-learn.org/stable/modules/ensemble.htmlgradient-boosting) — slower than other libraries, but [the new version may be better](https://twitter.com/amuellerml/status/1129443826945396737) - Anaconda: already installed - Google Colab: already installed- [xgboost](https://xgboost.readthedocs.io/en/latest/) — can accept missing values and enforce [monotonic constraints](https://xiaoxiaowang87.github.io/monotonicity_constraint/) - Anaconda, Mac/Linux: `conda install -c conda-forge xgboost` - Windows: `conda install -c anaconda py-xgboost` - Google Colab: already installed- [LightGBM](https://lightgbm.readthedocs.io/en/latest/) — can accept missing values and enforce [monotonic constraints](https://blog.datadive.net/monotonicity-constraints-in-machine-learning/) - Anaconda: `conda install -c conda-forge lightgbm` - Google Colab: already installed- [CatBoost](https://catboost.ai/) — can accept missing values and use [categorical features](https://catboost.ai/docs/concepts/algorithm-main-stages_cat-to-numberic.html) without preprocessing - Anaconda: `conda install -c conda-forge catboost` - Google Colab: `pip install catboost` In this lesson, you'll use a new library, xgboost — But it has an API that's almost the same as scikit-learn, so it won't be a hard adjustment! [XGBoost Python API Reference: Scikit-Learn API](https://xgboost.readthedocs.io/en/latest/python/python_api.htmlmodule-xgboost.sklearn)
from xgboost import XGBClassifier pipeline = make_pipeline( ce.OrdinalEncoder(), XGBClassifier(n_estimators=100, random_state=42, n_jobs=-1) ) pipeline.fit(X_train, y_train) from sklearn.metrics import accuracy_score y_pred=pipeline.predict(X_val) print('Validation score', accuracy_score(y_val, y_pred))
Validation score 0.7453703703703703
MIT
module3-permutation-boosting/LS_DS_233.ipynb
mariokart345/DS-Unit-2-Applied-Modeling
[Avoid Overfitting By Early Stopping With XGBoost In Python](https://machinelearningmastery.com/avoid-overfitting-by-early-stopping-with-xgboost-in-python/) Why is early stopping better than a for loop, or GridSearchCV, to optimize `n_estimators`? With early stopping, if `n_iterations` is our number of iterations, then we fit `n_iterations` decision trees. With a for loop, or GridSearchCV, we'd fit `sum(range(1, n_iterations + 1))` trees. But early stopping doesn't work well with pipelines, and you may need to re-run multiple times with different values of other parameters such as `max_depth` and `learning_rate`. XGBoost parameters - [Notes on parameter tuning](https://xgboost.readthedocs.io/en/latest/tutorials/param_tuning.html) - [Parameters documentation](https://xgboost.readthedocs.io/en/latest/parameter.html)
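To make the tree-count comparison concrete, we can count the trees fitted under each strategy for 1000 boosting rounds:

```python
n_iterations = 1000

# Early stopping: one training run, each of the 1000 trees is fit once
early_stopping_trees = n_iterations

# A for loop or grid search over n_estimators = 1..1000 refits from
# scratch each time, so the k-th run fits k trees
loop_trees = sum(range(1, n_iterations + 1))

print(early_stopping_trees, loop_trees)  # 1000 500500
```

So exhaustively searching `n_estimators` costs roughly 500 times the work of a single early-stopped run here.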
encoder = ce.OrdinalEncoder() X_train_encoded = encoder.fit_transform(X_train) X_val_encoded = encoder.transform(X_val) model = XGBClassifier( n_estimators=1000, # <= 1000 trees, depend on early stopping max_depth=7, # try deeper trees because of high cardinality categoricals learning_rate=0.5, # try higher learning rate n_jobs=-1 ) eval_set = [(X_train_encoded, y_train), (X_val_encoded, y_val)] model.fit(X_train_encoded, y_train, eval_set=eval_set, eval_metric='merror', early_stopping_rounds=50) # Stop if the score hasn't improved in 50 rounds results = model.evals_result() train_error = results['validation_0']['merror'] val_error = results['validation_1']['merror'] epoch = list(range(1, len(train_error)+1)) plt.plot(epoch, train_error, label='Train') plt.plot(epoch, val_error, label='Validation') plt.ylabel('Classification Error') plt.xlabel('Model Complexity (n_estimators)') plt.title('Validation Curve for this XGBoost model') plt.ylim((0.10, 0.25)) # Zoom in plt.legend();
_____no_output_____
MIT
module3-permutation-boosting/LS_DS_233.ipynb
mariokart345/DS-Unit-2-Applied-Modeling
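The tree-count claim a few cells up (early stopping fits `n_iterations` trees in one run, while a for loop or GridSearchCV over `n_estimators` fits `sum(range(1, n_rounds+1))` trees) can be checked with plain arithmetic. A minimal sketch with made-up numbers; `best_iteration=87` is hypothetical, not the result of the run above:

```python
def trees_with_early_stopping(best_iteration):
    # One fitting run: boosting adds one tree per iteration, so stopping
    # at best_iteration means we fit best_iteration trees (plus the small
    # early_stopping_rounds overshoot, ignored here).
    return best_iteration

def trees_with_grid_search(n_rounds):
    # Refit from scratch for every candidate n_estimators in 1..n_rounds:
    # 1 + 2 + ... + n_rounds trees in total.
    return sum(range(1, n_rounds + 1))

print(trees_with_early_stopping(87))   # hypothetical: stopped after 87 iterations -> 87 trees
print(trees_with_grid_search(1000))    # 500500 trees for n_rounds = 1000
```

For `n_rounds = 1000`, the grid-search approach fits 500,500 trees versus at most ~1,050 for a single early-stopped run, which is why early stopping is the usual way to tune `n_estimators`.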
Introductory Data Analysis Workflow

![Pipeline](https://imgs.xkcd.com/comics/data_pipeline.png)

https://xkcd.com/2054

An example machine learning notebook

* Original Notebook by [Randal S. Olson](http://www.randalolson.com/)
* Supported by [Jason H. Moore](http://www.epistasis.org/)
* [University of Pennsylvania Institute for Bioinformatics](http://upibi.org/)
* Adapted for LU Py-Sem 2018 by [Valdis Saulespurens](valdis.s.coding@gmail.com)

**You can also [execute the code in this notebook on Binder](https://mybinder.org/v2/gh/ValRCS/RigaComm_DataAnalysis/master) - no local installation required.**
# text 17.04.2019
import datetime

print(datetime.datetime.now())
print('hello')
2019-06-13 16:12:23.662194 hello
MIT
Irises_ML_Intro/Irises Data Analysis Workflow_06_2019.ipynb
ValRCS/RCS_Data_Analysis_Python_2019_July
Table of contents

1. [Introduction](Introduction)
2. [License](License)
3. [Required libraries](Required-libraries)
4. [The problem domain](The-problem-domain)
5. [Step 1: Answering the question](Step-1:-Answering-the-question)
6. [Step 2: Checking the data](Step-2:-Checking-the-data)
7. [Step 3: Tidying the data](Step-3:-Tidying-the-data)
    - [Bonus: Testing our data](Bonus:-Testing-our-data)
8. [Step 4: Exploratory analysis](Step-4:-Exploratory-analysis)
9. [Step 5: Classification](Step-5:-Classification)
    - [Cross-validation](Cross-validation)
    - [Parameter tuning](Parameter-tuning)
10. [Step 6: Reproducibility](Step-6:-Reproducibility)
11. [Conclusions](Conclusions)
12. [Further reading](Further-reading)
13. [Acknowledgements](Acknowledgements)

Introduction

[[ go back to the top ]](Table-of-contents)

In the time it took you to read this sentence, terabytes of data have been collectively generated across the world — more data than any of us could ever hope to process, much less make sense of, on the machines we're using to read this notebook.

In response to this massive influx of data, the field of Data Science has come to the forefront in the past decade.
Cobbled together by people from a diverse array of fields — statistics, physics, computer science, design, and many more — the field of Data Science represents our collective desire to understand and harness the abundance of data around us to build a better world.

In this notebook, I'm going to go over a basic Python data analysis pipeline from start to finish to show you what a typical data science workflow looks like.

In addition to providing code examples, I also hope to imbue in you a sense of good practices so you can be a more effective — and more collaborative — data scientist.

I will be following along with the data analysis checklist from [The Elements of Data Analytic Style](https://leanpub.com/datastyle), which I strongly recommend reading as a free and quick guidebook to performing outstanding data analysis.

**This notebook is intended to be a public resource. As such, if you see any glaring inaccuracies or if a critical topic is missing, please feel free to point it out or (preferably) submit a pull request to improve the notebook.**

License

[[ go back to the top ]](Table-of-contents)

Please see the [repository README file](https://github.com/rhiever/Data-Analysis-and-Machine-Learning-Projectslicense) for the licenses and usage terms for the instructional material and code in this notebook. In general, I have licensed this material so that it is as widely usable and shareable as possible.

Required libraries

[[ go back to the top ]](Table-of-contents)

If you don't have Python on your computer, you can use the [Anaconda Python distribution](http://continuum.io/downloads) to install most of the Python packages you need. Anaconda provides a simple double-click installer for your convenience.

This notebook uses several Python packages that come standard with the Anaconda Python distribution.
The primary libraries that we'll be using are:

* **NumPy**: Provides a fast numerical array structure and helper functions.
* **pandas**: Provides a DataFrame structure to store data in memory and work with it easily and efficiently.
* **scikit-learn**: The essential Machine Learning package in Python.
* **matplotlib**: Basic plotting library in Python; most other Python plotting libraries are built on top of it.
* **Seaborn**: Advanced statistical plotting library.
* **watermark**: A Jupyter Notebook extension for printing timestamps, version numbers, and hardware information.

**Note:** I will not be providing support for people trying to run this notebook outside of the Anaconda Python distribution.

The problem domain

[[ go back to the top ]](Table-of-contents)

For the purposes of this exercise, let's pretend we're working for a startup that just got funded to create a smartphone app that automatically identifies species of flowers from pictures taken on the smartphone. We're working with a moderately-sized team of data scientists and will be building part of the data analysis pipeline for this app.

We've been tasked by our company's Head of Data Science to create a demo machine learning model that takes four measurements from the flowers (sepal length, sepal width, petal length, and petal width) and identifies the species based on those measurements alone.

We've been given a [data set](https://github.com/ValRCS/RCS_Data_Analysis_Python/blob/master/data/iris-data.csv) from our field researchers to develop the demo, which only includes measurements for three types of *Iris* flowers:

* *Iris setosa*
* *Iris versicolor*
* *Iris virginica*

The four measurements we're using currently come from hand-measurements by the field researchers, but they will be automatically measured by an image processing model in the future.

**Note:** The data set we're working with is the famous [*Iris* data set](https://archive.ics.uci.edu/ml/datasets/Iris) — included with this notebook — which I have modified slightly for demonstration purposes.

Step 1: Answering the question

[[ go back to the top ]](Table-of-contents)

The first step to any data analysis project is to define the question or problem we're looking to solve, and to define a measure (or set of measures) for our success at solving that task. The data analysis checklist has us answer a handful of questions to accomplish that, so let's work through those questions.

>Did you specify the type of data analytic question (e.g. exploration, association, causality) before touching the data?

We're trying to classify the species (i.e., class) of the flower based on four measurements that we're provided: sepal length, sepal width, petal length, and petal width.

Petal - "ziedlapiņa" in Latvian; sepal - also "ziedlapiņa".

![Petal vs Sepal](https://upload.wikimedia.org/wikipedia/commons/thumb/7/78/Petal-sepal.jpg/293px-Petal-sepal.jpg)

>Did you define the metric for success before beginning?

Let's do that now. Since we're performing classification, we can use [accuracy](https://en.wikipedia.org/wiki/Accuracy_and_precision) — the fraction of correctly classified flowers — to quantify how well our model is performing. Our company's Head of Data has told us that we should achieve at least 90% accuracy.

>Did you understand the context for the question and the scientific or business application?

We're building part of a data analysis pipeline for a smartphone app that will be able to classify the species of flowers from pictures taken on the smartphone. In the future, this pipeline will be connected to another pipeline that automatically measures from pictures the traits we're using to perform this classification.

>Did you record the experimental design?

Our company's Head of Data has told us that the field researchers are hand-measuring 50 randomly-sampled flowers of each species using a standardized methodology.
The field researchers take pictures of each flower they sample from pre-defined angles so the measurements and species can be confirmed by the other field researchers at a later point. At the end of each day, the data is compiled and stored on a private company GitHub repository.

>Did you consider whether the question could be answered with the available data?

The data set we currently have is only for three types of *Iris* flowers. The model built off of this data set will only work for those *Iris* flowers, so we will need more data to create a general flower classifier.

Notice that we've spent a fair amount of time working on the problem without writing a line of code or even looking at the data.

**Thinking about and documenting the problem we're working on is an important step to performing effective data analysis that often goes overlooked.** Don't skip it.

Step 2: Checking the data

[[ go back to the top ]](Table-of-contents)

The next step is to look at the data we're working with. Even curated data sets from the government can have errors in them, and it's vital that we spot these errors before investing too much time in our analysis.

Generally, we're looking to answer the following questions:

* Is there anything wrong with the data?
* Are there any quirks with the data?
* Do I need to fix or remove any of the data?

Let's start by reading the data into a pandas DataFrame.
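The accuracy metric defined in Step 1 is simple enough to sketch as a plain-Python function before any data is loaded. The helper name below is ours, not part of the original notebook; it mirrors the computation that `sklearn.metrics.accuracy_score` performed in the XGBoost section above:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    if len(y_true) != len(y_pred):
        raise ValueError("y_true and y_pred must be the same length")
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# 2 of 3 flowers classified correctly -> about 0.67, well below our 90% target
print(accuracy(['setosa', 'versicolor', 'virginica'],
               ['setosa', 'versicolor', 'setosa']))
```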
import pandas as pd

iris_data = pd.read_csv('../data/iris-data.csv')

# Resources for loading data from nonlocal sources
# Pandas can generally handle most common formats
# https://pandas.pydata.org/pandas-docs/stable/io.html
# SQL: https://stackoverflow.com/questions/39149243/how-do-i-connect-to-a-sql-server-database-with-python
# NoSQL (MongoDB): https://realpython.com/introduction-to-mongodb-and-python/
# Apache Hadoop: https://dzone.com/articles/how-to-get-hadoop-data-into-a-python-model
# Apache Spark: https://www.datacamp.com/community/tutorials/apache-spark-python
# Data scraping / crawling libraries: https://elitedatascience.com/python-web-scraping-libraries (a big topic in itself)
# Most data resources have some form of Python API / library

iris_data.head()
_____no_output_____
MIT
Irises_ML_Intro/Irises Data Analysis Workflow_06_2019.ipynb
ValRCS/RCS_Data_Analysis_Python_2019_July
We're in luck! The data seems to be in a usable format.

The first row in the data file defines the column headers, and the headers are descriptive enough for us to understand what each column represents. The headers even give us the units that the measurements were recorded in, just in case we needed to know at a later point in the project.

Each row following the first row represents an entry for a flower: four measurements and one class, which tells us the species of the flower.

**One of the first things we should look for is missing data.** Thankfully, the field researchers already told us that they put a 'NA' into the spreadsheet when they were missing a measurement.

We can tell pandas to automatically identify missing values if it knows our missing value marker.
iris_data.shape

iris_data.info()

iris_data.describe()

iris_data = pd.read_csv('../data/iris-data.csv', na_values=['NA', 'N/A'])
_____no_output_____
MIT
Irises_ML_Intro/Irises Data Analysis Workflow_06_2019.ipynb
ValRCS/RCS_Data_Analysis_Python_2019_July
Voilà! Now pandas knows to treat rows with 'NA' as missing values. Next, it's always a good idea to look at the distribution of our data — especially the outliers.

Let's start by printing out some summary statistics about the data set.
iris_data.describe()
_____no_output_____
MIT
Irises_ML_Intro/Irises Data Analysis Workflow_06_2019.ipynb
ValRCS/RCS_Data_Analysis_Python_2019_July
We can see several useful values from this table. For example, we see that five `petal_width_cm` entries are missing.

If you ask me, though, tables like this are rarely useful unless we know that our data should fall in a particular range. It's usually better to visualize the data in some way. Visualization makes outliers and errors immediately stand out, whereas they might go unnoticed in a large table of numbers.

Since we know we're going to be plotting in this section, let's set up the notebook so we can plot inside of it.
# This line tells the notebook to show plots inside of the notebook
%matplotlib inline

import matplotlib.pyplot as plt
import seaborn as sb
_____no_output_____
MIT
Irises_ML_Intro/Irises Data Analysis Workflow_06_2019.ipynb
ValRCS/RCS_Data_Analysis_Python_2019_July
Next, let's create a **scatterplot matrix**. Scatterplot matrices plot the distribution of each column along the diagonal, and then plot a scatterplot for each pairwise combination of variables. They make for an efficient tool to look for errors in our data.

We can even have the plotting package color each entry by its class to look for trends within the classes.
# We have to temporarily drop the rows with 'NA' values
# because the Seaborn plotting function does not know
# what to do with them
sb.pairplot(iris_data.dropna(), hue='class')
C:\ProgramData\Anaconda3\lib\site-packages\numpy\core\_methods.py:140: RuntimeWarning: Degrees of freedom <= 0 for slice keepdims=keepdims) C:\ProgramData\Anaconda3\lib\site-packages\numpy\core\_methods.py:132: RuntimeWarning: invalid value encountered in double_scalars ret = ret.dtype.type(ret / rcount)
MIT
Irises_ML_Intro/Irises Data Analysis Workflow_06_2019.ipynb
ValRCS/RCS_Data_Analysis_Python_2019_July
From the scatterplot matrix, we can already see some issues with the data set:

1. There are five classes when there should only be three, meaning there were some coding errors.
2. There are some clear outliers in the measurements that may be erroneous: one `sepal_width_cm` entry for `Iris-setosa` falls well outside its normal range, and several `sepal_length_cm` entries for `Iris-versicolor` are near-zero for some reason.
3. We had to drop those rows with missing values.

In all of these cases, we need to figure out what to do with the erroneous data. Which takes us to the next step...

Step 3: Tidying the data (GIGO principle)

[[ go back to the top ]](Table-of-contents)

Now that we've identified several errors in the data set, we need to fix them before we proceed with the analysis.

Let's walk through the issues one-by-one.

>There are five classes when there should only be three, meaning there were some coding errors.

After talking with the field researchers, it sounds like one of them forgot to add `Iris-` before their `Iris-versicolor` entries. The other extraneous class, `Iris-setossa`, was simply a typo that they forgot to fix.

Let's use the DataFrame to fix these errors.
iris_data['class'].unique()

# Copy and replace
iris_data.loc[iris_data['class'] == 'versicolor', 'class'] = 'Iris-versicolor'
iris_data['class'].unique()

# So we take a row where a specific column ('class' here) matches our bad values
# and change them to good values
iris_data.loc[iris_data['class'] == 'Iris-setossa', 'class'] = 'Iris-setosa'
iris_data['class'].unique()

iris_data.tail()

iris_data[98:103]
_____no_output_____
MIT
Irises_ML_Intro/Irises Data Analysis Workflow_06_2019.ipynb
ValRCS/RCS_Data_Analysis_Python_2019_July
Much better! Now we only have three class types. Imagine how embarrassing it would've been to create a model that used the wrong classes.

>There are some clear outliers in the measurements that may be erroneous: one `sepal_width_cm` entry for `Iris-setosa` falls well outside its normal range, and several `sepal_length_cm` entries for `Iris-versicolor` are near-zero for some reason.

Fixing outliers can be tricky business. It's rarely clear whether the outlier was caused by measurement error, recording the data in improper units, or if the outlier is a real anomaly. For that reason, we should be judicious when working with outliers: if we decide to exclude any data, we need to make sure to document what data we excluded and provide solid reasoning for excluding that data. (i.e., "This data didn't fit my hypothesis" will not stand peer review.)

In the case of the one anomalous entry for `Iris-setosa`, let's say our field researchers know that it's impossible for `Iris-setosa` to have a sepal width below 2.5 cm. Clearly this entry was made in error, and we're better off just scrapping the entry than spending hours finding out what happened.
smallpetals = iris_data.loc[(iris_data['sepal_width_cm'] < 2.5) & (iris_data['class'] == 'Iris-setosa')]
smallpetals

iris_data.loc[iris_data['class'] == 'Iris-setosa', 'sepal_width_cm'].hist()

# This line drops any 'Iris-setosa' rows with a sepal width less than 2.5 cm
# Let's go over this command in class
iris_data = iris_data.loc[(iris_data['class'] != 'Iris-setosa') | (iris_data['sepal_width_cm'] >= 2.5)]

iris_data.loc[iris_data['class'] == 'Iris-setosa', 'sepal_width_cm'].hist()
_____no_output_____
MIT
Irises_ML_Intro/Irises Data Analysis Workflow_06_2019.ipynb
ValRCS/RCS_Data_Analysis_Python_2019_July
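The row-keeping mask in the cell above ("not setosa OR wide enough") can be confusing at first. By De Morgan's laws, keeping rows where `class != 'Iris-setosa'` or `sepal_width_cm >= 2.5` is the same as dropping rows that are setosa AND too narrow. A toy sketch of that logic in plain Python, with made-up rows rather than the real data:

```python
rows = [
    {'class': 'Iris-setosa',     'sepal_width_cm': 2.3},  # setosa AND too narrow -> dropped
    {'class': 'Iris-setosa',     'sepal_width_cm': 3.1},  # setosa but wide enough -> kept
    {'class': 'Iris-versicolor', 'sepal_width_cm': 2.0},  # not setosa -> kept regardless of width
]

# The mask used above: keep "not setosa OR wide enough"
kept = [r for r in rows
        if r['class'] != 'Iris-setosa' or r['sepal_width_cm'] >= 2.5]

# The equivalent complement: drop "setosa AND too narrow"
dropped = [r for r in rows
           if r['class'] == 'Iris-setosa' and r['sepal_width_cm'] < 2.5]

print(len(kept), len(dropped))  # 2 1
```

The two conditions partition the rows exactly, which is why the pandas version above only removes the one anomalous setosa entry.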
Excellent! Now all of our `Iris-setosa` rows have a sepal width of at least 2.5 cm.

The next data issue to address is the several near-zero sepal lengths for the `Iris-versicolor` rows. Let's take a look at those rows.
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') & (iris_data['sepal_length_cm'] < 1.0)]
_____no_output_____
MIT
Irises_ML_Intro/Irises Data Analysis Workflow_06_2019.ipynb
ValRCS/RCS_Data_Analysis_Python_2019_July
How about that? All of these near-zero `sepal_length_cm` entries seem to be off by two orders of magnitude, as if they had been recorded in meters instead of centimeters.

After some brief correspondence with the field researchers, we find that one of them forgot to convert those measurements to centimeters. Let's do that for them.
iris_data.loc[iris_data['class'] == 'Iris-versicolor', 'sepal_length_cm'].hist()

iris_data['sepal_length_cm'].hist()

# Here we fix the wrong units
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') & (iris_data['sepal_length_cm'] < 1.0), 'sepal_length_cm'] *= 100.0

iris_data.loc[iris_data['class'] == 'Iris-versicolor', 'sepal_length_cm'].hist();

iris_data['sepal_length_cm'].hist()
_____no_output_____
MIT
Irises_ML_Intro/Irises Data Analysis Workflow_06_2019.ipynb
ValRCS/RCS_Data_Analysis_Python_2019_July
Phew! Good thing we fixed those outliers. They could've really thrown our analysis off.

>We had to drop those rows with missing values.

Let's take a look at the rows with missing values:
iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
              (iris_data['sepal_width_cm'].isnull()) |
              (iris_data['petal_length_cm'].isnull()) |
              (iris_data['petal_width_cm'].isnull())]
_____no_output_____
MIT
Irises_ML_Intro/Irises Data Analysis Workflow_06_2019.ipynb
ValRCS/RCS_Data_Analysis_Python_2019_July
It's not ideal that we had to drop those rows, especially considering they're all `Iris-setosa` entries. Since it seems like the missing data is systematic — all of the missing values are in the same column for the same *Iris* type — this error could potentially bias our analysis.

One way to deal with missing data is **mean imputation**: If we know that the values for a measurement fall in a certain range, we can fill in empty values with the average of that measurement.

Let's see if we can do that here.
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].hist()
_____no_output_____
MIT
Irises_ML_Intro/Irises Data Analysis Workflow_06_2019.ipynb
ValRCS/RCS_Data_Analysis_Python_2019_July
Most of the petal widths for `Iris-setosa` fall within the 0.2-0.3 range, so let's fill in these entries with the average measured petal width.
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].mean()

average_petal_width = iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].mean()
print(average_petal_width)

iris_data.loc[(iris_data['class'] == 'Iris-setosa') & (iris_data['petal_width_cm'].isnull()), 'petal_width_cm'] = average_petal_width

iris_data.loc[(iris_data['class'] == 'Iris-setosa') & (iris_data['petal_width_cm'] == average_petal_width)]

iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
              (iris_data['sepal_width_cm'].isnull()) |
              (iris_data['petal_length_cm'].isnull()) |
              (iris_data['petal_width_cm'].isnull())]
_____no_output_____
MIT
Irises_ML_Intro/Irises Data Analysis Workflow_06_2019.ipynb
ValRCS/RCS_Data_Analysis_Python_2019_July
Great! Now we've recovered those rows and no longer have missing data in our data set.

**Note:** If you don't feel comfortable imputing your data, you can drop all rows with missing data with the `dropna()` call:

    iris_data.dropna(inplace=True)

After all this hard work, we don't want to repeat this process every time we work with the data set. Let's save the tidied data file *as a separate file* and work directly with that data file from now on.
iris_data.to_json('../data/iris-clean.json')

iris_data.to_csv('../data/iris-data-clean.csv', index=False)

cleanedframe = iris_data.dropna()

iris_data_clean = pd.read_csv('../data/iris-data-clean.csv')
_____no_output_____
MIT
Irises_ML_Intro/Irises Data Analysis Workflow_06_2019.ipynb
ValRCS/RCS_Data_Analysis_Python_2019_July
Let's take a look at the scatterplot matrix now that we've tidied the data.
myplot = sb.pairplot(iris_data_clean, hue='class')
myplot.savefig('irises.png')

import scipy.stats as stats

iris_data = pd.read_csv('../data/iris-data.csv')
iris_data.columns.unique()

stats.entropy(iris_data_clean['sepal_length_cm'])

iris_data.columns[:-1]

# We go through the list of column names except the last one and get the entropy
# of the data (without missing values) in each column
for col in iris_data.columns[:-1]:
    print("Entropy for: ", col, stats.entropy(iris_data[col].dropna()))
Entropy for: sepal_length_cm 4.96909746125432 Entropy for: sepal_width_cm 5.000701325982732 Entropy for: petal_length_cm 4.888113822938816 Entropy for: petal_width_cm 4.754264731532864
MIT
Irises_ML_Intro/Irises Data Analysis Workflow_06_2019.ipynb
ValRCS/RCS_Data_Analysis_Python_2019_July
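A note on what `stats.entropy` computed above: given raw values, it normalizes them into a probability distribution and returns the Shannon entropy, `-sum(p * ln p)` (natural log by default). A stdlib-only sketch of that definition; `shannon_entropy` is our own helper name, not scipy's:

```python
import math

def shannon_entropy(values):
    """Normalize values into a probability distribution, then return -sum(p * ln p)."""
    total = sum(values)
    probs = [v / total for v in values]
    return -sum(p * math.log(p) for p in probs if p > 0)

# A uniform distribution over 4 outcomes has the maximum entropy, ln(4)
print(shannon_entropy([1, 1, 1, 1]))                             # ln(4), about 1.386
print(math.isclose(shannon_entropy([1, 1, 1, 1]), math.log(4)))  # True
```

Note that applying this to a measurement column treats the 150 raw measurements themselves as an unnormalized distribution, which is why the per-column entropies printed above all sit just below ln(150) ≈ 5.01, the maximum possible for 150 equally weighted values.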