3) Splitting data into training and testing sets

Our data is now nice and squeaky clean! (This definitely always happens in real life.) Next up, let's scale the data and split it into a training set and a test set.
from sklearn import preprocessing
from sklearn.model_selection import train_test_split  # sklearn.cross_validation is deprecated

# Scale and split dataset
X_scaled = preprocessing.scale(X)

# Split into training and test sets
XTrain, XTest, yTrain, yTest = train_test_split(X_scaled, y, random_state=1)
ensembling/ensembling.ipynb
nslatysheva/data_science_blogging
gpl-3.0
4. Running algorithms on the data

Now it's time to train algorithms. We are doing binary classification; we could also have used logistic regression, kNN, and so on.

4.1 Random forests

Let's build a random forest. A great explanation of random forests can be found here. Briefly, random forests build a collection of classification trees, each of which tries to predict classes by recursively splitting the data on the features that separate the classes best. Each tree is trained on bootstrapped data, and each split is only allowed to use certain variables. So, an element of randomness is introduced, a variety of different trees are built, and the 'random forest' ensembles together these base learners.

A hyperparameter is something that influences the performance of your model but isn't directly tuned during model training. The main hyperparameters to adjust for random forests are n_estimators and max_features. n_estimators controls the number of trees in the forest - the more the better, but more trees come at the expense of longer training time. max_features controls the size of the random selection of features the algorithm is allowed to consider when splitting a node. We could also choose to tune various other hyperparameters, like max_depth (the maximum depth of a tree, which controls how tall we grow our trees and influences overfitting) and the choice of purity criterion (specific formulas for calculating how good or 'pure' our splits make the terminal nodes).

We are doing a grid search to find optimal hyperparameter values, which tries out each given value for each hyperparameter of interest and sees how well it performs using (in this case) 10-fold cross-validation (CV). As a reminder, in cross-validation we try to estimate a model's test-set performance; in k-fold CV, the estimate is obtained by repeatedly partitioning the dataset into k parts and 'testing' on 1/kth of it.
We could have also tuned our hyperparameters using randomized search, which samples some values from a distribution rather than trying out all given values. Either is probably fine. The following code block takes about a minute to run.
import numpy as np
from sklearn import metrics
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV  # sklearn.grid_search is deprecated
from sklearn.ensemble import RandomForestClassifier

# Search for good hyperparameter values
# Specify values to grid search over
n_estimators = np.arange(1, 30, 5)
max_features = np.arange(1, X.shape[1], 10)
max_depth = np.arange(1, 100, 10)

hyperparameters = {'n_estimators': n_estimators,
                   'max_features': max_features,
                   'max_depth': max_depth}

# Grid search using cross-validation
gridCV = GridSearchCV(RandomForestClassifier(), param_grid=hyperparameters, cv=10, n_jobs=4)
gridCV.fit(XTrain, yTrain)

best_n_estim = gridCV.best_params_['n_estimators']
best_max_features = gridCV.best_params_['max_features']
best_max_depth = gridCV.best_params_['max_depth']

# Train classifier using optimal hyperparameter values
# We could also have gotten this model out of gridCV.best_estimator_
clfRDF = RandomForestClassifier(n_estimators=best_n_estim,
                                max_features=best_max_features,
                                max_depth=best_max_depth)
clfRDF.fit(XTrain, yTrain)
RF_predictions = clfRDF.predict(XTest)

print(metrics.classification_report(yTest, RF_predictions))
print("Overall Accuracy:", round(metrics.accuracy_score(yTest, RF_predictions), 2))
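As an aside, the randomized-search alternative mentioned above could look something like the sketch below. This is a minimal, self-contained illustration: synthetic data from make_classification stands in for the spam dataset, and the randint distributions are plausible stand-ins for the grid values used above, not the notebook's actual settings.

```python
import numpy as np
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# Synthetic stand-in for the scaled spam data
X, y = make_classification(n_samples=300, n_features=20, random_state=1)
XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state=1)

# Sample hyperparameter values from distributions instead of a fixed grid
param_dist = {
    'n_estimators': randint(1, 30),
    'max_features': randint(1, X.shape[1]),
    'max_depth': randint(1, 100),
}

randCV = RandomizedSearchCV(RandomForestClassifier(random_state=1),
                            param_distributions=param_dist,
                            n_iter=10, cv=5, random_state=1)
randCV.fit(XTrain, yTrain)
print(randCV.best_params_)
```

With n_iter you directly control the compute budget, which is the main practical advantage over an exhaustive grid.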
93-95% accuracy, not too shabby! Have a look and see how random forests with suboptimal hyperparameters fare. We got around 91-92% accuracy with the out-of-the-box (untuned) random forests, which actually isn't terrible.

2) Second algorithm: support vector machines

Let's train our second algorithm, support vector machines (SVMs), to do exactly the same prediction task. A great introduction to the theory behind SVMs can be read here. Briefly, SVMs search for hyperplanes in the feature space that best divide the different classes in your dataset. Crucially, SVMs can find non-linear decision boundaries between classes using a process called kernelling, which projects the data into a higher-dimensional space. This sounds a bit abstract, but if you've ever fit a linear regression to power-transformed variables (e.g. maybe you used x^2, x^3 as features), you're already familiar with the concept. SVMs can use different types of kernels, like the Gaussian (radial basis function) kernel, to throw the data into a different space. The main hyperparameters we must tune for SVMs are gamma (a kernel parameter, controlling how far we 'throw' the data into the new feature space) and C (which controls the bias-variance tradeoff of the model).
from sklearn.svm import SVC

# Search for good hyperparameter values
# Specify values to grid search over
g_range = 2. ** np.arange(-15, 5, step=2)
C_range = 2. ** np.arange(-5, 15, step=2)
hyperparameters = [{'gamma': g_range, 'C': C_range}]

# Grid search using cross-validation
grid = GridSearchCV(SVC(), param_grid=hyperparameters, cv=10)
grid.fit(XTrain, yTrain)

bestG = grid.best_params_['gamma']
bestC = grid.best_params_['C']

# Train SVM and output predictions
rbfSVM = SVC(kernel='rbf', C=bestC, gamma=bestG)
rbfSVM.fit(XTrain, yTrain)
SVM_predictions = rbfSVM.predict(XTest)

print(metrics.classification_report(yTest, SVM_predictions))
print("Overall Accuracy:", round(metrics.accuracy_score(yTest, SVM_predictions), 2))
Looks good! This is similar performance to what we saw with the random forests.

3) Third algorithm: neural network

Finally, let's jump on the hype wagon and throw neural networks at our problem. Neural networks (NNs) represent a different way of thinking about machine learning algorithms; a great place to start learning about neural networks and deep learning is this resource. Briefly, NNs are composed of multiple layers of artificial neurons, which individually are simple processing units that weigh up input data. Together, layers of neurons can compute some very complex functions of the data, which in turn can make excellent predictions. You may be aware of some of the striking results that NN research has recently achieved.

Here, we train a shallow, fully-connected, feedforward neural network on the spam dataset. Other types of neural network implementations in scikit-learn are available here. The hyperparameters we optimise here are the overall architecture (the number of neurons in each layer and the number of layers) and the learning rate (which controls how quickly the parameters in our network change during the training phase; see gradient descent and backpropagation).
# The standalone multilayer_perceptron module used originally has since been
# merged into scikit-learn as sklearn.neural_network.MLPClassifier
from sklearn.neural_network import MLPClassifier

# Search for good hyperparameter values
# Specify values to grid search over
layer_size_range = [(3, 2), (10, 10), (2, 2, 2), (10,), (5,)]  # different network shapes
learning_rate_range = np.linspace(.1, 1, 3)
hyperparameters = [{'hidden_layer_sizes': layer_size_range,
                    'learning_rate_init': learning_rate_range}]

# Grid search using cross-validation
grid = GridSearchCV(MLPClassifier(), param_grid=hyperparameters, cv=10)
grid.fit(XTrain, yTrain)

# Output best hyperparameter values
best_size = grid.best_params_['hidden_layer_sizes']
best_lr = grid.best_params_['learning_rate_init']

# Train neural network and output predictions
nnet = MLPClassifier(hidden_layer_sizes=best_size, learning_rate_init=best_lr)
nnet.fit(XTrain, yTrain)
NN_predictions = nnet.predict(XTest)

print(metrics.classification_report(yTest, NN_predictions))
print("Overall Accuracy:", round(metrics.accuracy_score(yTest, NN_predictions), 2))
Looks like this neural network (given this dataset, architecture, and hyperparameterisation) is doing slightly worse on the spam dataset. That's okay - it could still be picking up on a signal that the random forest and SVM weren't. Machine learning algorithms... ensemble!

4) Majority vote on classifications
# Here's a rough solution
import collections

# Stick all predictions into a dataframe
predictions = pd.DataFrame(np.array([RF_predictions, SVM_predictions, NN_predictions])).T
predictions.columns = ['RF', 'SVM', 'NN']
predictions = pd.DataFrame(np.where(predictions == 'yes', 1, 0),
                           columns=predictions.columns,
                           index=predictions.index)

# Initialise empty array for holding predictions
ensembled_predictions = np.zeros(shape=yTest.shape)

# Majority vote and output final predictions
for test_point in range(predictions.shape[0]):
    counts = collections.Counter(predictions.iloc[test_point, :])
    majority_vote = counts.most_common(1)[0][0]

    # Output votes
    ensembled_predictions[test_point] = majority_vote
    print("The majority vote for test point", test_point, "is:", majority_vote)

# Get final accuracy of the ensembled model
yTest[yTest == "yes"] = 1
yTest[yTest == "no"] = 0

print(metrics.classification_report(yTest.astype(int), ensembled_predictions.astype(int)))
print("Ensemble Accuracy:", round(metrics.accuracy_score(yTest.astype(int), ensembled_predictions.astype(int)), 2))
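For reference, scikit-learn also ships a VotingClassifier that implements this hard-voting (majority vote) scheme directly. A minimal sketch, using synthetic stand-in data rather than the spam dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the spam data
X, y = make_classification(n_samples=300, n_features=20, random_state=1)
XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state=1)

# voting='hard' = majority vote over the three classifiers' predicted labels
ensemble = VotingClassifier(
    estimators=[('RF', RandomForestClassifier(random_state=1)),
                ('SVM', SVC(random_state=1)),
                ('NN', MLPClassifier(max_iter=1000, random_state=1))],
    voting='hard')
ensemble.fit(XTrain, yTrain)
print(ensemble.score(XTest, yTest))
```

This does in one estimator what the loop above does by hand, and it plugs into GridSearchCV like any other classifier.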
Lists

Lists group together data. Many languages have arrays (we'll look at those in python in a bit). But unlike arrays in most languages, lists can hold data of all different types -- they don't need to be homogeneous. The data can be a mix of integers, floating point or complex numbers, strings, or other objects (including other lists). A list is defined using square brackets:
a = [1, 2.0, "my list", 4]
print(a)
day-1/python-advanced-datatypes.ipynb
sbu-python-summer/python-tutorial
bsd-3-clause
We can index a list to get a single element -- remember that python starts counting at 0:
print(a[2])
Like with strings, mathematical operators are defined on lists:
print(a*2)
The len() function returns the length of a list:
print(len(a))
Unlike strings, lists are mutable -- you can change elements in a list easily
a[1] = -2.0
a
a[0:1] = [-1, -2.1]  # this will put two items in the spot where the 1 existed before
a
Note that lists can even contain other lists:
a[1] = ["other list", 3]
a
Just like everything else in python, a list is an object that is the instance of a class. Classes have methods (functions) that know how to operate on an object of that class. There are lots of methods that work on lists. Two of the most useful are append, to add to the end of a list, and pop, to remove the last element:
a.append(6)
a
a.pop()
a
<div style="background-color:yellow; padding: 10px"><h3><span class="fa fa-flash"></span> Quick Exercise:</h3></div>

An operation we'll see a lot is to begin with an empty list and add elements to it. An empty list is created as:

a = []

- Create an empty list
- Append the integers 1 through 10 to it.
- Now pop them out of the list one by one.

<hr>
a = []
a.append(1)
a.append(2)
a.append(3)
a.append(4)
a.append(5)
a
a.pop()
a.pop()
a.pop()
a.pop()
a.pop()  # one pop per element -- a sixth pop here would raise an IndexError on the empty list
Copying may seem a little counterintuitive at first. The best way to think about this is that your list lives in memory somewhere, and when you do

a = [1, 2, 3, 4]

then the variable a is set to point to that location in memory, so it refers to the list. If we then do

b = a

then b will also point to that same location in memory -- the exact same list object. Since these are both pointing to the same location in memory, if we change the list through a, the change is reflected in b as well:
a = [1, 2, 3, 4]
b = a  # both a and b refer to the same list object in memory
print(a)

a[0] = "changed"
print(b)
If you want to create a new object in memory that is a copy of another, then you can either index the list, using : to get all the elements, or use the list() function:
c = list(a)  # you can also do c = a[:], which slices the entire list

a[1] = "two"
print(a)
print(c)
Things get a little complicated when a list contains another mutable object, like another list. Then the copy we looked at above is only a shallow copy. Look at this example -- the list within the list here is still the same object in memory for our two copies:
f = [1, [2, 3], 4]
print(f)

g = list(f)
print(g)
Now we are going to change an element of that list [2, 3] inside of our main list. We need to index f once to get that inner list, and then a second time to index into the inner list itself:
f[1][0] = "a"
print(f)
print(g)
Note that the change occurred in both -- since that inner list is shared in memory between the two. We can still change one of the other values without it being reflected in the other list -- those elements were made distinct by our shallow copy:
f[0] = -1
print(g)
print(f)
Note: this is what is referred to as a shallow copy. If the original list contains any mutable objects (like another list), then the new copy and the old copy will still point to those same objects. There is a deep copy method for when you really want everything to be unique in memory. When in doubt, use the id() function to figure out where in memory an object lies (don't worry about what the values you get from id() mean, just whether they are the same as those for another object):
print(id(a), id(b), id(c))
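The deep copy mentioned above comes from the standard library's copy module; it recursively copies inner mutable objects too, so the problem we just saw goes away:

```python
import copy

f = [1, [2, 3], 4]
g = copy.deepcopy(f)   # the inner [2, 3] list is copied as well

f[1][0] = "a"
print(f)                   # [1, ['a', 3], 4]
print(g)                   # [1, [2, 3], 4] -- unaffected
print(f[1] is g[1])        # False: the inner lists are distinct objects
```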
There are lots of other methods that work on lists (remember, ask for help)
my_list = [10, -1, 5, 24, 2, 9]
my_list.sort()
print(my_list)
print(my_list.count(-1))
my_list

help(a.insert)
a.insert(3, "my inserted element")
a
Joining two lists is simple. As with strings, the + operator concatenates:
b = [1, 2, 3]
c = [4, 5, 6]
d = b + c
print(d)
Dictionaries

A dictionary stores data as key:value pairs. Unlike a list, where the elements have a particular order, the keys in a dictionary let you access any piece of information directly:
my_dict = {"key1": 1, "key2": 2, "key3": 3}
print(my_dict["key1"])
You can add a new key:value pair easily, and the value can be of any type:
my_dict["newkey"] = "new"
print(my_dict)
Note that a dictionary has traditionally been treated as unordered (although since Python 3.7, dictionaries do preserve insertion order). You can also easily get the list of keys that are defined in a dictionary:
keys = list(my_dict.keys())
print(keys)
and check easily whether a key exists in the dictionary using the in operator
print("key1" in keys)
print("invalidKey" in keys)
List Comprehensions

List comprehensions provide a compact way to initialize lists. Some examples from the tutorial:
squares = [x**2 for x in range(10)]
squares
Here we use another python type, the tuple, to combine numbers from two lists into pairs:
[(x, y) for x in [1,2,3] for y in [3,1,4] if x != y]
<div style="background-color:yellow; padding: 10px"><h3><span class="fa fa-flash"></span> Quick Exercise:</h3></div>

Use a list comprehension to create a new list from squares containing only the even numbers. It might be helpful to use the modulus operator, %.

<hr>

Tuples

Tuples are immutable -- they cannot be changed, but they are useful for organizing data in some situations. We use () to indicate a tuple:
a = (1, 2, 3, 4)
print(a)
We can unpack a tuple:
w, x, y, z = a
print(w)
print(w, x, y, z)
Since a tuple is immutable, we cannot change an element:
a[0] = 2
But we can turn it into a list, and then we can change it:
z = list(a)
z[0] = "new"
print(z)
Control Flow

To write a program, we need the ability to iterate and take action based on the values of a variable. This includes if-tests and loops. Python uses whitespace to denote a block of code.

While loop

A simple while loop -- notice the indentation to denote the block that is part of the loop. Here we also use the compact += operator: n += 1 is the same as n = n + 1.
n = 0
while n < 10:
    print(n)
    n += 1
That was a very simple example. But often we'll use a for loop with the range() function in this situation. Note that range() can take a stride:
for n in range(2, 10, 2):
    print(n)

print(list(range(10)))
if statements

if allows for branching. python does not have a select/case statement like some other languages, but if, elif, and else can reproduce any branching functionality you might need.
x = 0

if x < 0:
    print("negative")
elif x == 0:
    print("zero")
else:
    print("positive")
Iterating over elements

It's easy to loop over the items in a list or any other iterable object. The in operator is the key here:
alist = [1, 2.0, "three", 4]
for a in alist:
    print(a)

for c in "this is a string":
    print(c)
We can combine loops and if-tests to do more complex logic, like breaking out of the loop when we find what we're looking for:
n = 0
for a in alist:
    if a == "three":
        break
    else:
        n += 1

print(n)
(for that example, however, there is a simpler way)
print(alist.index("three"))
for dictionaries, you can also loop over the elements
my_dict = {"key1": 1, "key2": 2, "key3": 3}

for k, v in my_dict.items():
    print("key = {}, value = {}".format(k, v))  # notice how we do the formatting here

for k in sorted(my_dict):
    print(k, my_dict[k])
sometimes we want to loop over a list element and know its index -- enumerate() helps here:
for n, a in enumerate(alist):
    print(n, a)
Feature Evaluation Pipeline
import gc
import json

import numpy as np
import pandas as pd
import lightgbm as lgb
import seaborn as sns
from sklearn.metrics import roc_auc_score

SEED = 2**8 + 1

# TODO: Optimize
lgb_params = {
    'objective': 'binary',
    'boosting_type': 'gbdt',
    'metric': 'auc',
    'n_jobs': -1,
    'learning_rate': 0.01,
    'num_leaves': 2**5,  # 5-8
    'max_depth': -1,
    'tree_learner': 'serial',
    'colsample_bytree': 0.7,
    'subsample_freq': 1,
    'subsample': 0.7,
    'max_bin': 255,
    'verbose': -1,
    'seed': SEED,
    'feature_fraction_seed': SEED + 2,
    'bagging_seed': SEED + 3,
    'drop_seed': SEED + 4,
    'data_random_seed': SEED + 5,
}

def display_report(report):
    print('{} Folds Used'.format(len(report['folds'])))
    print('{} Neg DownSample Frac with {} Seed'.format(report['downsample_frac'], report['downsample_seed']))
    print('{} AVG AUC, {} STD'.format(np.round(report['avg_auc'], 3), np.round(report['std_auc'], 3)))
    print('{} AVG Rounds, {} STD'.format(report['avg_iterations'], report['std_iterations']), end='\n\n')

    features = pd.DataFrame({
        'feature': report['features'],
        'adversarial': list(report['cvs'].values()),
        'perm_import': list(report['avg_permutation_importance'].values()),
        'perm_import_std': list(report['std_permutation_importance'].values()),
    })
    features.sort_values(['perm_import', 'adversarial'], ascending=False, inplace=True)

    # Per-fold permutation importances, flattened for plotting
    # (was referencing the global `results`; use the passed-in report instead)
    sns_df = pd.DataFrame({
        'feature': sum([list(fold['permutation_importance'].keys()) for fold in report['folds']], []),
        'perm_import': sum([list(fold['permutation_importance'].values()) for fold in report['folds']], []),
    })
    sns_df.sort_values(['feature', 'perm_import'], ascending=False, inplace=True)

    print(report['params'])
    return features, sns_df

def compare_reports(report1, report2):
    pass

def run_evaluation(data, features, params, downsample_seed=None, downsample_frac=0.2, save_file_path=None):
    # NOTE: data should contain, at minimum, all train + test samples,
    # along with the isFraud column, for separation and scoring purposes.
    gc.collect()

    # Run evaluation and store results in a report
    # Steps:
    # 1) [x] Negative downsample non-frauds
    # 2) [x] Run adversarial validation on features + record scores
    # 3) [x] Train on 50% overlapping folds on the trainset
    # 3b) [x] Perform permutation importance (soon to be drop importance) each fold
    # 4) [x] Aggregate and save results
    report = {
        'features': features,
        'params': params,
        'downsample_seed': downsample_seed,
        'downsample_frac': downsample_frac,
        'cvs': {},
        'folds': [],
        'avg_permutation_importance': {},
        'std_permutation_importance': {},
    }

    ######################
    print('\n# 1) [x] Negative Downsample (non-frauds)')
    if downsample_seed is None:
        selection = data.copy()
    else:
        np.random.seed(downsample_seed)
        normies = data[data.isFraud == 0].index.values
        normies = np.random.choice(
            normies,
            int(data.shape[0] * downsample_frac),
            replace=False
        )
        selection = data[data.index.isin(
            # All frauds and a number of normies
            np.concatenate([normies, data[data.isFraud == 1].index.values])
        )].copy()
    print(selection.shape[0], 'total train samples!')

    if selection.shape[0] > data.isFraud.isna().sum():
        # If we have more train samples than test samples, use all test samples
        selection_test = data[data.isFraud.isna()]
    else:
        # Use a balanced set of test samples
        selection_test = np.random.choice(
            data[data.isFraud.isna()].index.values,
            selection.shape[0],
            replace=False
        )
        selection_test = data[data.index.isin(selection_test)]

    ######################
    print('\n# 2) [x] Run adversarial validation (CVS) on features + record scores')
    # Build CVS dataset
    cvsdata = selection.append(selection_test, sort=False)
    cvsdata.reset_index(inplace=True)
    cvsdata['which_set'] = (np.arange(cvsdata.shape[0]) >= selection.shape[0]).astype(np.uint8)
    cvsdata = cvsdata.sample(frac=1).reset_index(drop=True)  # Shuffle the thing
    trn_cvs = cvsdata.index < (cvsdata.shape[0] // 2)

    for col in features:
        trn_lgb = lgb.Dataset(cvsdata[trn_cvs][[col]], label=cvsdata[trn_cvs].which_set)
        val_lgb = lgb.Dataset(cvsdata[~trn_cvs][[col]], label=cvsdata[~trn_cvs].which_set)
        clf = lgb.train(
            params,
            trn_lgb,
            valid_sets=[trn_lgb, val_lgb],
            verbose_eval=200,
            early_stopping_rounds=25,
            num_boost_round=80000,
        )
        report['cvs'][col] = clf.best_score['valid_1']['auc'] - 0.5  # 0.5 = 0, best score

    del cvsdata, trn_lgb, val_lgb, trn_cvs
    gc.collect()

    ######################
    print('\n# 3) [x] Train on 50% overlapping folds on the trainset')
    for fold_, i in enumerate(range(0, 57, 14)):
        gc.collect()
        fold = {
            'fold_num': fold_,
            'trn_range': [i, i + 90],
            'val_range': [i + 90 + 15, i + 90 + 15 + 20],
        }
        print('\nFold', fold_ + 1, '- Train', fold['trn_range'], '- Test', fold['val_range'])

        trn = selection[selection.tdoy.between(i, 90 + i)]
        val = selection[selection.tdoy.between(90 + i + 15, 90 + i + 15 + 20)].copy()

        trn_lgb = lgb.Dataset(trn[features], label=trn.isFraud)
        val_lgb = lgb.Dataset(val[features], label=val.isFraud)
        clf = lgb.train(
            params,
            trn_lgb,
            valid_sets=[trn_lgb, val_lgb],
            verbose_eval=200,
            early_stopping_rounds=25,
            num_boost_round=80000,
            # categorical_feature=[]
        )
        baseline = clf.best_score['valid_1']['auc']
        fold['auc'] = baseline
        fold['iterations'] = clf.best_iteration
        print('baseline - ', baseline)

        ######################
        # TODO: Replace with drop importance
        print('\n# 3b) [x] Perform permutation importance (soon to be drop importance) each fold')
        perm = {}
        for col in features:
            backup = val[col].values.copy()
            val[col] = np.random.permutation(val[col].values)
            y_pred = clf.predict(val[features])
            perm[col] = baseline - roc_auc_score(val.isFraud, y_pred)
            val[col] = backup
        fold['permutation_importance'] = perm
        report['folds'].append(fold)

    ######################
    print('\n# 4) [x] Aggregate and save results')
    aucs = [fold['auc'] for fold in report['folds']]
    report['avg_auc'] = np.mean(aucs)
    report['std_auc'] = np.std(aucs)

    iterations = [fold['iterations'] for fold in report['folds']]
    report['avg_iterations'] = np.mean(iterations)
    report['std_iterations'] = np.std(iterations)

    for feature in features:
        pi = [fold['permutation_importance'][feature] for fold in report['folds']]
        report['avg_permutation_importance'][feature] = np.mean(pi)
        report['std_permutation_importance'][feature] = np.std(pi)

    if save_file_path is not None:
        with open(save_file_path, 'w', encoding='utf-8') as f:
            json.dump(report, f, ensure_ascii=False, indent=4)

    gc.collect()
    return report
kaggle/ieee_fraud_detection/src/authman/2019-09-13_Validator and Submission Pipeline.ipynb
AdityaSoni19031997/Machine-Learning
mit
The function above does a few things. First, it downsamples the negative values. This was reported by a number of top-50 people on the forums, and gold medal winners in previous competitions have used this technique as well. The idea is that fraud is unique and it shouldn't matter much which non-frauds we train against, since the frauds are a distinct class and should have some level of 'difference' about them. Think outliers. This would also be a good time to mention that when we train for submission, we can add diversity to our models by ensembling various batches trained with different samples of non-fraud values =).

The next thing the method above does is compute covariate shift scores per feature, i.e. adversarial validation. The smaller the number, the more ideal the variable. The larger the value, the worse it is - the easier it is for our model to tell train samples from test samples. If there is too much shift, we can expect our model to fail to generalize on the private LB. We should shoot for features that have <= 0.02 CVS. If you have a really good (high permutation importance AUC, low permutation importance std) feature that also has a high CVS, try engineering, transforming, winsorizing, or otherwise degrading the variable until you get it within the cutoff threshold range.

Moving on, the method above trains various overlapping folds. The folds are 90 days long, followed by a 15-day gap, and then the next 20 days are used for validation. These windows are shifted +14 days each fold, until we get to the end of the dataset. As-is, it doesn't matter what start date you set, because each day will still have 24 hours. But if you generate features like holiday features, then the start date becomes important.

As each fold is trained, we calculate the permutation importance measure. There's a lot of talk about what that is, so I won't go into it in detail here. Just know that it's not perfect. It's 10000x better than the stock feature-importance measures; but to get the true feature importance, we need to run a drop importance calculation. I haven't added it here because it slows down the execution a bit (it re-trains the model for each feature); but it'll probably be in our best interest to switch to that anyway. Any feature that has a <0 permutation or drop importance should be discarded immediately. And any feature with a very low importance, we can consider discarding recursively to test whether it improves our model's score.

Prepare Data
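As an aside before the data prep: the drop-importance idea described above (re-train once per removed feature and measure the AUC drop) can be sketched roughly as below. This is only an illustration on synthetic data, with a random forest standing in for LightGBM; the names baseline_auc and drop_importance are made up for the sketch, not from the notebook.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the fraud data
X, y = make_classification(n_samples=400, n_features=6, n_informative=3, random_state=0)
Xtr, Xval, ytr, yval = train_test_split(X, y, random_state=0)

def auc_for(cols):
    # Train on a subset of feature columns and score validation AUC
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(Xtr[:, cols], ytr)
    return roc_auc_score(yval, clf.predict_proba(Xval[:, cols])[:, 1])

all_cols = list(range(X.shape[1]))
baseline_auc = auc_for(all_cols)

# Positive = removing the feature hurt the model; <0 = the model did better without it
drop_importance = {
    col: baseline_auc - auc_for([c for c in all_cols if c != col])
    for col in all_cols
}
print(sorted(drop_importance, key=drop_importance.get, reverse=True))
```

The cost is one full re-train per feature, which is exactly why it's slower than permutation importance.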
data = traintr.append(testtr, sort=False)
data.reset_index(inplace=True)

features = [
    'TransactionAmt', 'ProductCD',
    'card1', 'card2', 'card3', 'card4', 'card5', 'card6',
    'addr1', 'addr2', 'dist1', 'dist2',
    'P_emaildomain', 'R_emaildomain',
    'D3', 'D1', 'V286', 'V100', 'thour',
]

# Label-encode any string columns
for col in features:
    if data[col].dtype != 'O':
        continue
    print('Found str', col, '... encoding!')
    mapper = {key: val for val, key in enumerate(data[col].unique())}
    data[col] = data[col].map(mapper)
Do It
results = run_evaluation(
    data,
    features,
    lgb_params,
    downsample_seed=1773,
    downsample_frac=0.2,
    save_file_path='./report_test.json'  # persist the results to a file
)
results

report, sns_features = display_report(results)

plt.figure(figsize=(16, 16))
sns.barplot(x="perm_import", y="feature", data=sns_features,
            edgecolor='white', linewidth=2)  # , palette="rocket"
plt.title('Permutation Importance (averaged/folds)', fontsize=18)
plt.tight_layout()

report
If we were to look at regular gain or split importances, the V columns would dominate the show. But here we accurately see that they really aren't that important in the grand scheme of things. Notice how some of our variables have a very high CVS score, especially the high-cardinality categorical variables, like card2 for example. While the unique values present for card2 in train and test are very similar, their distributions shift a bit:
a = set(traintr.card2.unique())
b = set(testtr.card2.unique())
len(a - b), len(b - a)

traintr.card2.value_counts().head(15)
testtr.card2.value_counts().head(15)
Not much we can do there. I'm not advocating removing card2 as a variable; but I am advocating that we use these results to draw our attention to possible issues. For example, the appropriate thing to do here would be to attempt the removal of card2 and observe how it affects the model's mean and std AUC. That is the ultimate 'measure'. As we mentioned previously, any feature with a negative perm importance should be destroyed.

TODO: Submission

I still need to flesh out the submission pipeline. Today was really brutal at work... 9 hours straight of data analysis. I'll try to get this fleshed out ASAAPPPPPPP. Once we've decided on a subset of good engineered features to use, we must run the submission pipeline to submit. The submission pipeline is completely different from the feature validation pipeline. Rather, the sub pipeline builds many versions of a single model and merges them together. I wouldn't call this ensembling per se, since that'll happen in another, external script. But this is how we prepare our 'level-1' models for posting against the Kaggle LB.
def submission(num_boost_rounds=0): # We train using fixed num_boost_rounds train_groups = [] for month_start in range(4): # using 3x dif seeds each months = [12 + month_start, 12 + month_start + 1, 12 + month_start + 2] train_groups.append(months) # Then using double num_boost_rounds train_groups += [[12,13,14,15,16,17]] #dif seed 2x train_groups += [[12,13,14,15,16,17]] #dif seed 2x return train_groups submission()
Sending a mail is, with the proper library, a piece of cake...
from sender import Mail mail = Mail(MAIL_SERVER) mail.fromaddr = ("Secret admirer", FROM_ADDRESS) mail.send_message("Raspberry Pi has a soft spot for you", to=TO_ADDRESS, body="Hi sweety! Grab a smoothie?")
notebooks/en-gb/Communication - Send mails.ipynb
RaspberryJamBe/ipython-notebooks
cc0-1.0
... but if we take it a little further, we can connect our doorbell project to the sending of mail! APPKEY is the Application Key for a (free) http://www.realtime.co/ "Realtime Messaging Free" subscription. See "104 - Remote deurbel - Een cloud API gebruiken om berichten te sturen" for more detailed information.
APPKEY = "******" mail.fromaddr = ("Your doorbell", FROM_ADDRESS) mail_to_addresses = { "Donald Duck":"dd@****.com", "Maleficent":"mf@****.com", "BigBadWolf":"bw@****.com" } def on_message(sender, channel, message): mail_message = "{}: Call for {}".format(channel, message) print(mail_message) mail.send_message("Raspberry Pi alert!", to=mail_to_addresses[message], body=mail_message) import ortc oc = ortc.OrtcClient() oc.cluster_url = "http://ortc-developers.realtime.co/server/2.1" def on_connected(sender): print('Connected') oc.subscribe('doorbell', True, on_message) oc.set_on_connected_callback(on_connected) oc.connect(APPKEY)
Estimates Using Pivot First, we'll replicate the results for the question "Ever diagnosed with diabetes" for all of California, for 2017, from AskCHIS:

- Diagnosed with diabetes: 10.7% (9.6% - 11.8%), 3,145,000
- Never diagnosed with diabetes: 89.3% (88.2% - 90.4%), 26,311,000
- Total population: 29,456,000

Getting estimates is easy. Each of the values of rakedw0 is the number of people that the associated respondent represents in the total California population. So, all of the values of rakedw0 will sum to the controlled California population of adults, and dividing the whole dataset by responses on a variable and summing the values of rakedw0 for each response gives us the estimated number of people who would have given that response.
t = df.pivot_table(values='rakedw0', columns='diabetes', index=df.index) t2 = t.sum().round(-3) t2
healthpolicy.ucla.edu-chis/notebooks/Calculating Estimates and Variances.ipynb
CivicKnowledge/metatab-packages
mit
Summing across responses yields the total population, which we can use to calculate percentages.
t2.sum() (t2/t2.sum()*100).round(1)
Estimates Using Unstack You can also calculate the same values using set_index and unstack.
t = df[['diabetes','rakedw0']].set_index('diabetes',append=True).unstack() t2 = t.sum().round(-3) diabetes_yes = t2.unstack().loc['rakedw0','YES'] diabetes_no = t2.unstack().loc['rakedw0','NO'] diabetes_yes, diabetes_no
Calculating Variance The basic formula for calculating the variance is in section 9.2, Methods for Variance Estimation of CHIS Report 5 Weighting and Variance Estimation . Basically, the other 80 raked weights, rakedw1 through rakedw80 give alternate estimates. It's like running the survey an additional 80 times, which allows you to calculate the sample variance from the set of alternate estimates. In the code below, we'll expand the operation with temporary variables, to document each step.
weight_cols = [c for c in df.columns if 'raked' in c]

t = df[['diabetes']+weight_cols] # Get the column of interest, and all of the raked weights
t = t.set_index('diabetes',append=True) # Move the column of interest into the index
t = t.unstack() # Unstack the column of interest, so both values are now in multi-level columns
t = t.sum() # Sum all of the weights for each of the raked weight sets and "YES"/"NO"
t = t.unstack() # Now we have sums for each of the replicates, for each of the variable values.
t = t.sub(t.loc['rakedw0']).iloc[1:] # Subtract off the full-sample estimate from each of the replicates
t = (t**2).sum() # sum of squares
ci_95 = np.sqrt(t)*1.96 # sqrt to get stddev, and 1.96 to get 95% CI
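The same arithmetic, run on a handful of fabricated replicate estimates (the real survey uses 80 of them), may make the variance step easier to follow:

```python
import numpy as np

# Purely illustrative, made-up numbers: a rakedw0-style point estimate
# plus a few replicate-weight estimates of the same quantity.
point_estimate = 100.0
replicate_estimates = np.array([98.0, 103.0, 101.0, 96.0, 102.0])

# Sum of squared deviations of the replicates from the point estimate.
ss = ((replicate_estimates - point_estimate) ** 2).sum()

se = np.sqrt(ss)      # standard error
ci_95 = 1.96 * se     # half-width of the 95% confidence interval
print(se, ci_95)
```

The spread of the replicate estimates around the point estimate is what drives the width of the interval.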
The final percentage ranges match those from AskCHIS.
((diabetes_yes-ci_95.loc['YES'])/29_456_000*100).round(1), ((diabetes_yes+ci_95.loc['YES'])/29_456_000*100).round(1) ((diabetes_no-ci_95.loc['NO'])/29_456_000*100).round(1), ((diabetes_no+ci_95.loc['NO'])/29_456_000*100).round(1)
Functions Here is a function for calculating the estimate, percentages, Standard Error and Relative Standard Error from a dataset. This function also works with a subset of the dataset, but note that the percentages will be relative to the total from the input dataset, not the whole California population.
def chis_estimate(df, column, ci=True, pct=True, rse=False):
    """Calculate estimates for CHIS variables, with variances, as 95% CI, from the replicate weights"""
    weight_cols = [c for c in df.columns if 'raked' in c]

    t = df[[column]+weight_cols] # Get the column of interest, and all of the raked weights
    t = t.set_index(column,append=True) # Move the column of interest into the index
    t = t.unstack() # Unstack the column of interest, so both values are now in multi-level columns
    t = t.sum() # Sum all of the weights for each of the raked weight sets and "YES"/"NO"
    t = t.unstack() # Now we have sums for each of the replicates, for each of the variable values.

    est = t.iloc[0].to_frame() # Replicate weight 0 is the estimate
    est.columns = [column]

    total = est.sum()[column]

    t = t.sub(t.loc['rakedw0']).iloc[1:] # Subtract off the full-sample estimate from each of the replicates
    t = (t**2).sum() # sum of squares
    se = np.sqrt(t) # sqrt to get stddev,
    ci_95 = se*1.96 # and 1.96 to get 95% CI

    if ci:
        est[column+'_95_l'] = est[column] - ci_95
        est[column+'_95_h'] = est[column] + ci_95
    else:
        est[column+'_se'] = se

    if pct:
        est[column+'_pct'] = (est[column]/total*100).round(1)
        if ci:
            est[column+'_pct_l'] = (est[column+'_95_l']/total*100).round(1)
            est[column+'_pct_h'] = (est[column+'_95_h']/total*100).round(1)

    if rse:
        est[column+'_rse'] = (se/est[column]*100).round(1)

    est.rename(columns={column:column+'_count'}, inplace=True)

    return est
chis_estimate(df, 'diabetes', ci=False, pct=False)
# This validates with the whole population for 2017, from the AskCHIS web application
chis_estimate(df, 'ag1')
# This validates with the latino subset for 2017, from the AskCHIS web application
chis_estimate(df[df.racedf_p1=='LATINO'], 'ag1')
Segmenting Results This function allows segmenting on another column, for instance, breaking out responses by race. Note that in the examples we are checking for estimates to have a relative standard error (such as diabetes_rse) of greater than 30%. CHIS uses 30% as a limit for unstable values, and won't publish estimates with higher RSEs.
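Before the full segmentation function, the 30% RSE rule itself can be sketched in isolation. The counts and standard errors below are fabricated, not CHIS data:

```python
import pandas as pd

# Fabricated counts and standard errors for three hypothetical groups.
est = pd.DataFrame({'count': [5000.0, 800.0, 120.0],
                    'se':    [400.0, 100.0, 60.0]},
                   index=['group_a', 'group_b', 'group_c'])

# Relative standard error, as a percentage of the estimate.
est['rse'] = (est['se'] / est['count'] * 100).round(1)

# Keep only the estimates CHIS would consider stable (RSE < 30%).
stable = est[est['rse'] < 30]
print(stable)
```

Here group_c (RSE of 50%) would be suppressed as unstable, while the other two groups survive.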
def chis_segment_estimate(df, column, segment_columns): """Return aggregated CHIS data, segmented on one or more other variables. """ if not isinstance(segment_columns, (list,tuple)): segment_columns = [segment_columns] odf = None for index,row in df[segment_columns].drop_duplicates().iterrows(): query = ' and '.join([ "{} == '{}'".format(c,v) for c,v in zip(segment_columns, list(row))]) x = chis_estimate(df.query(query), column, ci=True, pct=True, rse=True) x.columns.names = ['measure'] x = x.unstack() for col,val in zip(segment_columns, list(row)): x = pd.concat([x], keys=[val], names=[col]) if odf is None: odf = x else: odf = pd.concat([odf, x]) odf = odf.to_frame() odf.columns = ['value'] return odf
The dataframe returned by this function has a multi-level index, which includes all of the unique values from the segmentation columns, a level for measures, and the values from the target column. For instance:
chis_segment_estimate(df, 'diabetes', ['racedf_p1', 'ur_ihs']).head(20)
You can "pivot" a level out of the row into the columns with unstack(). Here we move the measures out of the row index into columns.
t = chis_segment_estimate(df, 'diabetes', ['racedf_p1', 'ur_ihs']) t.unstack('measure').head()
Complex selections can be made with .loc.
t = chis_segment_estimate(df, 'diabetes', ['racedf_p1', 'ur_ihs'])

idx = pd.IndexSlice # Convenience redefinition.

# The IndexSlices should have one term (separated by ',') for each of the levels in the index.
# We have one `IndexSlice` for rows, and one for columns. Note that the ``row_indexer`` has 4 terms.
row_indexer = idx[:,:,('diabetes_pct','diabetes_rse'),'YES']
col_indexer = idx[:]

# Now we can select with the two indexers.
t = t.loc[row_indexer,col_indexer]

# Rotate the measures out of rows into columns
t = t.unstack('measure')

# The columns are multi-level, but there is only one value for the first level,
# so it is useless.
t.columns = t.columns.droplevel()

# Only use estimates with RSE < 30%
t = t[t.diabetes_rse < 30]

# We don't need the RSE column any more.
t = t.drop(columns='diabetes_rse')

# Move the Rural/Urban into columns
t = t.unstack(0)
t
x = chis_segment_estimate(df, 'diabetes', ['racedf_p1', 'am3'])

row_indexer = idx[('YES','NO'),:,('diabetes_pct','diabetes_rse'),'YES']
col_indexer = idx[:]

t = x.loc[row_indexer,col_indexer].unstack('measure')
t.columns = t.columns.droplevel()
t = t[t.diabetes_rse < 30].drop(columns='diabetes_rse')
t
x = chis_segment_estimate(df, 'diabetes', ['racedf_p1', 'am3'])

row_indexer = idx[:,:,('diabetes_pct','diabetes_rse'),'YES']
col_indexer = idx[:]

t = x.loc[row_indexer,col_indexer].unstack('measure')
#t.index = t.index.droplevel('diabetes')
t.columns = t.columns.droplevel()
t = t[t.diabetes_rse < 30].drop(columns='diabetes_rse')
t.unstack(0)
chis_segment_estimate(df, 'diabetes', 'am3')
MuJoCo More detailed instructions in this tutorial. Institutional MuJoCo license.
#@title Edit and run mjkey = """ REPLACE THIS LINE WITH YOUR MUJOCO LICENSE KEY """.strip() mujoco_dir = "$HOME/.mujoco" # Install OpenGL deps !apt-get update && apt-get install -y --no-install-recommends \ libgl1-mesa-glx libosmesa6 libglew2.0 # Fetch MuJoCo binaries from Roboti !wget -q https://www.roboti.us/download/mujoco200_linux.zip -O mujoco.zip !unzip -o -q mujoco.zip -d "$mujoco_dir" # Copy over MuJoCo license !echo "$mjkey" > "$mujoco_dir/mjkey.txt" # Configure dm_control to use the OSMesa rendering backend %env MUJOCO_GL=osmesa # Install dm_control !pip install dm_control
rl_unplugged/rwrl_d4pg.ipynb
deepmind/deepmind-research
apache-2.0
Machine-locked MuJoCo license.
#@title Add your MuJoCo License and run mjkey = """ """.strip() mujoco_dir = "$HOME/.mujoco" # Install OpenGL dependencies !apt-get update && apt-get install -y --no-install-recommends \ libgl1-mesa-glx libosmesa6 libglew2.0 # Get MuJoCo binaries !wget -q https://www.roboti.us/download/mujoco200_linux.zip -O mujoco.zip !unzip -o -q mujoco.zip -d "$mujoco_dir" # Copy over MuJoCo license !echo "$mjkey" > "$mujoco_dir/mjkey.txt" # Configure dm_control to use the OSMesa rendering backend %env MUJOCO_GL=osmesa # Install dm_control, including extra dependencies needed for the locomotion # mazes. !pip install dm_control[locomotion_mazes]
RWRL
!git clone https://github.com/google-research/realworldrl_suite.git !pip install realworldrl_suite/
RL Unplugged
!git clone https://github.com/deepmind/deepmind-research.git %cd deepmind-research
Imports
import collections import copy from typing import Mapping, Sequence import acme from acme import specs from acme.agents.tf import actors from acme.agents.tf import d4pg from acme.tf import networks from acme.tf import utils as tf2_utils from acme.utils import loggers from acme.wrappers import single_precision from acme.tf import utils as tf2_utils import numpy as np import realworldrl_suite.environments as rwrl_envs from reverb import replay_sample import six from rl_unplugged import rwrl import sonnet as snt import tensorflow as tf
Data
domain_name = 'cartpole' #@param task_name = 'swingup' #@param difficulty = 'easy' #@param combined_challenge = 'easy' #@param combined_challenge_str = str(combined_challenge).lower() tmp_path = '/tmp/rwrl' gs_path = f'gs://rl_unplugged/rwrl' data_path = (f'combined_challenge_{combined_challenge_str}/{domain_name}/' f'{task_name}/offline_rl_challenge_{difficulty}') !mkdir -p {tmp_path}/{data_path} !gsutil cp -r {gs_path}/{data_path}/* {tmp_path}/{data_path} num_shards_str, = !ls {tmp_path}/{data_path}/* | wc -l num_shards = int(num_shards_str)
Dataset and environment
#@title Auxiliary functions def flatten_observation(observation): """Flattens multiple observation arrays into a single tensor. Args: observation: A mutable mapping from observation names to tensors. Returns: A flattened and concatenated observation array. Raises: ValueError: If `observation` is not a `collections.MutableMapping`. """ if not isinstance(observation, collections.MutableMapping): raise ValueError('Can only flatten dict-like observations.') if isinstance(observation, collections.OrderedDict): keys = six.iterkeys(observation) else: # Keep a consistent ordering for other mappings. keys = sorted(six.iterkeys(observation)) observation_arrays = [tf.reshape(observation[key], [-1]) for key in keys] return tf.concat(observation_arrays, 0) def preprocess_fn(sample): o_tm1, a_tm1, r_t, d_t, o_t = sample.data[:5] o_tm1 = flatten_observation(o_tm1) o_t = flatten_observation(o_t) return replay_sample.ReplaySample( info=sample.info, data=(o_tm1, a_tm1, r_t, d_t, o_t)) batch_size = 10 #@param environment = rwrl_envs.load( domain_name=domain_name, task_name=f'realworld_{task_name}', environment_kwargs=dict(log_safety_vars=False, flat_observation=True), combined_challenge=combined_challenge) environment = single_precision.SinglePrecisionWrapper(environment) environment_spec = specs.make_environment_spec(environment) act_spec = environment_spec.actions obs_spec = environment_spec.observations dataset = rwrl.dataset( tmp_path, combined_challenge=combined_challenge_str, domain=domain_name, task=task_name, difficulty=difficulty, num_shards=num_shards, shuffle_buffer_size=10) dataset = dataset.map(preprocess_fn).batch(batch_size)
D4PG learner
#@title Auxiliary functions def make_networks( action_spec: specs.BoundedArray, hidden_size: int = 1024, num_blocks: int = 4, num_mixtures: int = 5, vmin: float = -150., vmax: float = 150., num_atoms: int = 51, ): """Creates networks used by the agent.""" num_dimensions = np.prod(action_spec.shape, dtype=int) policy_network = snt.Sequential([ networks.LayerNormAndResidualMLP( hidden_size=hidden_size, num_blocks=num_blocks), # Converts the policy output into the same shape as the action spec. snt.Linear(num_dimensions), # Note that TanhToSpec applies tanh to the input. networks.TanhToSpec(action_spec) ]) # The multiplexer concatenates the (maybe transformed) observations/actions. critic_network = snt.Sequential([ networks.CriticMultiplexer( critic_network=networks.LayerNormAndResidualMLP( hidden_size=hidden_size, num_blocks=num_blocks), observation_network=tf2_utils.batch_concat), networks.DiscreteValuedHead(vmin, vmax, num_atoms) ]) return { 'policy': policy_network, 'critic': critic_network, } # Create the networks to optimize. online_networks = make_networks(act_spec) target_networks = copy.deepcopy(online_networks) # Create variables. tf2_utils.create_variables(online_networks['policy'], [obs_spec]) tf2_utils.create_variables(online_networks['critic'], [obs_spec, act_spec]) tf2_utils.create_variables(target_networks['policy'], [obs_spec]) tf2_utils.create_variables(target_networks['critic'], [obs_spec, act_spec]) # The learner updates the parameters (and initializes them). learner = d4pg.D4PGLearner( policy_network=online_networks['policy'], critic_network=online_networks['critic'], target_policy_network=target_networks['policy'], target_critic_network=target_networks['critic'], dataset=dataset, discount=0.99, target_update_period=100)
Evaluation
# Create a logger. logger = loggers.TerminalLogger(label='evaluation', time_delta=1.) # Create an environment loop. loop = acme.EnvironmentLoop( environment=environment, actor=actors.DeprecatedFeedForwardActor(online_networks['policy']), logger=logger) loop.run(5)
The nbformat API returns a special dict. You don't need to worry about the details of the structure. The nbconvert API exposes some basic exporters for common formats and defaults. You will start by using one of them. First you will import it, then instantiate it using most of the defaults, and finally you will process the notebook downloaded earlier.
from IPython.config import Config from IPython.nbconvert import HTMLExporter # The `basic` template is used here. # Later you'll learn how to configure the exporter. html_exporter = HTMLExporter(config=Config({'HTMLExporter':{'default_template':'basic'}})) (body, resources) = html_exporter.from_notebook_node(jake_notebook)
notebooks/1 - IPython Notebook Examples/IPython Project Examples/Notebook/Using nbconvert as a Library.ipynb
tylere/docker-tmpnb-ee
apache-2.0
Use the IPython configuration/Traitlets system to enable it. If you have already set IPython configuration options, this system is familiar to you. Configuration options will always be of the form: ClassName.attribute_name = value You can create a configuration object a couple of different ways. Every time you launch IPython, configuration objects are created from reading config files in your profile directory. Instead of writing a config file, you can also do it programmatically using a dictionary. The following creates a config object that enables the figure extractor, and passes it to an HTMLExporter. The output is compared to an HTMLExporter without the config object.
from IPython.config import Config c = Config({ 'ExtractOutputPreprocessor':{'enabled':True} }) exportHTML = HTMLExporter() exportHTML_and_figs = HTMLExporter(config=c) (_, resources) = exportHTML.from_notebook_node(jake_notebook) (_, resources_with_fig) = exportHTML_and_figs.from_notebook_node(jake_notebook) print('resources without the "figures" key:') print(list(resources)) print('') print('ditto, notice that there\'s one more field:') print(list(resources_with_fig)) list(resources_with_fig['outputs'])
Custom Preprocessor There are an endless number of transformations that you may want to apply to a notebook. This is why we provide a way to register your own preprocessors that will be applied to the notebook after the default ones. To do so, you'll have to pass an ordered list of Preprocessors to the Exporter's constructor. For simple cell-by-cell transformations, a Preprocessor can be created using a decorator. For more complex operations, you need to subclass Preprocessor and define a preprocess method (as seen below). All preprocessors have a flag that allows you to enable and disable them via a configuration object.
from IPython.nbconvert.preprocessors import Preprocessor
import IPython.config
print("Three relevant docstrings")
print('=============================')
print(Preprocessor.__doc__)
print('=============================')
print(Preprocessor.preprocess.__doc__)
print('=============================')
print(Preprocessor.preprocess_cell.__doc__)
print('=============================')
Example The following demonstration was requested in an IPython GitHub issue, the ability to exclude a cell by index. Inject cells is similar, and won't be covered here. If you want to inject static content at the beginning/end of a notebook, use a custom template.
from IPython.utils.traitlets import Integer

class PelicanSubCell(Preprocessor):
    """A Pelican specific preprocessor to remove some of the cells of a notebook"""

    # I could also read the cells from nbc.metadata.pelican if someone wrote a JS extension,
    # but I'll stay with a configurable value.
    start = Integer(0, config=True, help="first cell of notebook to be converted")
    end   = Integer(-1, config=True, help="last cell of notebook to be converted")

    def preprocess(self, nb, resources):
        #nbc = deepcopy(nb)
        nbc = nb
        # don't print in a real preprocessor !!!
        print("I'll keep only cells from ", self.start, "to ", self.end, "\n\n")
        nbc.cells = nb.cells[self.start:self.end]
        return nbc, resources

# I create this on the fly, but this could be loaded from a DB, and config objects support merging...
c = Config()
c.PelicanSubCell.enabled = True
c.PelicanSubCell.start = 4
c.PelicanSubCell.end = 6
To illustrate the main ideas, we are going to use a fake data set from this website. This file contains artificial names, addresses, companies, phone numbers etc. for fictitious US characters. Here is the complete list of variables. The main purpose of this dataset is testing. Straight from the website: "Always test your software with a "worst-case scenario" amount of sample data, to get an accurate sense of its performance in the real world." The only issue is that the samples are stored in zipped csv files, i.e. we do not have a URL for the csv file directly. How to download a zipfile and read the csv file inside
# we need some extra tools to download and handle zip files import zipfile as zf import requests, io url = "https://www.briandunning.com/sample-data/us-500.zip" r = requests.get(url) file = zf.ZipFile(io.BytesIO(r.content)) file file.namelist() # there is one csv file inside file_csv = file.open(file.namelist()[0]) # compare the type with file above (this is readable by pandas) file_csv df = pd.read_csv(file_csv) # What do we have? print("Variables and types:\n\n", df.dtypes, sep='') df.head() # Sometimes this one-liner works too #pd.read_csv(url, compression='zip') # not in this case
Code/notebooks/bootcamp_pandas_adv3-indexing.ipynb
NYUDataBootcamp/Materials
mit
(1) Selecting data using loc loc is primarily a label-based indexer. That is, it selects rows and columns by their labels (variable names for columns, index values for rows). It also works with a boolean array. The syntax is `data.loc[<row selection>, <column selection>]` First, set an arbitrary index variable
dff = df.set_index(['last_name']) dff.head()
Now we can directly select rows by their index (last_name) values (just like we do with columns)
dff.loc['Butt'] # multiple rows dff.loc[['Butt', 'Venere']] # select a subset of the data (subDataFrame) dff.loc[['Butt', 'Foller'], ['city', 'email']] # ranges of index labels dff.loc[['Butt', 'Foller'], 'address':'phone2']
Boolean indexing using loc This is the most common way to work with data: pass an array of True/False values to .loc to select the rows/columns with True values.
dff.loc[dff['city'] == 'New Orleans']
In fact, we don't need the loc indexer for this kind of task
dff[dff['city'] == 'New Orleans']
But what if we don't want all variables?
dff.loc[dff['city'] == 'New Orleans', ['company_name', 'zip']]
How would you get the same dataframe without loc?
dff[dff['city'] == 'New Orleans'][['company_name', 'zip']] # matter of taste
Recall the string methods applicable to DataFrames
dff[dff['email'].str.endswith("gmail.com")].head()
and the isin method?
dff.loc[dff['city'].isin(['New Orleans', 'New York'])] # intersection fo the two? gmails = dff['email'].str.endswith("gmail.com") NYNO = (dff['city'] == 'New Orleans') | (dff['city'] =='New York') dff[gmails & NYNO]
A tricky one: we can pass a function that returns True/False values to .apply() and evaluate it at each row
def short_company_name(x): """ returns True if x contains less than 2 words """ return len(x.split(' ')) < 2 dff.loc[dff['company_name'].apply(short_company_name)]
(2) Selecting data using iloc iloc is primarily used for integer position based indexing. That is, it selects rows and columns by number, in the order that they appear in the data frame. Positions run from 0 to df.shape[axis] - 1 along each axis. The syntax is `data.iloc[<row selection>, <column selection>]`
# Rows: df.iloc[0] # first row df.iloc[-1] # last row # Columns: df.iloc[:, 0] # first column = first variable (first_name) df.iloc[:, -1] # last column (web)
For multiple columns and rows, use slicer
df.iloc[:5]     # first five rows
df.iloc[:, :2]  # first two columns
df.iloc[[0, 4, 7, 25],  # 1st, 5th, 8th, 26th rows
        [0, 5, 6]]      # 1st, 6th, 7th columns
df.iloc[:5, 5:8]  # first 5 rows and 6th, 7th, 8th columns
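One detail worth keeping in mind when slicing: iloc slices are end-exclusive, like ordinary Python slicing, while loc slices on labels are end-inclusive. A self-contained toy example (not the us-500 data):

```python
import pandas as pd

toy = pd.DataFrame({'a': [10, 20, 30, 40]},
                   index=['w', 'x', 'y', 'z'])

# iloc slices are end-exclusive, like regular Python slicing:
by_position = toy.iloc[0:2]      # rows at positions 0 and 1 -> 'w', 'x'

# ...while loc slices on labels are end-INCLUSIVE:
by_label = toy.loc['w':'y']      # rows 'w', 'x' AND 'y'

print(len(by_position), len(by_label))
```

So the same-looking slice returns two rows by position but three rows by label.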
(3) Selecting data using ix ix is a hybrid of loc and iloc. In general, 1. it is label-location based and acts just like loc 2. However, it also supports integer-location based selection just like iloc when passed an integer. The second option only works when the index of the DataFrame is NOT an integer. The syntax is `data.ix[<row selection>, <column selection>]` Explicit usage of loc and iloc is preferred
# ix indexing works just the same as loc when passed labels dff.ix['Butt', 'city'] == dff.loc['Butt', 'city'] # ix indexing works the same as iloc when passed integers. dff.ix[33, 7] == dff.iloc[33, 7]
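Since ix was later deprecated in pandas, the two lookups above are better written explicitly with loc and iloc. A small self-contained sketch on toy data (the names below are made up, not the dff frame):

```python
import pandas as pd

toy = pd.DataFrame({'city': ['Boston', 'Austin']},
                   index=['Butt', 'Venere'])

# Label-based lookup: what toy.ix['Butt', 'city'] would have done, written with loc.
by_label = toy.loc['Butt', 'city']

# Position-based lookup: what toy.ix[0, 0] would have done, written with iloc.
by_position = toy.iloc[0, 0]

print(by_label == by_position)
```

Being explicit about label-based vs position-based access removes the ambiguity that made ix error-prone.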
Hierarchical indexing with loc and iloc Multi-level indexing allows us to work with higher dimensional data while storing info in lower dimensional data structures like 2D DataFrame or 1D Series. More on this here. Read the WEO dataset that we have already used a couple of times
url_weo = 'http://www.imf.org/external/pubs/ft/weo/2016/02/weodata/WEOOct2016all.xls' # (1) define the column indices col_indices = [1, 2, 3] + list(range(9, 46)) # (2) download the dataset weo = pd.read_csv(url_weo, sep = '\t', usecols=col_indices, skipfooter=1, engine='python', na_values=['n/a', '--'], thousands =',') # (3) change column labels to something more intuitive weo = weo.rename(columns={'WEO Subject Code': 'Variable', 'Subject Descriptor': 'Description'}) # (4) create debt and deficits dataframe variables = ['GGXWDG_NGDP', 'GGXCNL_NGDP'] data = weo[weo['Variable'].isin(variables)] data['Variable'] = data['Variable'].replace(to_replace=['GGXWDG_NGDP', 'GGXCNL_NGDP'], value=['Debt', 'Surplus']) data1 = data.set_index(['ISO', 'Variable']) data1.head() data1.ix[('AFG', slice(None)), '1980':'1983']
We set up the Veldhuizen and Lamont multiobjective optimization problem 2 (vlmop2). The objectives of vlmop2 are very easy to model, making it ideal for illustrating Bayesian multiobjective optimization.
# Objective def vlmop2(x): transl = 1 / np.sqrt(2) part1 = (x[:, [0]] - transl) ** 2 + (x[:, [1]] - transl) ** 2 part2 = (x[:, [0]] + transl) ** 2 + (x[:, [1]] + transl) ** 2 y1 = 1 - np.exp(-1 * part1) y2 = 1 - np.exp(-1 * part2) return np.hstack((y1, y2)) # Setup input domain domain = gpflowopt.domain.ContinuousParameter('x1', -2, 2) + \ gpflowopt.domain.ContinuousParameter('x2', -2, 2) # Plot def plotfx(): X = gpflowopt.design.FactorialDesign(101, domain).generate() Z = vlmop2(X) shape = (101, 101) axes = [] plt.figure(figsize=(15, 5)) for i in range(Z.shape[1]): axes = axes + [plt.subplot2grid((1, 2), (0, i))] axes[-1].contourf(X[:,0].reshape(shape), X[:,1].reshape(shape), Z[:,i].reshape(shape)) axes[-1].set_title('Objective {}'.format(i+1)) axes[-1].set_xlabel('x1') axes[-1].set_ylabel('x2') axes[-1].set_xlim([domain.lower[0], domain.upper[0]]) axes[-1].set_ylim([domain.lower[1], domain.upper[1]]) return axes plotfx();
doc/source/notebooks/multiobjective.ipynb
GPflow/GPflowOpt
apache-2.0
Multiobjective acquisition function We can model the belief of each objective by one GP prior or model each objective separately using a GP prior. We illustrate the latter approach here. A set of data points arranged in a Latin Hypercube is evaluated on the vlmop2 function. In multiobjective optimization the definition of improvement is ambiguous. Here we define improvement using the contributing hypervolume, which will determine the focus on density and uniformity of the Pareto set during sampling. For instance, due to the nature of the contributing hypervolume, steep slopes in the Pareto front will be sampled less densely. The hypervolume-based probability of improvement is based on the model(s) of the objective functions (vlmop2) and aggregates all the information in one cost function which is a balance between:

- improving our belief of the objectives (high uncertainty)
- favoring points improving the Pareto set (large contributing hypervolume with a likely higher uncertainty)
- focusing on augmenting the Pareto set (small contributing hypervolume but with low uncertainty).
# Initial evaluations design = gpflowopt.design.LatinHyperCube(11, domain) X = design.generate() Y = vlmop2(X) # One model for each objective objective_models = [gpflow.gpr.GPR(X.copy(), Y[:,[i]].copy(), gpflow.kernels.Matern52(2, ARD=True)) for i in range(Y.shape[1])] for model in objective_models: model.likelihood.variance = 0.01 hvpoi = gpflowopt.acquisition.HVProbabilityOfImprovement(objective_models)
Running the Bayesian optimizer The optimization surface of multiobjective acquisition functions can be even more challenging than, e.g., standard expected improvement. Hence, a hybrid optimization scheme is preferred: first a Monte Carlo step, then a gradient-based optimization started from the best point found. We then run the Bayesian optimizer and allow it to select up to 20 additional evaluations.
# First setup the optimization strategy for the acquisition function # Combining MC step followed by L-BFGS-B acquisition_opt = gpflowopt.optim.StagedOptimizer([gpflowopt.optim.MCOptimizer(domain, 1000), gpflowopt.optim.SciPyOptimizer(domain)]) # Then run the BayesianOptimizer for 20 iterations optimizer = gpflowopt.BayesianOptimizer(domain, hvpoi, optimizer=acquisition_opt, verbose=True) result = optimizer.optimize([vlmop2], n_iter=20) print(result) print(optimizer.acquisition.pareto.front.value)
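The staged scheme combines a global Monte Carlo search with a local gradient-based refinement. A self-contained sketch of the idea, independent of GPflowOpt (the objective, bounds, and step sizes below are made-up for illustration):

```python
import numpy as np

def staged_minimize(f, lower, upper, n_mc=1000, n_steps=200, lr=0.05, seed=0):
    """Sketch of a staged strategy: Monte Carlo search over the domain,
    then local refinement of the best candidate by finite-difference descent."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lower, upper, size=(n_mc, len(lower)))
    x = X[np.argmin([f(xi) for xi in X])]          # best MC candidate
    eps = 1e-6
    for _ in range(n_steps):                       # crude local descent
        grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                         for e in np.eye(len(x))])
        x = np.clip(x - lr * grad, lower, upper)
    return x

x_opt = staged_minimize(lambda x: np.sum((x - 0.3) ** 2),
                        np.array([-2., -2.]), np.array([2., 2.]))
print(x_opt)  # close to [0.3, 0.3]
```

GPflowOpt's StagedOptimizer plays the same role, with SciPyOptimizer (L-BFGS-B) replacing the crude finite-difference descent used here.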
doc/source/notebooks/multiobjective.ipynb
GPflow/GPflowOpt
apache-2.0
For multiple objectives the returned OptimizeResult object contains the identified Pareto set instead of just a single optimum. Note that this is computed on the raw data Y. The hypervolume-based probability of improvement operates on the Pareto set derived from the model predictions of the training data (to handle noise). This latter Pareto set can be found as optimizer.acquisition.pareto.front.value. Plotting the results Let's plot the belief of the final models and the acquisition function.
def plot(): grid_size = 51 # 101 shape = (grid_size, grid_size) Xeval = gpflowopt.design.FactorialDesign(grid_size, domain).generate() Yeval_1, _ = hvpoi.models[0].predict_f(Xeval) Yeval_2, _ = hvpoi.models[1].predict_f(Xeval) Yevalc = hvpoi.evaluate(Xeval) plots = [((0,0), 1, 1, 'Objective 1 model', Yeval_1[:,0]), ((0,1), 1, 1, 'Objective 2 model', Yeval_2[:,0]), ((1,0), 2, 2, 'hypervolume-based PoI', Yevalc)] plt.figure(figsize=(7,7)) for i, (plot_pos, plot_rowspan, plot_colspan, plot_title, plot_data) in enumerate(plots): data = hvpoi.data[0] ax = plt.subplot2grid((3, 2), plot_pos, rowspan=plot_rowspan, colspan=plot_colspan) ax.contourf(Xeval[:,0].reshape(shape), Xeval[:,1].reshape(shape), plot_data.reshape(shape)) ax.scatter(data[:,0], data[:,1], c='w') ax.set_title(plot_title) ax.set_xlabel('x1') ax.set_ylabel('x2') ax.set_xlim([domain.lower[0], domain.upper[0]]) ax.set_ylim([domain.lower[1], domain.upper[1]]) plt.tight_layout() # Plot representing the model belief, and the belief mapped to EI and PoF plot() for model in objective_models: print(model)
doc/source/notebooks/multiobjective.ipynb
GPflow/GPflowOpt
apache-2.0
Finally, we can extract and plot the Pareto front ourselves using the pareto.non_dominated_sort function on the final data matrix Y. The non-dominated sort returns the Pareto set (non-dominated solutions) as well as a dominance vector holding the number of dominated points for each point in Y. For example, we could only select the points with dom == 2, or dom == 0 (the latter retrieves the non-dominated solutions). Here we choose to use the dominance vector to color the points.
# Hypervolume indicator with respect to a reference point
R = np.array([1.5, 1.5])
print('R:', R)
hv = hvpoi.pareto.hypervolume(R)
print('Hypervolume indicator:', hv)

# Plot the Pareto front, coloring each point by its dominance count
plt.figure(figsize=(7, 7))
pf, dom = gpflowopt.pareto.non_dominated_sort(hvpoi.data[1])
plt.scatter(hvpoi.data[1][:,0], hvpoi.data[1][:,1], c=dom)
plt.title('Pareto set')
plt.xlabel('Objective 1')
plt.ylabel('Objective 2')
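To illustrate what the non-dominated sort computes, here is a pure-NumPy O(n²) sketch for a minimization problem (an illustration of one common convention, where the dominance count is the number of points dominating each point; not GPflowOpt's implementation):

```python
import numpy as np

def non_dominated_sort(Y):
    """Dominance count per point (minimization); a count of 0 marks the Pareto set."""
    n = Y.shape[0]
    dom = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            # j dominates i if it is no worse in every objective and better in at least one
            if i != j and np.all(Y[j] <= Y[i]) and np.any(Y[j] < Y[i]):
                dom[i] += 1
    return Y[dom == 0], dom

Y = np.array([[1., 3.], [2., 2.], [3., 1.], [3., 3.]])
pf, dom = non_dominated_sort(Y)
print(pf)   # the three non-dominated points
print(dom)  # [0 0 0 3]
```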
doc/source/notebooks/multiobjective.ipynb
GPflow/GPflowOpt
apache-2.0
<center> Doing Math with Python </center> <center> <p> <b>Amit Saha</b> <p>May 29, PyCon US 2016 Education Summit <p>Portland, Oregon </center> ## About me - Software Engineer at [Freelancer.com](https://www.freelancer.com) HQ in Sydney, Australia - Author of "Doing Math with Python" (No Starch Press, 2015) - Writes for Linux Voice, Linux Journal, etc. - [Blog](http://echorand.me), [GitHub](http://github.com/amitsaha) #### Contact - [@echorand](http://twitter.com/echorand) - [Email](mailto:amitsaha.in@gmail.com) ### This talk - a proposal, a hypothesis, a statement *Python can lead to a more enriching learning and teaching experience in the classroom*
As I will attempt to describe in the next slides, Python can lead to a more fun learning and teaching experience. It can serve as a basic calculator, a fancy calculator, and a gateway to Math, Science, Geography, and more. Tools that will help us in that quest are:
notebooks/.ipynb_checkpoints/slides-checkpoint.ipynb
doingmathwithpython/pycon-us-2016
mit
(Main) Tools <img align="center" src="collage/logo_collage.png"></img> Python - a scientific calculator Python 3 is my favorite calculator (not Python 2 because 1/2 = 0) fabs(), abs(), sin(), cos(), gcd(), log() (See math) Descriptive statistics (See statistics) Python - a scientific calculator Develop your own functions: unit conversion, finding correlation, .., anything really Use PYTHONSTARTUP to extend the battery of readily available mathematical functions $ PYTHONSTARTUP=~/work/dmwp/pycon-us-2016/startup_math.py idle3 -s Unit conversion functions ``` unit_conversion() 1. Kilometers to Miles 2. Miles to Kilometers 3. Kilograms to Pounds 4. Pounds to Kilograms 5. Celsius to Fahrenheit 6. Fahrenheit to Celsius Which conversion would you like to do? 6 Enter temperature in fahrenheit: 98 Temperature in celsius: 36.66666666666667 ``` Finding linear correlation ``` x = [1, 2, 3, 4] y = [2, 4, 6.1, 7.9] find_corr_x_y(x, y) 0.9995411791453812 ``` Python - a really fancy calculator SymPy - a pure Python symbolic math library from sympy import awesomeness - don't try that :)
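The find_corr_x_y helper shown in the session above computes the Pearson correlation coefficient. A minimal pure-Python version might look like this (a sketch using the standard textbook formula; the exact body in the startup file may differ):

```python
from math import sqrt

def find_corr_x_y(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    if len(x) != len(y):
        raise ValueError('x and y must have the same length')
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(xi * yi for xi, yi in zip(x, y))
    sum_x2 = sum(xi * xi for xi in x)
    sum_y2 = sum(yi * yi for yi in y)
    numerator = n * sum_xy - sum_x * sum_y
    denominator = sqrt(n * sum_x2 - sum_x ** 2) * sqrt(n * sum_y2 - sum_y ** 2)
    return numerator / denominator

print(find_corr_x_y([1, 2, 3, 4], [2, 4, 6.1, 7.9]))  # 0.9995411791453812
```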
When you bring SymPy into the picture, things really get awesome. You are suddenly writing computer programs which are capable of speaking algebra. You are no longer limited to numbers. # Create graphs from algebraic expressions from sympy import Symbol, plot x = Symbol('x') p = plot(2*x**2 + 2*x + 2) # Solve equations from sympy import solve, Symbol x = Symbol('x') solve(2*x + 1) # Limits from sympy import Symbol, Limit, sin x = Symbol('x') Limit(sin(x)/x, x, 0).doit() # Derivative from sympy import Symbol, Derivative, sin, init_printing x = Symbol('x') init_printing() Derivative(sin(x)**(2*x+1), x).doit() # Indefinite integral from sympy import Symbol, Integral, sqrt, sin, init_printing x = Symbol('x') init_printing() Integral(sqrt(x)).doit() # Definite integral from sympy import Symbol, Integral, sqrt x = Symbol('x') Integral(sqrt(x), (x, 0, 2)).doit()
notebooks/.ipynb_checkpoints/slides-checkpoint.ipynb
doingmathwithpython/pycon-us-2016
mit
Python - Making other subjects more lively <img align="center" src="collage/collage1.png"></img> matplotlib basemap Interactive Jupyter Notebooks Bringing Science to life Animation of a Projectile motion Drawing fractals Interactively drawing a Barnsley Fern The world is your graph paper Showing places on a digital map Great base for the future Statistics and Graphing data -> Data Science Differential Calculus -> Machine learning Application of differentiation Use gradient descent to find a function's minimum value Predict the college admission score based on high school math score Use gradient descent as the optimizer for single variable linear regression model
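The gradient descent application mentioned above fits in a few lines. A generic sketch (the objective, starting point, and step size are made-up for illustration):

```python
def grad_descent(df, x0, step=0.1, tol=1e-6, max_iter=10000):
    """Find a local minimum of a function given its derivative df."""
    x = x0
    for _ in range(max_iter):
        x_new = x - step * df(x)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# Minimum of f(x) = x**2 + 2*x, with f'(x) = 2*x + 2, lies at x = -1
print(grad_descent(lambda x: 2 * x + 2, x0=3.0))  # close to -1.0
```

The same loop, generalized to a vector of parameters, is the optimizer behind the single variable linear regression model mentioned above.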
### TODO: digit recognition using neural networks ### scikit-learn, pandas, scipy, statsmodels
notebooks/.ipynb_checkpoints/slides-checkpoint.ipynb
doingmathwithpython/pycon-us-2016
mit
Reading in files is quite easy. There are several readers in PyDSD, in either pydsd.io or pydsd.io.aux_readers depending on their level of support. In this case we will use the read_parsivel_arm_netcdf function. Generally we design readers for each new format we encounter, so if you find a format we cannot read, feel free to forward it to us or, even better, submit a reader yourself. They are very easy to implement.
dsd = pydsd.read_parsivel_arm_netcdf(filename)
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1
So at this point we have the drop size distribution read in. Let's start by looking at the format of the dsd object. A full listing of the features can be found at http://josephhardinee.github.io/PyDSD/pydsd.html#module-pydsd.DropSizeDistribution . Generally data is stored in the fields dictionary. This is a dictionary of dictionaries where each key corresponds to a variable. Inside each of these dictionaries the data is stored in data while other metadata is attached at the top level.
dsd.fields.keys() dsd.fields['Nd']
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1
We can now start to plot and visualize some of this data. PyDSD has several built in plotting functions, and you can always pull the data out yourself for more custom plotting routines. Many built-in plotting routines are available in the pydsd.plot module.
pydsd.plot.plot_dsd(dsd) plt.title('Drop Size Distribution - November 12, 2018')
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1
Plotting routines usually take in extra arguments to help customize the plot. Please check the documentation for the full list of arguments. We'll revisit plotting in a bit once we've calculated a few more interesting parameters. Depending on the type of Disdrometer, the drop spectra may be stored. PyDSD has some capabilities for working with spectra data including filtering, and reducing this back down to a DSD measurement. TODO: Examples of spectra processing DSD Estimation PyDSD has routines implemented for calculating various parameterizations of the Drop Size Distribution. A default set of these can be called using the DropSizeDistribution.calculate_dsd_parameterization class function. Let's do this and see what we get.
dsd.calculate_dsd_parameterization()
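To illustrate the kind of quantity this call derives, the median volume diameter D0 can be computed from a binned N(D) as the diameter where the cumulative D³-weighted mass reaches half its total. A self-contained NumPy sketch (not PyDSD's implementation; the uniform distribution is a made-up test case):

```python
import numpy as np

def median_volume_diameter(diameters, Nd):
    """D0: diameter at which the cumulative D**3-weighted mass reaches 50%."""
    dD = np.gradient(diameters)
    mass = Nd * diameters ** 3 * dD          # mass contribution per bin
    cum = np.cumsum(mass) / np.sum(mass)     # cumulative mass fraction
    return float(np.interp(0.5, cum, diameters))

# Uniform N(D) on [0, 2] mm: analytically D0 = 2 * 0.5**0.25, about 1.68 mm
D = np.linspace(0.01, 2.0, 2000)
print(median_volume_diameter(D, np.ones_like(D)))  # about 1.68
```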
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1
Now PyDSD will calculate various parameters and store these on the dsd object in the fields dictionary.
dsd.fields.keys()
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1
And we can dig down further and see what is in one of these objects. Similar to the defaults read in, this will store the values in the data member, and metadata attached to the object.
dsd.fields['D0']
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1
Now we can use some built in plotting functions to examine these. The first plot type is a time series plot of arbitrary 1D parameters. For example we can plot the D0 variable corresponding to median drop diameter. The plots know how to handle time and various pieces of metadata. Note that you can pass through arguments accepted by plt.plot and we will pass these on so you can customize the plot.
pydsd.plot.plot_ts(dsd, 'D0', marker='.')
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1
These plots accept an axis argument as well in case you want to use this to make side-by-side comparison plots.
plt.figure(figsize=(12, 6))

ax = plt.subplot(1, 2, 1)
pydsd.plot.plot_ts(dsd, 'D0', marker='.', ax=ax)

ax = plt.subplot(1, 2, 2)
pydsd.plot.plot_ts(dsd, 'Nw', marker='.', ax=ax)
plt.tight_layout()
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1
We have other standard types of plots built in to make life easier. For instance a normal thing to do is compare the relationship of the median drop diameter, with the normalized intercept parameter (Nw).
pydsd.plot.plot_NwD0(dsd)
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1