Now let's construct a more complicated loop to do what we want. First, we do some things we don't need to do inside the loop: let's reload our atlas, and re-initiate our masker and correlation_measure.
from nilearn.input_data import NiftiLabelsMasker
from nilearn.connectome import ConnectivityMeasure
from nilearn import datasets

# load atlas
multiscale = datasets.fetch_atlas_basc_multiscale_2015()
atlas_filename = multiscale.scale064

# initialize masker (change verbosity)
masker = NiftiLabelsMasker(labels_img=atlas_filename, standardize=True,
                           memory='nilearn_cache', verbose=0)

# initialize correlation measure, set to vectorize
correlation_measure = ConnectivityMeasure(kind='correlation', vectorize=True,
                                          discard_diagonal=True)
BSD-3-Clause
notebooks/.ipynb_checkpoints/machine_learning_tutorial-checkpoint.ipynb
orchid00/nipype_arcana_workshop
Okay -- now that we have that taken care of, let's run our big loop! **NOTE**: On a laptop, this might take a few minutes.
all_features = []  # here is where we will put the data (a container)

for i, sub in enumerate(data):
    # extract the timeseries from the ROIs in the atlas
    time_series = masker.fit_transform(sub, confounds=confounds[i])
    # create a region x region correlation matrix
    correlation_matrix = correlation_measure.fit_transform([time_series])[0]
    # add to our container
    all_features.append(correlation_matrix)
    # keep track of status
    print('finished %s of %s' % (i + 1, len(data)))

# Let's save the data to disk
import numpy as np
np.savez_compressed('MAIN_BASC064_subsamp_features', a=all_features)
In case you do not want to run the full loop on your computer, you can load the output of the loop here!
feat_file = 'MAIN_BASC064_subsamp_features.npz'
X_features = np.load(feat_file)['a']
X_features.shape
Okay, so we've got our features. We can visualize our feature matrix:
import matplotlib.pyplot as plt

plt.imshow(X_features, aspect='auto')
plt.colorbar()
plt.title('feature matrix')
plt.xlabel('features')
plt.ylabel('subjects')
Get Y (our target) and assess its distribution
# Let's load the phenotype data
import os
import pandas

pheno_path = os.path.join(wdir, 'participants.tsv')
pheno = pandas.read_csv(pheno_path, sep='\t').sort_values('participant_id')
pheno.head()
Looks like there is a column labeling age. Let's capture it in a variable
y_age = pheno['Age']
Maybe we should have a look at the distribution of our target variable
import matplotlib.pyplot as plt
import seaborn as sns

sns.distplot(y_age)
Prepare data for machine learning

Here, we will define a "training sample" where we can play around with our models. We will also set aside a "validation" sample that we will not touch until the end.

We want to be sure that our training and test samples are matched! We can do that with a "stratified split". This dataset has a variable indicating AgeGroup. We can use that to make sure our training and testing sets are balanced!
age_class = pheno['AgeGroup']
age_class.value_counts()

from sklearn.model_selection import train_test_split

# Split the sample to training/validation with a 60/40 ratio, and
# stratify by age class, and also shuffle the data.
X_train, X_val, y_train, y_val = train_test_split(
    X_features,          # x
    y_age,               # y
    test_size=0.4,       # 60%/40% split
    shuffle=True,        # shuffle dataset before splitting
    stratify=age_class,  # keep distribution of ageclass consistent
                         # betw. train & test sets.
    random_state=123     # same shuffle each time
)

# print the size of our training and test groups
print('training:', len(X_train), 'testing:', len(X_val))
Let's visualize the distributions to be sure they are matched
sns.distplot(y_train)
sns.distplot(y_val)
Run your first model!

Machine learning can get pretty fancy pretty quickly. We'll start with a fairly standard regression model called a Support Vector Regressor (SVR). While this may seem unambitious, simple models can be very robust. And we probably don't have enough data to create more complex models (but we can try later).

For more information, see this excellent resource: https://hal.inria.fr/hal-01824205

Let's fit our first model!
from sklearn.svm import SVR

l_svr = SVR(kernel='linear')  # define the model
l_svr.fit(X_train, y_train)   # fit the model
Well... that was easy. Let's see how well the model learned the data!
# predict the training data based on the model
y_pred = l_svr.predict(X_train)

# calculate the model accuracy
acc = l_svr.score(X_train, y_train)
Let's view our results and plot them all at once!
# print results
print('accuracy (R2)', acc)

sns.regplot(y_pred, y_train)
plt.xlabel('Predicted Age')
HOLY COW! Machine learning is amazing!!! Almost a perfect fit!...which means there's something wrong. What's the problem here?
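The problem: we scored the model on the very data it was trained on. With 2016 features and only a few dozen subjects, a linear model can essentially memorize the training set even when there is nothing to learn. A minimal sketch of this on purely synthetic random data (the array shapes mirror our problem, but nothing here comes from the fMRI features):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.RandomState(0)
X_noise = rng.randn(50, 2016)  # 50 "subjects", 2016 random features
y_noise = rng.randn(50)        # a target with no relation to X at all

model = SVR(kernel='linear')
model.fit(X_noise, y_noise)

# Scoring on the data the model was fit to looks fantastic...
print('train R2:', model.score(X_noise, y_noise))

# ...but on fresh noise from the same distribution, it falls apart
X_new = rng.randn(50, 2016)
y_new = rng.randn(50)
print('test R2:', model.score(X_new, y_new))
```

The training R2 is near perfect and the test R2 is near (or below) zero, even though the features are pure noise. That is why we always need to evaluate on data the model has never seen.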
from sklearn.model_selection import train_test_split

# Split the sample to training/test with a 75/25 ratio, and
# stratify by age class, and also shuffle the data.
age_class2 = pheno.loc[y_train.index, 'AgeGroup']

X_train2, X_test, y_train2, y_test = train_test_split(
    X_train,              # x
    y_train,              # y
    test_size=0.25,       # 75%/25% split
    shuffle=True,         # shuffle dataset before splitting
    stratify=age_class2,  # keep distribution of ageclass consistent
                          # betw. train & test sets.
    random_state=123      # same shuffle each time
)

# print the size of our training and test groups
print('training:', len(X_train2), 'testing:', len(X_test))

from sklearn.metrics import mean_absolute_error

# fit model just to training data
l_svr.fit(X_train2, y_train2)

# predict the *test* data based on the model trained on X_train2
y_pred = l_svr.predict(X_test)

# calculate the model accuracy
acc = l_svr.score(X_test, y_test)
mae = mean_absolute_error(y_true=y_test, y_pred=y_pred)

# print results
print('accuracy (R2) = ', acc)
print('MAE = ', mae)

sns.regplot(y_pred, y_test)
plt.xlabel('Predicted Age')
Not perfect, but as predicting with unseen data goes, not too bad! Especially with a training sample of "only" 69 subjects. But we can do better! For example, we can increase the size of our training set while simultaneously reducing bias by instead using 10-fold cross-validation.
from sklearn.model_selection import cross_val_predict, cross_val_score

# predict
y_pred = cross_val_predict(l_svr, X_train, y_train, cv=10)

# scores
acc = cross_val_score(l_svr, X_train, y_train, cv=10)
mae = cross_val_score(l_svr, X_train, y_train, cv=10,
                      scoring='neg_mean_absolute_error')
We can look at the accuracy of the predictions for each fold of the cross-validation
for i in range(10):
    print('Fold {} -- Acc = {}, MAE = {}'.format(i, acc[i], -mae[i]))
We can also look at the overall accuracy of the model
from sklearn.metrics import r2_score

overall_acc = r2_score(y_train, y_pred)
overall_mae = mean_absolute_error(y_train, y_pred)
print('R2:', overall_acc)
print('MAE:', overall_mae)

sns.regplot(y_pred, y_train)
plt.xlabel('Predicted Age')
Not too bad at all! But more importantly, this is a more accurate estimation of our model's predictive efficacy. Our sample size is larger and this is based on several rounds of prediction of unseen data. For example, we can now see that the effect is being driven by the model's successful parsing of adults vs. children, but it is not performing so well within the adult or child groups. This was not evident during our previous iteration of the model.

Tweak your model

It's very important to learn when and where it's appropriate to "tweak" your model. Since we have done all of the previous analysis in our training data, it's fine to try out different models. But we **absolutely cannot** "test" them on our left-out data. If we do, we are in great danger of overfitting.

It is not uncommon to try other models, or tweak hyperparameters. In this case, due to our relatively small sample size, we are probably not powered sufficiently to do so, and we would once again risk overfitting. However, for the sake of demonstration, we will do some tweaking.

We will try a few different examples:
* Normalizing our target data
* Tweaking our hyperparameters
* Trying a more complicated model
* Feature selection

Normalize the target data
# Create a log transformer function and log transform Y (age)
from sklearn.preprocessing import FunctionTransformer

log_transformer = FunctionTransformer(func=np.log, validate=True)
log_transformer.fit(y_train.values.reshape(-1, 1))
y_train_log = log_transformer.transform(y_train.values.reshape(-1, 1))[:, 0]

sns.distplot(y_train_log)
plt.title("Log-Transformed Age")
Now let's go ahead and cross-validate our model once again with this new log-transformed target
# predict
y_pred = cross_val_predict(l_svr, X_train, y_train_log, cv=10)

# scores
acc = r2_score(y_train_log, y_pred)
mae = mean_absolute_error(y_train_log, y_pred)
print('R2:', acc)
print('MAE:', mae)

sns.regplot(y_pred, y_train_log)
plt.xlabel('Predicted Log Age')
plt.ylabel('Log Age')
Seems like a definite improvement, right? I think we can agree on that. But we can't forget about interpretability: the MAE is much less interpretable now.

Tweak the hyperparameters

Many machine learning algorithms have hyperparameters that can be "tuned" to optimize model fitting. Careful parameter tuning can really improve a model, but haphazard tuning will often lead to overfitting. Our SVR model has multiple hyperparameters. Let's explore some approaches for tuning them.
SVR?
One way is to plot a "Validation Curve" -- this will let us view changes in training and validation accuracy of a model as we shift its hyperparameters. We can do this easily with sklearn.
from sklearn.model_selection import validation_curve

C_range = 10. ** np.arange(-3, 8)  # A range of different values for C

train_scores, valid_scores = validation_curve(l_svr, X_train, y_train_log,
                                              param_name="C",
                                              param_range=C_range,
                                              cv=10)

# A bit of pandas magic to prepare the data for a seaborn plot
tScores = pandas.DataFrame(train_scores).stack().reset_index()
tScores.columns = ['C', 'Fold', 'Score']
tScores.loc[:, 'Type'] = ['Train' for x in range(len(tScores))]

vScores = pandas.DataFrame(valid_scores).stack().reset_index()
vScores.columns = ['C', 'Fold', 'Score']
vScores.loc[:, 'Type'] = ['Validate' for x in range(len(vScores))]

ValCurves = pandas.concat([tScores, vScores]).reset_index(drop=True)
ValCurves.head()

# And plot!
# g = sns.lineplot(x='C', y='Score', hue='Type', data=ValCurves)
# g.set_xticks(range(10))
# g.set_xticklabels(C_range, rotation=90)

g = sns.factorplot(x='C', y='Score', hue='Type', data=ValCurves)
plt.xticks(range(10))
g.set_xticklabels(C_range, rotation=90)
It looks like accuracy is better for higher values of C, and plateaus somewhere between 0.1 and 1. The default setting is C=1, so it looks like we can't really improve by changing C. But our SVR model actually has two hyperparameters, C and epsilon. Perhaps there is an optimal combination of settings for these two parameters. We can explore that somewhat quickly with a grid search, which is once again easily achieved with sklearn. Because we are fitting the model multiple times with cross-validation, this will take some time.
from sklearn.model_selection import GridSearchCV

C_range = 10. ** np.arange(-3, 8)
epsilon_range = 10. ** np.arange(-3, 8)
param_grid = dict(epsilon=epsilon_range, C=C_range)

grid = GridSearchCV(l_svr, param_grid=param_grid, cv=10)
grid.fit(X_train, y_train_log)
Now that the grid search has completed, let's find out what was the "best" parameter combination
print(grid.best_params_)
And what if we redo our cross-validation with this parameter set?
y_pred = cross_val_predict(SVR(kernel='linear', C=0.10, epsilon=0.10, gamma='auto'),
                           X_train, y_train_log, cv=10)

# scores
acc = r2_score(y_train_log, y_pred)
mae = mean_absolute_error(y_train_log, y_pred)
print('R2:', acc)
print('MAE:', mae)

sns.regplot(y_pred, y_train_log)
plt.xlabel('Predicted Log Age')
plt.ylabel('Log Age')
Perhaps unsurprisingly, the model fit is actually exactly the same as what we had with our defaults. There's a reason they are defaults ;-)

Grid search can be a powerful and useful tool. But can you think of a way that, if not properly utilized, it could lead to overfitting? You can find a nice set of tutorials, with links to very helpful content on how to tune hyperparameters while staying aware of over- and under-fitting, here: https://scikit-learn.org/stable/modules/learning_curve.html

Trying a more complicated model

In principle, there is no real reason to do this. Perhaps one could make an argument for a quadratic relationship with age, but we probably don't have enough subjects to learn a complicated non-linear model. But for the sake of demonstration, we can give it a shot. We'll use a validation curve to see the result of our model if, instead of fitting a linear model, we try to fit a 2nd, 3rd, ... 8th order polynomial.
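One answer to the grid-search question above: if the cross-validation scores used to pick the winning parameters are also the scores you report, the reported performance is optimistically biased. A common remedy is nested cross-validation, where the grid search runs inside each outer training fold. A minimal sketch on synthetic data (the small grid and the synthetic X/y are purely for illustration, not our fMRI features):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, cross_val_score

rng = np.random.RandomState(0)
X = rng.randn(60, 10)
y = X[:, 0] + 0.1 * rng.randn(60)  # target driven by a single feature

# Inner loop: the grid search picks C using only its own training folds.
inner = GridSearchCV(SVR(kernel='linear'),
                     param_grid={'C': [0.01, 0.1, 1.0]},
                     cv=3)

# Outer loop: scores the *tuned* model on data the search never saw.
outer_scores = cross_val_score(inner, X, y, cv=5)
print('nested CV R2: %.2f +/- %.2f' % (outer_scores.mean(), outer_scores.std()))
```

Each outer fold scores a model whose hyperparameters were chosen without ever seeing that fold, so the averaged score is an honest performance estimate.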
validation_curve?

from sklearn.model_selection import validation_curve

degree_range = list(range(1, 8))  # A range of polynomial degrees to try

train_scores, valid_scores = validation_curve(SVR(kernel='poly', gamma='scale'),
                                              X=X_train, y=y_train_log,
                                              param_name="degree",
                                              param_range=degree_range,
                                              cv=10)

# A bit of pandas magic to prepare the data for a seaborn plot
tScores = pandas.DataFrame(train_scores).stack().reset_index()
tScores.columns = ['Degree', 'Fold', 'Score']
tScores.loc[:, 'Type'] = ['Train' for x in range(len(tScores))]

vScores = pandas.DataFrame(valid_scores).stack().reset_index()
vScores.columns = ['Degree', 'Fold', 'Score']
vScores.loc[:, 'Type'] = ['Validate' for x in range(len(vScores))]

ValCurves = pandas.concat([tScores, vScores]).reset_index(drop=True)
ValCurves.head()

# And plot!
# g = sns.lineplot(x='Degree', y='Score', hue='Type', data=ValCurves)
# g.set_xticks(range(10))
# g.set_xticklabels(degree_range, rotation=90)

g = sns.factorplot(x='Degree', y='Score', hue='Type', data=ValCurves)
plt.xticks(range(10))
g.set_xticklabels(degree_range, rotation=90)
It appears that we cannot improve our model by increasing the complexity of the fit. If one looked only at the training data, one might surmise that a 2nd order fit could be a slightly better model. But that improvement does not generalize to the validation data.
# y_pred = cross_val_predict(SVR(kernel='rbf', gamma='scale'), X_train,
#                            y_train_log, cv=10)
# # scores
# acc = r2_score(y_train_log, y_pred)
# mae = mean_absolute_error(y_train_log, y_pred)
# print('R2:', acc)
# print('MAE:', mae)
# sns.regplot(y_pred, y_train_log)
# plt.xlabel('Predicted Log Age')
# plt.ylabel('Log Age')
Feature selection

Right now, we have 2016 features. Are all of those really going to contribute to the model stably? Intuitively, models tend to perform better when there are fewer, more important features than when there are many, less important features. The tough part is figuring out which features are useful or important. Here we will quickly try a basic feature selection strategy.

The SelectPercentile() function will select the top X% of features based on univariate tests. This is a way of identifying theoretically more useful features. But remember, significance != prediction! We are also in danger of overfitting here. For starters, if we want to test this with 10-fold cross-validation, we will need to do a separate feature selection within each fold! That means we'll need to do the cross-validation manually instead of using cross_val_predict().
from sklearn.feature_selection import SelectPercentile, f_regression
from sklearn.model_selection import KFold
from sklearn.pipeline import Pipeline

# Build a tiny pipeline that does feature selection (top 20% of features),
# and then prediction with our linear svr model.
model = Pipeline([
    ('feature_selection', SelectPercentile(f_regression, percentile=20)),
    ('prediction', l_svr)
])

y_pred = []   # a container to catch the predictions from each fold
y_index = []  # just in case, the index for each prediction

# First we create 10 splits of the data
skf = KFold(n_splits=10, shuffle=True, random_state=123)

# For each split, assemble the train and test samples
for tr_ind, te_ind in skf.split(X_train):
    X_tr = X_train[tr_ind]
    y_tr = y_train_log[tr_ind]
    X_te = X_train[te_ind]
    y_index += list(te_ind)  # store the index of samples to predict

    # and run our pipeline
    model.fit(X_tr, y_tr)  # fit the data to the model using our mini pipeline
    predictions = model.predict(X_te).tolist()  # get the predictions for this fold
    y_pred += predictions  # add them to the list of predictions
Alrighty, let's see if only using the top 20% of features improves the model at all...
acc = r2_score(y_train_log[y_index], y_pred)
mae = mean_absolute_error(y_train_log[y_index], y_pred)
print('R2:', acc)
print('MAE:', mae)

sns.regplot(np.array(y_pred), y_train_log[y_index])
plt.xlabel('Predicted Log Age')
plt.ylabel('Log Age')
Nope, in fact it got a bit worse. It seems we're getting "value at the margins", so to speak. This is a very good example of how significance != prediction, as demonstrated in this figure from Bzdok et al., 2018 *bioRxiv*

![Bzdok2018](https://www.biorxiv.org/content/biorxiv/early/2018/05/21/327437/F1.large.jpg?width=800&height=600&carousel=1)

See here for an explanation of different feature selection options and how to implement them in sklearn: https://scikit-learn.org/stable/modules/feature_selection.html

And here is a thoughtful tutorial covering feature selection for novice machine learners: https://www.datacamp.com/community/tutorials/feature-selection-python

So there you have it. We've tried many different strategies, but most of our "tweaks" haven't really led to improvements in the model. This is not always the case, but it is not uncommon. Can you think of some reasons why? Moving on to our validation data, we probably should just stick to a basic model, though predicting log age might be a good idea!

Can our model predict age in completely unseen data?

Now that we've fit a model we think has possibly learned how to decode age based on rs-fmri signal, let's put it to the test. We will train our model on all of the training data, and try to predict the age of the subjects we left out at the beginning of this section.

Because we performed a log transformation on our training data, we will need to transform our testing data using the *same information!* But that's easy, because we stored our transformation in an object!
# Notice how we use the transformer that was fit to y_train and apply it
# to y_val, rather than creating a new transformer for the validation data
y_val_log = log_transformer.transform(y_val.values.reshape(-1, 1))[:, 0]
And now for the moment of truth! No cross-validation needed here. We simply fit the model with the training data and use it to predict the testing data. I'm so nervous. Let's just do it all in one cell!
l_svr.fit(X_train, y_train_log)      # fit to training data
y_pred = l_svr.predict(X_val)        # predict age using the validation data
acc = l_svr.score(X_val, y_val_log)  # get accuracy (r2)
mae = mean_absolute_error(y_val_log, y_pred)  # get mae

# print results
print('accuracy (r2) =', acc)
print('mae = ', mae)

# plot results
sns.regplot(y_pred, y_val_log)
plt.xlabel('Predicted Log Age')
plt.ylabel('Log Age')
***Wow!!*** Congratulations. You just trained a machine learning model that used real rs-fmri data to predict the age of real humans.

The proper thing to do at this point would be to repeat the train-validation split multiple times. This will ensure the results are not specific to this validation set, and will give you some confidence intervals around your results.

As an assignment, you can give that a try below. Create 10 different splits of the entire dataset, fit the model, and get your predictions. Then, plot the range of predictions.
# SPACE FOR YOUR ASSIGNMENT
So, it seems like something in this data does seem to be systematically related to age ... but what?

Interpreting model feature importances

Interpreting the feature importances of a machine learning model is a real can of worms. This is an area of active research. Unfortunately, it's hard to trust the feature importances of some models. You can find a whole tutorial on this subject here: http://gael-varoquaux.info/interpreting_ml_tuto/index.html

For now, we'll just eschew better judgement and take a look at our feature importances. While we can't ascribe any biological relevance to the features, it can still be helpful to know what the model is using to make its predictions. This is a good way to, for example, establish whether your model is actually learning based on a confound! Can you think of some examples?

We can access the feature importances (weights) used by the model:
l_svr.coef_
Let's plot these weights to see their distribution better.
plt.bar(range(l_svr.coef_.shape[-1]), l_svr.coef_[0])
plt.title('feature importances')
plt.xlabel('feature')
plt.ylabel('weight')
Or perhaps it will be easier to visualize this information as a matrix similar to the one we started with. We can use the correlation measure from before to perform an inverse transform.
correlation_measure.inverse_transform(l_svr.coef_).shape

from nilearn import plotting

feat_exp_matrix = correlation_measure.inverse_transform(l_svr.coef_)[0]

plotting.plot_matrix(feat_exp_matrix,
                     figure=(10, 8),
                     labels=range(feat_exp_matrix.shape[0]),
                     reorder=False,
                     tri='lower')
Let's see if we can throw those features onto an actual brain.First, we'll need to gather the coordinates of each ROI of our atlas
coords = plotting.find_parcellation_cut_coords(atlas_filename)
And now we can use our feature matrix and the wonders of nilearn to create a connectome map where each node is an ROI, and each connection is weighted by the importance of the feature to the model
plotting.plot_connectome(feat_exp_matrix, coords, colorbar=True)
Whoa!! That's...a lot to process. Maybe let's threshold the edges so that only the most important connections are visualized
plotting.plot_connectome(feat_exp_matrix, coords, colorbar=True, edge_threshold=0.035)
That's definitely an improvement, but it's still a bit hard to see what's going on. Nilearn has a new feature that lets us view this data interactively!
plotting.view_connectome(feat_exp_matrix, coords, threshold='98%')
Iterators

Often an important piece of data analysis is repeating a similar calculation, over and over, in an automated fashion. For example, you may have a table of names that you'd like to split into first and last, or perhaps dates that you'd like to convert to some standard format. One of Python's answers to this is the *iterator* syntax. We've seen this already with the ``range`` iterator:
for i in range(10):
    print(i, end=' ')
0 1 2 3 4 5 6 7 8 9
CC0-1.0
10-Iterators.ipynb
jas076100/WhirlwindTourOfPython-master
Here we're going to dig a bit deeper. It turns out that in Python 3, ``range`` is not a list, but is something called an *iterator*, and learning how it works is key to understanding a wide class of very useful Python functionality.

Iterating over lists

Iterators are perhaps most easily understood in the concrete case of iterating through a list. Consider the following:
for value in [2, 4, 6, 8, 10]:
    # do some operation
    print(value + 1, end=' ')
3 5 7 9 11
The familiar "``for x in y``" syntax allows us to repeat some operation for each value in the list.The fact that the syntax of the code is so close to its English description ("*for [each] value in [the] list*") is just one of the syntactic choices that makes Python such an intuitive language to learn and use.But the face-value behavior is not what's *really* happening.When you write something like "``for val in L``", the Python interpreter checks whether it has an *iterator* interface, which you can check yourself with the built-in ``iter`` function:
iter([2, 4, 6, 8, 10])
It is this iterator object that provides the functionality required by the ``for`` loop.The ``iter`` object is a container that gives you access to the next object for as long as it's valid, which can be seen with the built-in function ``next``:
I = iter([2, 4, 6, 8, 10])
print(next(I))
print(next(I))
print(next(I))
2
4
6
What is the purpose of this level of indirection? Well, it turns out this is incredibly useful, because it allows Python to treat things as lists that are *not actually lists*.

``range()``: A List Is Not Always a List

Perhaps the most common example of this indirect iteration is the ``range()`` function in Python 3 (named ``xrange()`` in Python 2), which returns not a list, but a special ``range()`` object:
range(10)
``range``, like a list, exposes an iterator:
iter(range(10))
So Python knows to treat it *as if* it's a list:
for i in range(10):
    print(i, end=' ')
0 1 2 3 4 5 6 7 8 9
The benefit of the iterator indirection is that *the full list is never explicitly created!*We can see this by doing a range calculation that would overwhelm our system memory if we actually instantiated it (note that in Python 2, ``range`` creates a list, so running the following will not lead to good things!):
N = 10 ** 12
for i in range(N):
    if i >= 10:
        break
    print(i, end=', ')
0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
If ``range`` were to actually create that list of one trillion values, it would occupy tens of terabytes of machine memory: a waste, given the fact that we're ignoring all but the first 10 values!In fact, there's no reason that iterators ever have to end at all!Python's ``itertools`` library contains a ``count`` function that acts as an infinite range:
from itertools import count

for i in count():
    if i >= 10:
        break
    print(i, end=', ')
0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
Had we not thrown in a loop break here, it would go on happily counting until the process is manually interrupted or killed (using, for example, ``ctrl-C``).

Useful Iterators

This iterator syntax is used nearly universally in Python built-in types, as well as the more data science-specific objects we'll explore in later sections. Here we'll cover some of the more useful iterators in the Python language:

``enumerate``

Often you need to iterate not only over the values in an array, but also keep track of the index. You might be tempted to do things this way:
L = [2, 4, 6, 8, 10]
for i in range(len(L)):
    print(i, L[i])
0 2
1 4
2 6
3 8
4 10
Although this does work, Python provides a cleaner syntax using the ``enumerate`` iterator:
for i, val in enumerate(L):
    print(i, val)
0 2
1 4
2 6
3 8
4 10
This is the more "Pythonic" way to enumerate the indices and values in a list.

``zip``

Other times, you may have multiple lists that you want to iterate over simultaneously. You could certainly iterate over the index as in the non-Pythonic example we looked at previously, but it is better to use the ``zip`` iterator, which zips together iterables:
L = [2, 4, 6, 8, 10]
R = [3, 6, 9, 12, 15]
for lval, rval in zip(L, R):
    print(lval, rval)
2 3
4 6
6 9
8 12
10 15
CC0-1.0
10-Iterators.ipynb
jas076100/WhirlwindTourOfPython-master
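``zip`` stops at the end of its shortest input; a quick check (sample lists made up for illustration):

```python
short = [1, 2]
letters = ['a', 'b', 'c', 'd']

# zip yields pairs only while every input still has a value
pairs = list(zip(short, letters))
print(pairs)  # [(1, 'a'), (2, 'b')]
```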
Any number of iterables can be zipped together, and if they are different lengths, the shortest will determine the length of the ``zip``.

``map`` and ``filter``

The ``map`` iterator takes a function and applies it to the values in an iterator:
# find the first 10 square numbers
square = lambda x: x ** 2
for val in map(square, range(10)):
    print(val, end=' ')
0 1 4 9 16 25 36 49 64 81
CC0-1.0
10-Iterators.ipynb
jas076100/WhirlwindTourOfPython-master
The ``filter`` iterator looks similar, except it only passes through values for which the filter function evaluates to True:
# find values up to 10 for which x % 2 is zero
is_even = lambda x: x % 2 == 0
for val in filter(is_even, range(10)):
    print(val, end=' ')
0 2 4 6 8
CC0-1.0
10-Iterators.ipynb
jas076100/WhirlwindTourOfPython-master
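``map`` and ``filter`` have a third companion, ``functools.reduce``, which folds an iterable down to a single value; a minimal sketch:

```python
from functools import reduce

# reduce applies the function cumulatively, left to right:
# ((((1 + 2) + 3) + 4) + 5)
total = reduce(lambda a, b: a + b, range(1, 6))
print(total)  # 15
```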
The ``map`` and ``filter`` functions, along with the ``reduce`` function (which lives in Python's ``functools`` module), are fundamental components of the *functional programming* style, which, while not a dominant programming style in the Python world, has its outspoken proponents (see, for example, the [pytoolz](https://toolz.readthedocs.org/en/latest/) library).

Iterators as function arguments

We saw in [``*args`` and ``**kwargs``: Flexible Arguments](*args-and-**kwargs:-Flexible-Arguments) that ``*args`` and ``**kwargs`` can be used to pass sequences and dictionaries to functions. It turns out that the ``*args`` syntax works not just with sequences, but with any iterator:
print(*range(10))
0 1 2 3 4 5 6 7 8 9
CC0-1.0
10-Iterators.ipynb
jas076100/WhirlwindTourOfPython-master
So, for example, we can get tricky and compress the ``map`` example from before into the following:
print(*map(lambda x: x ** 2, range(10)))
0 1 4 9 16 25 36 49 64 81
CC0-1.0
10-Iterators.ipynb
jas076100/WhirlwindTourOfPython-master
Using this trick lets us answer the age-old question that comes up in Python learners' forums: why is there no ``unzip()`` function which does the opposite of ``zip()``? If you lock yourself in a dark closet and think about it for a while, you might realize that the opposite of ``zip()`` is... ``zip()``! The key is that ``zip()`` can zip together any number of iterators or sequences. Observe:
L1 = (1, 2, 3, 4)
L2 = ('a', 'b', 'c', 'd')
z = zip(L1, L2)
print(*z)
z = zip(L1, L2)
new_L1, new_L2 = zip(*z)
print(new_L1, new_L2)
(1, 'a') (2, 'b') (3, 'c') (4, 'd')
(1, 2, 3, 4) ('a', 'b', 'c', 'd')
CC0-1.0
10-Iterators.ipynb
jas076100/WhirlwindTourOfPython-master
Ponder this for a while. If you understand why it works, you'll have come a long way in understanding Python iterators!

Specialized Iterators: ``itertools``

We briefly looked at the infinite ``range`` iterator, ``itertools.count``. The ``itertools`` module contains a whole host of useful iterators; it's well worth your while to explore the module to see what's available. As an example, consider the ``itertools.permutations`` function, which iterates over all permutations of a sequence:
from itertools import permutations

p = permutations(range(3))
print(*p)
(0, 1, 2) (0, 2, 1) (1, 0, 2) (1, 2, 0) (2, 0, 1) (2, 1, 0)
CC0-1.0
10-Iterators.ipynb
jas076100/WhirlwindTourOfPython-master
Similarly, the ``itertools.combinations`` function iterates over all unique combinations of ``N`` values within a list:
from itertools import combinations

c = combinations(range(4), 2)
print(*c)
(0, 1) (0, 2) (0, 3) (1, 2) (1, 3) (2, 3)
CC0-1.0
10-Iterators.ipynb
jas076100/WhirlwindTourOfPython-master
Somewhat related is the ``product`` iterator, which iterates over all sets of pairs between two or more iterables:
from itertools import product

p = product('ab', range(3))
print(*p)
('a', 0) ('a', 1) ('a', 2) ('b', 0) ('b', 1) ('b', 2)
CC0-1.0
10-Iterators.ipynb
jas076100/WhirlwindTourOfPython-master
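Two more ``itertools`` helpers worth exploring are ``chain``, which concatenates iterables lazily, and ``islice``, which slices any iterator; a quick sketch:

```python
from itertools import chain, islice, count

# chain strings iterables together without building an intermediate list
combined = list(chain([1, 2], (3, 4)))
print(combined)  # [1, 2, 3, 4]

# islice takes a finite slice from any iterator, even an infinite one
window = list(islice(count(100), 5))
print(window)  # [100, 101, 102, 103, 104]
```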
Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 L.A. Barba, G.F. Forsyth, C.D. Cooper.

Spreading out

We're back! This is the fourth notebook of _Spreading out: parabolic PDEs,_ Module 4 of the course [**"Practical Numerical Methods with Python"**](https://openedx.seas.gwu.edu/courses/course-v1:MAE+MAE6286+2017/about). In the [previous notebook](https://nbviewer.jupyter.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/04_spreadout/04_03_Heat_Equation_2D_Explicit.ipynb), we solved a 2D problem for the first time, using an explicit scheme. We know explicit schemes have stability constraints that might make them impractical in some cases, due to requiring a very small time step. Implicit schemes are unconditionally stable, offering the advantage of larger time steps; in [notebook 2](https://nbviewer.jupyter.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/04_spreadout/04_02_Heat_Equation_1D_Implicit.ipynb), we looked at the 1D implicit solution of diffusion. Already, that was quite a lot of work: setting up a matrix of coefficients and a right-hand-side vector, while taking care of the boundary conditions, and then solving the linear system. And now, we want to do implicit schemes in 2D—are you ready for this challenge?

2D Heat conduction

We already studied 2D heat conduction in the previous lesson, but now we want to work out how to build an implicit solution scheme. 
To refresh your memory, here is the heat equation again:$$\begin{equation}\frac{\partial T}{\partial t} = \alpha \left(\frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} \right)\end{equation}$$Our previous solution used a Dirichlet boundary condition on the left and bottom boundaries, with $T(x=0)=T(y=0)=100$, and a Neumann boundary condition with zero flux on the top and right edges, with $q_x=q_y=0$.$$\left( \left.\frac{\partial T}{\partial y}\right|_{y=0.1} = q_y \right) \quad \text{and} \quad \left( \left.\frac{\partial T}{\partial x}\right|_{x=0.1} = q_x \right)$$Figure 1 shows a sketch of the problem set up for our hypothetical computer chip with two hot edges and two insulated edges. Figure 1: Simplified microchip problem setup. Implicit schemes in 2D An implicit discretization will evaluate the spatial derivatives at the next time level, $t^{n+1}$, using the unknown values of the solution variable. For the 2D heat equation with central difference in space, that is written as:$$\begin{equation} \begin{split} & \frac{T^{n+1}_{i,j} - T^n_{i,j}}{\Delta t} = \\ & \quad \alpha \left( \frac{T^{n+1}_{i+1, j} - 2T^{n+1}_{i,j} + T^{n+1}_{i-1,j}}{\Delta x^2} + \frac{T^{n+1}_{i, j+1} - 2T^{n+1}_{i,j} + T^{n+1}_{i,j-1}}{\Delta y^2} \right) \\ \end{split}\end{equation}$$This equation looks better when we put what we *don't know* on the left and what we *do know* on the right. 
Make sure to work this out yourself on a piece of paper.$$\begin{equation} \begin{split} & -\frac{\alpha \Delta t}{\Delta x^2} \left( T^{n+1}_{i-1,j} + T^{n+1}_{i+1,j} \right) + \left( 1 + 2 \frac{\alpha \Delta t}{\Delta x^2} + 2 \frac{\alpha \Delta t}{\Delta y^2} \right) T^{n+1}_{i,j} \\& \quad \quad \quad -\frac{\alpha \Delta t}{\Delta y^2} \left( T^{n+1}_{i,j-1} + T^{n+1}_{i,j+1} \right) = T^n_{i,j} \\ \end{split}\end{equation}$$To make this discussion easier, let's assume that the mesh spacing is the same in both directions and $\Delta x=\Delta y = \delta$:$$\begin{equation}-T^{n+1}_{i-1,j} - T^{n+1}_{i+1,j} + \left(\frac{\delta^2}{\alpha \Delta t} + 4 \right) T^{n+1}_{i,j} - T^{n+1}_{i,j-1}-T^{n+1}_{i,j+1} = \frac{\delta^2}{\alpha \Delta t}T^n_{i,j}\end{equation}$$Just like in the one-dimensional case, $T_{i,j}$ appears in the equation for $T_{i-1,j}$, $T_{i+1,j}$, $T_{i,j+1}$ and $T_{i,j-1}$, and we can form a linear system to advance in time. But, how do we construct the matrix in this case? What are the $(i+1,j)$, $(i-1,j)$, $(i,j+1)$, and $(i,j-1)$ positions in the matrix?With explicit schemes we don't need to worry about these things. We can lay out the data just as it is in the physical problem. We had an array `T` that was a 2-dimensional matrix. To fetch the temperature in the next node in the $x$ direction $(T_{i+1,j})$ we just did `T[j,i+1]`, and likewise in the $y$ direction $(T_{i,j+1})$ was in `T[j+1,i]`. In implicit schemes, we need to think a bit harder about how the data is mapped to the physical problem.Also, remember from the [notebook on 1D-implicit schemes](https://nbviewer.jupyter.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/04_spreadout/04_02_Heat_Equation_1D_Implicit.ipynb) that the linear system had $N-2$ elements? We applied boundary conditions on nodes $i=0$ and $i=N-1$, and they were not modified by the linear system. 
In 2D, this becomes a bit more complicated. Let's use Figure 2, representing a set of grid nodes in two dimensions, to guide the discussion.

Figure 2: Layout of matrix elements in 2D problem

Say we have the 2D domain of size $L_x\times L_y$ discretized in $n_x$ and $n_y$ points. We can divide the nodes into boundary nodes (empty circles) and interior nodes (filled circles). The boundary nodes, as the name says, are on the boundary. They are the nodes with indices $(i=0,j)$, $(i=n_x-1,j)$, $(i,j=0)$, and $(i,j=n_y-1)$, and boundary conditions are enforced there. The interior nodes are not on the boundary, and the finite-difference equation acts on them. If we leave the boundary nodes aside for the moment, then the grid will have $(n_x-2)\cdot(n_y-2)$ nodes that need to be updated on each time step. This is the number of unknowns in the linear system. The matrix of coefficients will have $\left( (n_x-2)\cdot(n_y-2) \right)^2$ elements (most of them zero!). To construct the matrix, we will iterate over the nodes in an x-major order: index $i$ will run faster. The order will be

* $(i=1,j=1)$
* $(i=2,j=1)$ ...
* $(i=n_x-2,j=1)$
* $(i=1,j=2)$
* $(i=2,j=2)$ ...
* $(i=n_x-2,j=n_y-2)$

That is the ordering represented by the dotted line on Figure 2. Of course, if you prefer to organize the nodes differently, feel free to do so! Because we chose this ordering, the equations for nodes $(i-1,j)$ and $(i+1,j)$ will be just before and after $(i,j)$, respectively. But what about $(i,j-1)$ and $(i,j+1)$? Even though in the physical problem they are very close, the equations are $n_x-2$ places apart! This can tie your head in knots pretty quickly. _The only way to truly understand it is to make your own diagrams and annotations on a piece of paper and reconstruct this argument!_

Boundary conditions

Before we attempt to build the matrix, we need to think about boundary conditions. 
There is some bookkeeping to be done here, so bear with us for a moment. Say, for example, that the left and bottom boundaries have Dirichlet boundary conditions, and the top and right boundaries have Neumann boundary conditions. Let's look at each case:

**Bottom boundary:** The equation for $j=1$ (interior points adjacent to the bottom boundary) uses values from $j=0$, which are known. Let's put that on the right-hand side of the equation. We get this equation for all points across the $x$-axis that are adjacent to the bottom boundary:

$$\begin{equation} \begin{split} -T^{n+1}_{i-1,1} - T^{n+1}_{i+1,1} + \left( \frac{\delta^2}{\alpha \Delta t} + 4 \right) T^{n+1}_{i,1} - T^{n+1}_{i,2} \qquad & \\ = \frac{\delta^2}{\alpha \Delta t} T^n_{i,1} + T^{n+1}_{i,0} & \\ \end{split}\end{equation}$$

**Left boundary:** Like for the bottom boundary, the equation for $i=1$ (interior points adjacent to the left boundary) uses known values from $i=0$, and we will put that on the right-hand side:

$$\begin{equation} \begin{split} -T^{n+1}_{2,j} + \left( \frac{\delta^2}{\alpha \Delta t} + 4 \right) T^{n+1}_{1,j} - T^{n+1}_{1,j-1} - T^{n+1}_{1,j+1} \qquad & \\ = \frac{\delta^2}{\alpha \Delta t} T^n_{1,j} + T^{n+1}_{0,j} & \\ \end{split}\end{equation}$$

**Right boundary:** Say the boundary condition is $\left. \frac{\partial T}{\partial x} \right|_{x=L_x} = q_x$. Its finite-difference approximation is

$$\begin{equation} \frac{T^{n+1}_{n_x-1,j} - T^{n+1}_{n_x-2,j}}{\delta} = q_x\end{equation}$$

We can write $T^{n+1}_{n_x-1,j} = \delta q_x + T^{n+1}_{n_x-2,j}$ to get the finite difference equation for $i=n_x-2$:

$$\begin{equation} \begin{split} -T^{n+1}_{n_x-3,j} + \left( \frac{\delta^2}{\alpha \Delta t} + 3 \right) T^{n+1}_{n_x-2,j} - T^{n+1}_{n_x-2,j-1} - T^{n+1}_{n_x-2,j+1} \qquad & \\ = \frac{\delta^2}{\alpha \Delta t} T^n_{n_x-2,j} + \delta q_x & \\ \end{split}\end{equation}$$

Not sure about this? Grab pen and paper! _Please_, check this yourself. 
It will help you understand!

**Top boundary:** Neumann boundary conditions specify the derivative normal to the boundary: $\left. \frac{\partial T}{\partial y} \right|_{y=L_y} = q_y$. No need to repeat what we did for the right boundary, right? The equation for $j=n_y-2$ is

$$\begin{equation} \begin{split} -T^{n+1}_{i-1,n_y-2} - T^{n+1}_{i+1,n_y-2} + \left( \frac{\delta^2}{\alpha \Delta t} + 3 \right) T^{n+1}_{i,n_y-2} - T^{n+1}_{i,n_y-3} \qquad & \\ = \frac{\delta^2}{\alpha \Delta t} T^n_{i,n_y-2} + \delta q_y & \\ \end{split}\end{equation}$$

So far, we have then 5 possible cases: bottom, left, right, top, and interior points. Does this cover everything? What about corners?

**Bottom-left corner:** At $T_{1,1}$ there is a Dirichlet boundary condition at $i=0$ and $j=0$. This equation is:

$$\begin{equation} \begin{split} -T^{n+1}_{2,1} + \left( \frac{\delta^2}{\alpha \Delta t} + 4 \right) T^{n+1}_{1,1} - T^{n+1}_{1,2} \qquad & \\ = \frac{\delta^2}{\alpha \Delta t} T^n_{1,1} + T^{n+1}_{0,1} + T^{n+1}_{1,0} & \\ \end{split}\end{equation}$$

**Top-left corner:** At $T_{1,n_y-2}$ there is a Dirichlet boundary condition at $i=0$ and a Neumann boundary condition at $j=n_y-1$. This equation is:

$$\begin{equation} \begin{split} -T^{n+1}_{2,n_y-2} + \left( \frac{\delta^2}{\alpha \Delta t} + 3 \right) T^{n+1}_{1,n_y-2} - T^{n+1}_{1,n_y-3} \qquad & \\ = \frac{\delta^2}{\alpha \Delta t} T^n_{1,n_y-2} + T^{n+1}_{0,n_y-2} + \delta q_y & \\ \end{split}\end{equation}$$

**Top-right corner:** At $T_{n_x-2,n_y-2}$, there are Neumann boundary conditions at both $i=n_x-1$ and $j=n_y-1$. 
The finite difference equation is then

$$\begin{equation} \begin{split} -T^{n+1}_{n_x-3,n_y-2} + \left( \frac{\delta^2}{\alpha \Delta t} + 2 \right) T^{n+1}_{n_x-2,n_y-2} - T^{n+1}_{n_x-2,n_y-3} \qquad & \\ = \frac{\delta^2}{\alpha \Delta t} T^n_{n_x-2,n_y-2} + \delta(q_x + q_y) & \\ \end{split}\end{equation}$$

**Bottom-right corner:** To calculate $T_{n_x-2,1}$ we need to consider a Dirichlet boundary condition to the bottom and a Neumann boundary condition to the right. We will get a similar equation to the top-left corner!

$$\begin{equation} \begin{split} -T^{n+1}_{n_x-3,1} + \left( \frac{\delta^2}{\alpha \Delta t} + 3 \right) T^{n+1}_{n_x-2,1} - T^{n+1}_{n_x-2,2} \qquad & \\ = \frac{\delta^2}{\alpha \Delta t} T^n_{n_x-2,1} + T^{n+1}_{n_x-2,0} + \delta q_x & \\ \end{split}\end{equation}$$

Okay, now we are actually ready. We have checked every possible case!

The linear system

Like in the previous lesson introducing implicit schemes, we will solve a linear system at every time step:

$$[A][T^{n+1}_\text{int}] = [b]+[b]_{b.c.}$$

The coefficient matrix now takes some more work to figure out and to build in code. There is no substitute for you working this out patiently on paper!

The structure of the matrix can be described as a series of diagonal blocks, and lots of zeros elsewhere. Look at Figure 3, representing the block structure of the coefficient matrix, and refer back to Figure 2, showing the discretization grid in physical space. The first row of interior points, adjacent to the bottom boundary, generates the matrix block labeled $A_1$. The top row of interior points, adjacent to the top boundary, generates the matrix block labeled $A_3$. All other interior points in the grid generate similar blocks, labeled $A_2$ on Figure 3.

Figure 3: Sketch of coefficient-matrix blocks.

Figure 4: Grid points corresponding to each matrix-block type. 
The matrix blocks $A_1$, $A_2$, and $A_3$ are shown in the figures referenced above (the images are not reproduced here).

Vector $T^{n+1}_\text{int}$ contains the temperature of the interior nodes at the next time step. It is:

$$\begin{equation}T^{n+1}_\text{int} = \left[\begin{array}{c}T^{n+1}_{1,1}\\T^{n+1}_{2,1} \\\vdots \\T^{n+1}_{n_x-2,1} \\T^{n+1}_{1,2} \\\vdots \\T^{n+1}_{n_x-2,n_y-2}\end{array}\right]\end{equation}$$

Remember the x-major ordering we chose! Finally, the right-hand side is

\begin{equation}[b]+[b]_{b.c.} = \left[\begin{array}{c}\sigma^\prime T^n_{1,1} + T^{n+1}_{0,1} + T^{n+1}_{1,0} \\\sigma^\prime T^n_{2,1} + T^{n+1}_{2,0} \\\vdots \\\sigma^\prime T^n_{n_x-2,1} + T^{n+1}_{n_x-2,0} + \delta q_x \\\sigma^\prime T^n_{1,2} + T^{n+1}_{0,2} \\\vdots \\\sigma^\prime T^n_{n_x-2,n_y-2} + \delta(q_x + q_y)\end{array}\right]\end{equation}

where $\sigma^\prime = 1/\sigma = \delta^2/(\alpha \Delta t)$. The matrix looks very ugly, but it is important you understand it! Think about it. Can you answer:

* Why does a -1 factor appear $n_x-2$ columns after the diagonal? What about $n_x-2$ columns before the diagonal?
* Why in row $n_x-2$ does the position after the diagonal contain a 0?
* Why in row $n_x-2$ is the diagonal $\sigma^\prime + 3$ rather than $\sigma^\prime + 4$?
* Why in the last row is the diagonal $\sigma^\prime + 2$ rather than $\sigma^\prime + 4$?

If you can answer those questions, you are in good shape to continue!

Let's write a function that will generate the matrix and right-hand side for the heat conduction problem in the previous notebook. Remember, we had Dirichlet boundary conditions on the left and bottom, and a zero-flux Neumann boundary condition on the top and right $(q_x=q_y=0)$. Also, we'll import `scipy.linalg.solve` because we need to solve a linear system.
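The images of the blocks do not survive in this copy, but from the finite-difference equations derived above, the bottom-row block $A_1$ can be sketched (a reconstruction from the coefficients, not the original figure) as a tridiagonal block whose last diagonal entry drops to $\sigma^\prime + 3$ next to the Neumann boundary:

$$A_1 = \left[\begin{array}{cccc} \sigma^\prime + 4 & -1 & & \\ -1 & \sigma^\prime + 4 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & \sigma^\prime + 3 \end{array}\right], \qquad \text{with a } -I \text{ block immediately to its right, coupling row } j=1 \text{ to row } j=2.$$

$A_2$ and $A_3$ follow the same pattern, with the diagonal entries adjusted according to the boundary equations above.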
import numpy
from scipy import linalg


def lhs_operator(M, N, sigma):
    """
    Assembles and returns the implicit operator of the system
    for the 2D diffusion equation.
    We use a Dirichlet condition at the left and bottom boundaries
    and a Neumann condition (zero-gradient) at the right and top
    boundaries.

    Parameters
    ----------
    M : integer
        Number of interior points in the x direction.
    N : integer
        Number of interior points in the y direction.
    sigma : float
        Value of alpha * dt / dx**2.

    Returns
    -------
    A : numpy.ndarray
        The implicit operator as a 2D array of floats
        of size M*N by M*N.
    """
    A = numpy.zeros((M * N, M * N))
    for j in range(N):
        for i in range(M):
            I = j * M + i  # row index
            # Get index of south, west, east, and north points.
            south, west, east, north = I - M, I - 1, I + 1, I + M
            # Setup coefficients at corner points.
            if i == 0 and j == 0:  # bottom-left corner
                A[I, I] = 1.0 / sigma + 4.0
                A[I, east] = -1.0
                A[I, north] = -1.0
            elif i == M - 1 and j == 0:  # bottom-right corner
                A[I, I] = 1.0 / sigma + 3.0
                A[I, west] = -1.0
                A[I, north] = -1.0
            elif i == 0 and j == N - 1:  # top-left corner
                A[I, I] = 1.0 / sigma + 3.0
                A[I, south] = -1.0
                A[I, east] = -1.0
            elif i == M - 1 and j == N - 1:  # top-right corner
                A[I, I] = 1.0 / sigma + 2.0
                A[I, south] = -1.0
                A[I, west] = -1.0
            # Setup coefficients at side points (excluding corners).
            elif i == 0:  # left side
                A[I, I] = 1.0 / sigma + 4.0
                A[I, south] = -1.0
                A[I, east] = -1.0
                A[I, north] = -1.0
            elif i == M - 1:  # right side
                A[I, I] = 1.0 / sigma + 3.0
                A[I, south] = -1.0
                A[I, west] = -1.0
                A[I, north] = -1.0
            elif j == 0:  # bottom side
                A[I, I] = 1.0 / sigma + 4.0
                A[I, west] = -1.0
                A[I, east] = -1.0
                A[I, north] = -1.0
            elif j == N - 1:  # top side
                A[I, I] = 1.0 / sigma + 3.0
                A[I, south] = -1.0
                A[I, west] = -1.0
                A[I, east] = -1.0
            # Setup coefficients at interior points.
            else:
                A[I, I] = 1.0 / sigma + 4.0
                A[I, south] = -1.0
                A[I, west] = -1.0
                A[I, east] = -1.0
                A[I, north] = -1.0
    return A


def rhs_vector(T, M, N, sigma, Tb):
    """
    Assembles and returns the right-hand side vector of the system
    for the 2D diffusion equation.
    We use a Dirichlet condition at the left and bottom boundaries
    and a Neumann condition (zero-gradient) at the right and top
    boundaries.

    Parameters
    ----------
    T : numpy.ndarray
        The temperature distribution as a 1D array of floats.
    M : integer
        Number of interior points in the x direction.
    N : integer
        Number of interior points in the y direction.
    sigma : float
        Value of alpha * dt / dx**2.
    Tb : float
        Boundary value for Dirichlet conditions.

    Returns
    -------
    b : numpy.ndarray
        The right-hand side vector as a 1D array of floats
        of size M*N.
    """
    b = 1.0 / sigma * T
    # Add Dirichlet term at points located next
    # to the left and bottom boundaries.
    for j in range(N):
        for i in range(M):
            I = j * M + i
            if i == 0:
                b[I] += Tb
            if j == 0:
                b[I] += Tb
    return b
_____no_output_____
CC-BY-3.0
lessons/04_spreadout/04_04_Heat_Equation_2D_Implicit.ipynb
Fluidentity/numerical-mooc
The solution of the linear system $(T^{n+1}_\text{int})$ contains the temperatures of the interior points at the next time step in a 1D array. We will also create a function that will take the values of $T^{n+1}_\text{int}$ and put them in a 2D array that resembles the physical domain.
def map_1d_to_2d(T_1d, nx, ny, Tb):
    """
    Maps a 1D array of the temperature at the interior points
    to a 2D array that includes the boundary values.

    Parameters
    ----------
    T_1d : numpy.ndarray
        The temperature at the interior points as a 1D array of floats.
    nx : integer
        Number of points in the x direction of the domain.
    ny : integer
        Number of points in the y direction of the domain.
    Tb : float
        Boundary value for Dirichlet conditions.

    Returns
    -------
    T : numpy.ndarray
        The temperature distribution in the domain
        as a 2D array of size ny by nx.
    """
    T = numpy.zeros((ny, nx))
    # Get the value at interior points.
    T[1:-1, 1:-1] = T_1d.reshape((ny - 2, nx - 2))
    # Use Dirichlet condition at left and bottom boundaries.
    T[:, 0] = Tb
    T[0, :] = Tb
    # Use Neumann condition at right and top boundaries.
    T[:, -1] = T[:, -2]
    T[-1, :] = T[-2, :]
    return T
_____no_output_____
CC-BY-3.0
lessons/04_spreadout/04_04_Heat_Equation_2D_Implicit.ipynb
Fluidentity/numerical-mooc
And to advance in time, we will use
def btcs_implicit_2d(T0, nt, dt, dx, alpha, Tb):
    """
    Computes and returns the distribution of the temperature
    after a given number of time steps.
    The 2D diffusion equation is integrated using Euler implicit
    in time and central differencing in space, with a Dirichlet
    condition at the left and bottom boundaries and a Neumann
    condition (zero-gradient) at the right and top boundaries.

    Parameters
    ----------
    T0 : numpy.ndarray
        The initial temperature distribution as a 2D array of floats.
    nt : integer
        Number of time steps to compute.
    dt : float
        Time-step size.
    dx : float
        Grid spacing in the x and y directions.
    alpha : float
        Thermal diffusivity of the plate.
    Tb : float
        Boundary value for Dirichlet conditions.

    Returns
    -------
    T : numpy.ndarray
        The temperature distribution as a 2D array of floats.
    """
    # Get the number of points in each direction.
    ny, nx = T0.shape
    # Get the number of interior points in each direction.
    M, N = nx - 2, ny - 2
    # Compute the constant sigma.
    sigma = alpha * dt / dx**2
    # Create the implicit operator of the system.
    A = lhs_operator(M, N, sigma)
    # Integrate in time.
    T = T0[1:-1, 1:-1].flatten()  # interior points as a 1D array
    I, J = int(M / 2), int(N / 2)  # indices of the center
    for n in range(nt):
        # Compute the right-hand side of the system.
        b = rhs_vector(T, M, N, sigma, Tb)
        # Solve the system with scipy.linalg.solve.
        T = linalg.solve(A, b)
        # Check if the center of the domain has reached T = 70C.
        if T[J * M + I] >= 70.0:
            break
    print('[time step {}] Center at T={:.2f} at t={:.2f} s'
          .format(n + 1, T[J * M + I], (n + 1) * dt))
    # Returns the temperature in the domain as a 2D array.
    return map_1d_to_2d(T, nx, ny, Tb)
_____no_output_____
CC-BY-3.0
lessons/04_spreadout/04_04_Heat_Equation_2D_Implicit.ipynb
Fluidentity/numerical-mooc
Remember, we want the function to tell us when the center of the plate reaches $70^\circ C$.

Dig deeper

For demonstration purposes, these functions are very explicit. But you can see a trend here, right? Say we start with a matrix with `1/sigma+4` in the main diagonal, and `-1` on the 4 other corresponding diagonals. Now, we have to modify the matrix only where the boundary conditions have an effect. We saw the impact of the Dirichlet and Neumann boundary conditions on each position of the matrix; we just need to know in which position to perform those changes. A function that maps `i` and `j` into `row_number` would be handy, right? How about `row_number = (j-1)*(nx-2)+(i-1)`? By feeding `i` and `j` to that equation, you know exactly where to operate on the matrix. For example, `i=nx-2, j=2`, which is in row `row_number = 2*nx-5`, is next to a Neumann boundary condition: we have to subtract one out of the main diagonal (`A[2*nx-5,2*nx-5]-=1`), and put a zero in the next column (`A[2*nx-5,2*nx-4]=0`). This way, the function can become much simpler! Can you use this information to construct a more general function `lhs_operator`? Can you make it such that the type of boundary condition is an input to the function?

Heat diffusion in 2D

Let's recast the 2D heat conduction from the previous notebook, and solve it with an implicit scheme.
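As a hedged sketch of that "start from a generic matrix, then patch the boundaries" idea (the names `row_number` and `base_operator` are illustrative, not part of the notebook's code; plain Python lists are used here instead of numpy for a self-contained example):

```python
def row_number(i, j, nx):
    """Map interior grid indices (i, j), both starting at 1, to a matrix row."""
    return (j - 1) * (nx - 2) + (i - 1)

def base_operator(M, N, sigma):
    """Generic operator before boundary patches: 1/sigma + 4 on the
    diagonal, -1 on the four neighbor diagonals where a neighbor exists."""
    size = M * N
    d = 1.0 / sigma + 4.0
    A = [[0.0] * size for _ in range(size)]
    for I in range(size):
        A[I][I] = d
        if I % M != 0:        # west neighbor exists
            A[I][I - 1] = -1.0
        if I % M != M - 1:    # east neighbor exists
            A[I][I + 1] = -1.0
        if I >= M:            # south neighbor exists
            A[I][I - M] = -1.0
        if I < size - M:      # north neighbor exists
            A[I][I + M] = -1.0
    # A Neumann patch would then subtract 1.0 from A[I][I] for rows with
    # i = nx-2 (right column) or j = ny-2 (top row), located via row_number.
    return A

# The mapping places (i-1,j)/(i+1,j) one column away and (i,j-1)/(i,j+1)
# exactly nx-2 columns away, as discussed above.
nx = 6  # hypothetical grid size, so nx - 2 = 4 interior points per row
assert row_number(3, 2, nx) - row_number(2, 2, nx) == 1
assert row_number(2, 3, nx) - row_number(2, 2, nx) == nx - 2

A = base_operator(4, 4, 0.25)  # sigma = 0.25 -> diagonal 1/0.25 + 4 = 8
print(A[5][5], A[5][4], A[5][1])  # 8.0 -1.0 -1.0
```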
# Set parameters.
Lx = 0.01  # length of the plate in the x direction
Ly = 0.01  # length of the plate in the y direction
nx = 21  # number of points in the x direction
ny = 21  # number of points in the y direction
dx = Lx / (nx - 1)  # grid spacing in the x direction
dy = Ly / (ny - 1)  # grid spacing in the y direction
alpha = 1e-4  # thermal diffusivity

# Define the locations along a gridline.
x = numpy.linspace(0.0, Lx, num=nx)
y = numpy.linspace(0.0, Ly, num=ny)

# Compute the initial temperature distribution.
Tb = 100.0  # temperature at the left and bottom boundaries
T0 = 20.0 * numpy.ones((ny, nx))
T0[:, 0] = Tb
T0[0, :] = Tb
_____no_output_____
CC-BY-3.0
lessons/04_spreadout/04_04_Heat_Equation_2D_Implicit.ipynb
Fluidentity/numerical-mooc
We are ready to go!
# Set the time-step size.
sigma = 0.25
dt = sigma * min(dx, dy)**2 / alpha  # time-step size
nt = 300  # number of time steps to compute

# Compute the temperature distribution over the plate.
T = btcs_implicit_2d(T0, nt, dt, dx, alpha, Tb)
[time step 257] Center at T=70.00 at t=0.16 s
CC-BY-3.0
lessons/04_spreadout/04_04_Heat_Equation_2D_Implicit.ipynb
Fluidentity/numerical-mooc
And plot,
from matplotlib import pyplot
%matplotlib inline

# Set the font family and size to use for Matplotlib figures.
pyplot.rcParams['font.family'] = 'serif'
pyplot.rcParams['font.size'] = 16

# Plot the filled contour of the temperature.
pyplot.figure(figsize=(8.0, 5.0))
pyplot.xlabel('x [m]')
pyplot.ylabel('y [m]')
levels = numpy.linspace(20.0, 100.0, num=51)
contf = pyplot.contourf(x, y, T, levels=levels)
cbar = pyplot.colorbar(contf)
cbar.set_label('Temperature [C]')
pyplot.axis('scaled', adjustable='box');
_____no_output_____
CC-BY-3.0
lessons/04_spreadout/04_04_Heat_Equation_2D_Implicit.ipynb
Fluidentity/numerical-mooc
Try this out with different values of `sigma`! You'll see that it will always give a stable solution! Does this result match the explicit scheme from the previous notebook? Do they take the same amount of time to reach $70^\circ C$ in the center of the plate? Now that we can use higher values of `sigma`, we need fewer time steps for the center of the plate to reach $70^\circ C$! Of course, we need to be careful that `dt` is small enough to resolve the physics correctly.

---

The cell below loads the style of the notebook
from IPython.core.display import HTML

css_file = '../../styles/numericalmoocstyle.css'
HTML(open(css_file, 'r').read())
_____no_output_____
CC-BY-3.0
lessons/04_spreadout/04_04_Heat_Equation_2D_Implicit.ipynb
Fluidentity/numerical-mooc
DATA 512: A1 Data Curation Assignment

By: Megan Nalani Chun

Step 1: Gathering the data

Gather Wikipedia traffic from Jan 1, 2008 - August 30, 2020:
- Legacy Pagecounts API provides desktop and mobile traffic data from Dec. 2007 - July 2016
- Pageviews API provides desktop, mobile web, and mobile app traffic data from July 2015 - last month

First, import the json and requests libraries to call the Pagecounts and Pageviews APIs and save the output in json format.
import json import requests
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
Second, set the location of the endpoints and header information. This information is needed to call the Pagecounts and Pageviews APIs.
endpoint_legacy = 'https://wikimedia.org/api/rest_v1/metrics/legacy/pagecounts/aggregate/{project}/{access-site}/{granularity}/{start}/{end}'

endpoint_pageviews = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}'

headers = {
    'User-Agent': 'https://github.com/NalaniKai/',
    'From': 'nalani23@uw.edu'
}
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
Third, define a function to call the APIs taking in the endpoint and parameters. This function returns the data in json format.
def api_call(endpoint, parameters):
    call = requests.get(endpoint.format(**parameters), headers=headers)
    response = call.json()
    return response
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
Fourth, define a function to make the api_call() with the correct parameters and save the output to a json file named in the format apiname_accesstype_startyearmonth_endyearmonth.json
def get_data(api, access_dict, params, endpoint, access_name):
    start = 0  # index of start date
    end = 1  # index of end date
    year_month = 6  # size of YYYYMM
    for access_type, start_end_dates in access_dict.items():  # get data for all access types in API
        params[access_name] = access_type
        data = api_call(endpoint, params)
        with open(api + "_" + access_type + "_" + start_end_dates[start][:year_month] + "_" + start_end_dates[end][:year_month] + ".json", 'w') as f:
            json.dump(data, f)  # save data
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
Fifth, define the parameters for the legacy page count API and call the get_data function. This will pull the data and save it.
api = "pagecounts"
access_type_legacy = {"desktop-site": ["2008010100", "2016070100"],  # access type: start year_month, end year_month
                      "mobile-site": ["2014100100", "2016070100"]}  # used to save outputs with correct filenames

# https://wikimedia.org/api/rest_v1/#!/Legacy_data/get_metrics_legacy_pagecounts_aggregate_project_access_site_granularity_start_end
params_legacy = {"project": "en.wikipedia.org",
                 "granularity": "monthly",
                 "start": "2008010100",
                 "end": "2016080100"  # will get data through July 2016
                 }

get_data(api, access_type_legacy, params_legacy, endpoint_legacy, "access-site")
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
Sixth, define the parameters for the page views API and call the get_data function. This will pull the data and save it.
api = "pageviews"
start_end_dates = ["2015070100", "2020080100"]  # start year_month, end year_month
access_type_pageviews = {"desktop": start_end_dates,  # access type: start year_month, end year_month
                         "mobile-app": start_end_dates,
                         "mobile-web": start_end_dates
                         }

# https://wikimedia.org/api/rest_v1/#!/Pageviews_data/get_metrics_pageviews_aggregate_project_access_agent_granularity_start_end
params_pageviews = {"project": "en.wikipedia.org",
                    "access": "mobile-web",
                    "agent": "user",  # remove crawler traffic
                    "granularity": "monthly",
                    "start": "2008010100",
                    "end": '2020090100'  # will get data through August 2020
                    }

get_data(api, access_type_pageviews, params_pageviews, endpoint_pageviews, "access")
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
Step 2: Processing the data

First, create a function to read in all data files and extract the list of records from the `items` field.
def read_json(filename):
    with open(filename, 'r') as f:
        return json.load(f)["items"]
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
Second, use the read_json function to get a list of records for each file.
pageviews_mobile_web = read_json("pageviews_mobile-web_201507_202008.json")
pageviews_mobile_app = read_json("pageviews_mobile-app_201507_202008.json")
pageviews_desktop = read_json("pageviews_desktop_201507_202008.json")
pagecounts_mobile = read_json("pagecounts_mobile-site_201410_201607.json")
pagecounts_desktop = read_json("pagecounts_desktop-site_200801_201607.json")
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
Third, create a total mobile traffic count for each month using the mobile-app and mobile-web data from the pageviews API. The list of [timestamp, view_count] pairs data structure will enable easy transformation into a dataframe.
pageviews_mobile = [[r1["timestamp"], r0["views"] + r1["views"]]
                    for r0 in pageviews_mobile_web
                    for r1 in pageviews_mobile_app
                    if r0["timestamp"] == r1["timestamp"]]
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
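The nested comprehension above compares every web record with every app record, which is O(n²) in the number of months. A hypothetical faster variant (my own sketch, not the notebook's code) indexes one list by timestamp first so the join is linear:

```python
# Toy data standing in for the real API records (the real lists have more fields).
web = [{"timestamp": "2015070100", "views": 100}, {"timestamp": "2015080100", "views": 120}]
app = [{"timestamp": "2015070100", "views": 30}, {"timestamp": "2015080100", "views": 40}]

# Build a timestamp -> views lookup for one side, then do a single pass over the other.
app_by_ts = {r["timestamp"]: r["views"] for r in app}
mobile = [[r["timestamp"], r["views"] + app_by_ts[r["timestamp"]]]
          for r in web if r["timestamp"] in app_by_ts]
print(mobile)  # [['2015070100', 130], ['2015080100', 160]]
```

For the monthly data here the O(n²) join is perfectly fine; the dict-based version only matters at larger scales.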
Fourth, get the timestamps and values in the [timestamp, view_count] format for the desktop pageviews, desktop pagecounts, and mobile pagecounts.
pageviews_desktop = [[record["timestamp"], record["views"]] for record in pageviews_desktop]
pagecounts_desktop = [[record["timestamp"], record["count"]] for record in pagecounts_desktop]
pagecounts_mobile = [[record["timestamp"], record["count"]] for record in pagecounts_mobile]
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
Fifth, import pandas library and transform data into dataframes.
import pandas as pd

pageview_desktop_views = pd.DataFrame(pageviews_desktop, columns=["timestamp", "pageview_desktop_views"])
pageview_mobile_views = pd.DataFrame(pageviews_mobile, columns=["timestamp", "pageview_mobile_views"])
pagecounts_desktop = pd.DataFrame(pagecounts_desktop, columns=["timestamp", "pagecount_desktop_views"])
pagecounts_mobile = pd.DataFrame(pagecounts_mobile, columns=["timestamp", "pagecount_mobile_views"])
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
Sixth, join page view dataframes and calculate total for all views.
df_pageviews = pd.merge(pageview_desktop_views, pageview_mobile_views, how="outer", on="timestamp")
df_pageviews["pageview_all_views"] = df_pageviews["pageview_desktop_views"] + df_pageviews["pageview_mobile_views"]
df_pageviews.head()
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
Seventh, join page count dataframes. Then fill in NaN values with 0 to calculate total for all counts.
df_pagecounts = pd.merge(pagecounts_desktop, pagecounts_mobile, how="outer", on="timestamp")
df_pagecounts["pagecount_mobile_views"] = df_pagecounts["pagecount_mobile_views"].fillna(0)
df_pagecounts["pagecount_all_views"] = df_pagecounts["pagecount_desktop_views"] + df_pagecounts["pagecount_mobile_views"]
df_pagecounts.head()
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
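The outer merge plus `fillna(0)` pattern above keeps months that appear in only one source and treats the missing side as zero traffic. A small self-contained illustration of the same pattern on toy data:

```python
import pandas as pd

# Desktop counts start earlier than mobile counts, as in the real data.
desktop = pd.DataFrame({"timestamp": ["200801", "201410"], "pagecount_desktop_views": [10.0, 20.0]})
mobile = pd.DataFrame({"timestamp": ["201410"], "pagecount_mobile_views": [5.0]})

# Outer join keeps '200801' even though mobile has no row for it; fill the gap with 0.
df = pd.merge(desktop, mobile, how="outer", on="timestamp")
df["pagecount_mobile_views"] = df["pagecount_mobile_views"].fillna(0)
df["pagecount_all_views"] = df["pagecount_desktop_views"] + df["pagecount_mobile_views"]
print(df)
```

Without the `fillna(0)`, the NaN in the mobile column would propagate and make the total NaN for the desktop-only months.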
Eighth, join page count and page view dataframes into one table, filling in missing values with 0.
df = pd.merge(df_pagecounts, df_pageviews, how="outer", on="timestamp")
df = df.fillna(0)
df.head()
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
Ninth, separate the timestamp into the year and month in YYYY and MM format for all the data. Remove the timestamp column.
df["year"] = df["timestamp"].apply(lambda date: date[:4])
df["month"] = df["timestamp"].apply(lambda date: date[4:6])
df.drop("timestamp", axis=1, inplace=True)
df.head()
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
Tenth, save processed data to csv file without the index column.
df.to_csv("en-wikipedia_traffic_200801-202008.csv", index=False)
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
Step 3: Analyze the data First, fill 0 values with numpy.nan values so these values are not plotted on the chart.
import numpy as np

df.replace(0, np.nan, inplace=True)
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
Second, transform the year and month into a datetime.date type which will be used for the x-axis in the chart.
from datetime import date

# note: this rebinds the name 'date' (shadowing datetime.date) once the apply completes
date = df.apply(lambda r: date(int(r["year"]), int(r["month"]), 1), axis=1)
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
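As an aside, the row-wise `apply` above can also be done in one vectorized call: `pd.to_datetime` accepts a DataFrame with `year`, `month`, and `day` columns. A small sketch on toy data (equivalent in spirit, not the notebook's code):

```python
import pandas as pd

# Toy year/month columns stored as strings, as after the timestamp split.
df = pd.DataFrame({"year": ["2015", "2020"], "month": ["07", "08"]})

# Add a constant day column, cast to int, and let to_datetime assemble the dates.
dates = pd.to_datetime(df.assign(day=1).astype(int))
print(dates.dt.date.tolist())
```

This returns Timestamps rather than `datetime.date` objects, which matplotlib also handles directly on a date axis.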
Third, divide all page view counts by 1e6 so the chart is easier to read. Y-axis will be the values shown x 1,000,000.
pc_mobile = df["pagecount_mobile_views"] / 1e6
pv_mobile = df["pageview_mobile_views"] / 1e6
pc_desktop = df["pagecount_desktop_views"] / 1e6
pv_desktop = df["pageview_desktop_views"] / 1e6
pv_total = df["pageview_all_views"] / 1e6
pc_total = df["pagecount_all_views"] / 1e6
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
Fourth, plot the data in a time series for desktop (main site), mobile, and the total all up. The dashed lines are data from the pagecount API and the solid lines are the data from the pageview API without crawler traffic.
import matplotlib.pyplot as plt
from matplotlib.dates import YearLocator

#create plot and assign time series to plot
fig, ax1 = plt.subplots(figsize=(18,6))
ax1.plot(date, pc_desktop, label="main site", color="green", ls="--")
ax1.plot(date, pv_desktop, label="_Hidden label", color="green", ls="-")
ax1.plot(date, pc_mobile, label="mobile site", color="blue", ls="--")
ax1.plot(date, pv_mobile, label="_Hidden label", color="blue", ls="-")
ax1.plot(date, pc_total, label="total", color="black", ls="--")
ax1.plot(date, pv_total, label="_Hidden label", color="black", ls="-")
ax1.xaxis.set_major_locator(YearLocator()) #show every year on the x-axis

#set caption
caption = "May 2015: a new pageview definition took effect, which eliminated all crawler traffic. Solid lines mark new definition."
fig.text(.5, .01, caption, ha='center', color="red")

#set labels for x-axis, y-axis, and title
plt.xlabel("Date")
plt.ylabel("Page Views (x 1,000,000)")
plt.title("Page Views on English Wikipedia (x 1,000,000)")
plt.ylim(bottom=0) #start y-axis at 0 ('ymin' was removed in matplotlib 3.0)
plt.grid(True) #turn on background grid
plt.legend(loc="upper left")

#save chart to png file
filename = "Time Series.png"
plt.savefig(filename)

plt.show() #display chart
_____no_output_____
MIT
data-512-a1/StepByStepNotebook.ipynb
NalaniKai/data-512
Square wave with a two-second period (Figure 3.24)
# Definitions
import matplotlib.pyplot as plt
import numpy as np

t = np.linspace(0, 10, 1000)
freqs = np.r_[0.5:6.5]
fig, ax = plt.subplots(7, 1, sharex=True, sharey=True)
sumSines = np.zeros(t.shape)
cnt = 1
for f in freqs:
    s = np.sin(2*np.pi*f*t)/(2*f)
    ax[cnt].plot(t, s)
    sumSines += s
    cnt += 1
ax[0].plot(t, sumSines, 'r')
plt.show()
_____no_output_____
Apache-2.0
tr/Sekil_3_26.ipynb
bosmanoglu/Digital_Image_Processing_Book
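The loop sums sin(2πft)/(2f) over f = 0.5, 1.5, ..., 5.5 Hz, i.e. the odd harmonics of a 0.5 Hz fundamental, which is (up to a constant factor of 4/π) the Fourier series of a square wave with a two-second period. A quick numeric check of that claim (my own, not from the notebook): at t = 0.5 s each term reduces to sin((2k+1)π/2)/(2k+1), so the partial sum is the alternating series 1 − 1/3 + 1/5 − ..., which converges to π/4:

```python
import numpy as np

# Same partial sum as the plotting cell, evaluated at a single point t = 0.5 s.
freqs = np.r_[0.5:6.5]          # 0.5, 1.5, ..., 5.5 Hz
t = 0.5
partial = sum(np.sin(2*np.pi*f*t)/(2*f) for f in freqs)
print(partial, np.pi/4)
```

With only six harmonics the partial sum is already within about 0.05 of π/4, which is why the red summed trace in the top panel looks square.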
load data and embedding
# load config
import yaml
import pickle

with open('config.yaml', 'r') as f:
    conf = yaml.safe_load(f)  # safe_load: yaml.load without a Loader is deprecated

BATCH_SIZE = conf["MODEL"]["BATCH_SIZE"]
MAX_EPOCHS = conf["MODEL"]["MAX_EPOCHS"]
WORD2VEC_MODEL = conf["EMBEDDING"]["WORD2VEC_MODEL"]

print('load data ...')
X_train = np.load('data/X_train.npy')
y_train = np.load('data/y_train.npy')
X_test = np.load('data/X_test.npy')
y_test = np.load('data/y_test.npy')
X_val = np.load('data/X_val.npy')
y_val = np.load('data/y_val.npy')

with open('data/word_index.pkl', 'rb') as f:
    word_index = pickle.load(f)

# load embedding
from gensim import models
word2vec_model = models.KeyedVectors.load_word2vec_format(WORD2VEC_MODEL, binary=True)
embeddings_index, embedding_dim = get_embeddings_index(word2vec_model)
embedding_layer = get_embedding_layer(word_index, embeddings_index, embedding_dim, True)
word2vec_model = None
_____no_output_____
Apache-2.0
train.ipynb
jsandersen/WU-RNN
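The helpers `get_embeddings_index` and `get_embedding_layer` are defined elsewhere in the repository and are not shown here. As a hedged sketch of what such a helper typically does internally (an assumption on my part, not the notebook's actual code): build a (vocab + 1) × dim weight matrix, leaving row 0 for padding and zero rows for out-of-vocabulary words:

```python
import numpy as np

# Hypothetical embedding-matrix builder; the notebook's real helper may differ.
def build_embedding_matrix(word_index, embeddings_index, embedding_dim):
    # row 0 is reserved for the padding token, hence len(word_index) + 1 rows
    matrix = np.zeros((len(word_index) + 1, embedding_dim))
    for word, i in word_index.items():
        vector = embeddings_index.get(word)
        if vector is not None:       # out-of-vocabulary words stay all-zero
            matrix[i] = vector
    return matrix

word_index = {"good": 1, "bad": 2, "unknown": 3}
embeddings_index = {"good": np.array([0.1, 0.2]), "bad": np.array([-0.1, -0.2])}
matrix = build_embedding_matrix(word_index, embeddings_index, 2)
print(matrix.shape)  # (4, 2)
```

A matrix like this is what typically gets passed into a Keras `Embedding` layer as its initial weights, with trainability controlled by a flag like the final `True` argument above.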
train model
lstm = get_model(2, embedding_layer)

callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=2)

lstm.compile(
    loss='binary_crossentropy',
    optimizer=tf.keras.optimizers.Adam(0.0001),
    metrics=['accuracy'])

lstm.summary()

# train model
history = lstm.fit(
    X_train,
    y_train,
    batch_size=BATCH_SIZE,
    epochs=MAX_EPOCHS,
    callbacks=[callback],
    validation_data=(X_test, y_test)
)
_____no_output_____
Apache-2.0
train.ipynb
jsandersen/WU-RNN
history
from matplotlib import pyplot as plt

# "Accuracy" (with metrics=['accuracy'], tf.keras records the history keys
# as 'accuracy'/'val_accuracy', not the older 'acc'/'val_acc')
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()

# "Loss"
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
_____no_output_____
Apache-2.0
train.ipynb
jsandersen/WU-RNN
Define Functions
# Perform cross validation using xgboost
def xgboostcv(X, y, fold, n_estimators, lr, depth, n_jobs, gamma, min_cw, subsample, colsample):
    uid = np.unique(fold)
    model_pred = np.zeros(X.shape[0])
    model_valid_loss = np.zeros(len(uid))
    model_train_loss = np.zeros(len(uid))
    for i in uid:
        x_valid = X[fold==i]
        x_train = X[fold!=i]
        y_valid = y[fold==i]
        y_train = y[fold!=i]
        model = XGBRegressor(n_estimators=n_estimators,
                             learning_rate=lr,
                             max_depth = depth,
                             n_jobs = n_jobs,
                             gamma = gamma,
                             min_child_weight = min_cw,
                             subsample = subsample,
                             colsample_bytree = colsample,
                             random_state=1234)
        model.fit(x_train, y_train)
        pred = model.predict(x_valid)
        model_pred[fold==i] = pred
        model_valid_loss[uid==i] = mean_squared_error(y_valid, pred)
        model_train_loss[uid==i] = mean_squared_error(y_train, model.predict(x_train))
    return {'pred':model_pred, 'valid_loss':model_valid_loss, 'train_loss':model_train_loss}

# Compute MSE for xgboost cross validation
def xgboostcv_mse(n, p, depth, g, min_cw, subsample, colsample):
    model_cv = xgboostcv(X_train, y_train, fold_train,
                         int(n)*100, 10**p, int(depth), n_nodes,
                         10**g, min_cw, subsample, colsample)
    MSE = mean_squared_error(y_train, model_cv['pred'])
    return -MSE

# Display model performance metrics for each cv iteration
def cv_performance(model, y, fold):
    uid = np.unique(fold)
    pred = np.round(model['pred'])
    y = y.reshape(-1)
    model_valid_mse = np.zeros(len(uid))
    model_valid_mae = np.zeros(len(uid))
    model_valid_r2 = np.zeros(len(uid))
    for i in uid:
        pred_i = pred[fold==i]
        y_i = y[fold==i]
        model_valid_mse[uid==i] = mean_squared_error(y_i, pred_i)
        model_valid_mae[uid==i] = np.abs(pred_i-y_i).mean()
        model_valid_r2[uid==i] = r2_score(y_i, pred_i)
    results = pd.DataFrame(0, index=uid, columns=['valid_mse', 'valid_mae', 'valid_r2', 'valid_loss', 'train_loss'])
    results['valid_mse'] = model_valid_mse
    results['valid_mae'] = model_valid_mae
    results['valid_r2'] = model_valid_r2
    results['valid_loss'] = model['valid_loss']
    results['train_loss'] = model['train_loss']
    print(results)

# Display overall model performance metrics
def cv_overall_performance(y, y_pred):
    overall_MSE = mean_squared_error(y, y_pred)
    overall_MAE = (np.abs(y_pred-y)).mean()
    overall_RMSE = np.sqrt(np.square(y_pred-y).mean())
    overall_R2 = r2_score(y, y_pred)
    print("XGB overall MSE: %0.4f" %overall_MSE)
    print("XGB overall MAE: %0.4f" %overall_MAE)
    print("XGB overall RMSE: %0.4f" %overall_RMSE)
    print("XGB overall R^2: %0.4f" %overall_R2)

# Plot variable importance
def plot_importance(model, columns):
    importances = pd.Series(model.feature_importances_, index = columns).sort_values(ascending=False)
    n = len(columns)
    plt.figure(figsize=(10,15))
    plt.barh(np.arange(n)+0.5, importances)
    plt.yticks(np.arange(0.5,n+0.5), importances.index)
    plt.tick_params(axis='both', which='major', labelsize=22)
    plt.ylim([0,n])
    plt.gca().invert_yaxis()
    plt.savefig('variable_importance.png', dpi = 150)

# Save xgboost model
def save(obj, path):
    pkl_fl = open(path, 'wb')
    pickle.dump(obj, pkl_fl)
    pkl_fl.close()

# Load xgboost model
def load(path):
    f = open(path, 'rb')
    obj = pickle.load(f)
    f.close()
    return(obj)
_____no_output_____
BSD-3-Clause
hourly_model/XGB.ipynb
NREL/TrafficVolumeEstimation
Parameter Values
# Set a few values
validation_only = False # Whether to test the model with test data
n_nodes = 96 # Number of computing nodes used for hyperparameter tuning
trained = False # Whether a trained model exists

cols_drop = ['StationId', 'Date', 'PenRate', 'NumberOfLanes', 'Dir', 'FC', 'Month'] # Columns to be dropped

if trained:
    params = load('params.dat')
    xgb_cv = load('xgb_cv.dat')
    xgb = load('xgb.dat')
_____no_output_____
BSD-3-Clause
hourly_model/XGB.ipynb
NREL/TrafficVolumeEstimation
Read Data
if validation_only:
    raw_data_train = pd.read_csv("final_train_data.csv")
    data = raw_data_train.drop(cols_drop, axis=1)
    if 'Dir' in data.columns:
        data[['Dir']] = data[['Dir']].astype('category')
        one_hot = pd.get_dummies(data[['Dir']])
        data = data.drop(['Dir'], axis = 1)
        data = data.join(one_hot)
    if 'FC' in data.columns:
        data[['FC']] = data[['FC']].astype('category')
        one_hot = pd.get_dummies(data[['FC']])
        data = data.drop(['FC'], axis = 1)
        data = data.join(one_hot)
    week_dict = {"DayOfWeek": {'Monday': 1, 'Tuesday': 2, 'Wednesday': 3, 'Thursday': 4,
                               'Friday': 5, 'Saturday': 6, 'Sunday': 7}}
    data = data.replace(week_dict)
    X = data.drop(['Volume', 'fold'], axis=1)
    X_col = X.columns
    y = data[['Volume']]
    fold_train = data[['fold']].values.reshape(-1)
    X_train = X.values
    y_train = y.values
else:
    raw_data_train = pd.read_csv("final_train_data.csv")
    raw_data_test = pd.read_csv("final_test_data.csv")
    raw_data_test1 = pd.DataFrame(np.concatenate((raw_data_test.values, np.zeros(raw_data_test.shape[0]).reshape(-1, 1)), axis=1),
                                  columns = raw_data_test.columns.append(pd.Index(['fold'])))
    raw_data = pd.DataFrame(np.concatenate((raw_data_train.values, raw_data_test1.values), axis=0),
                            columns = raw_data_train.columns)
    data = raw_data.drop(cols_drop, axis=1)
    if 'Dir' in data.columns:
        data[['Dir']] = data[['Dir']].astype('category')
        one_hot = pd.get_dummies(data[['Dir']])
        data = data.drop(['Dir'], axis = 1)
        data = data.join(one_hot)
    if 'FC' in data.columns:
        data[['FC']] = data[['FC']].astype('category')
        one_hot = pd.get_dummies(data[['FC']])
        data = data.drop(['FC'], axis = 1)
        data = data.join(one_hot)
    week_dict = {"DayOfWeek": {'Monday': 1, 'Tuesday': 2, 'Wednesday': 3, 'Thursday': 4,
                               'Friday': 5, 'Saturday': 6, 'Sunday': 7}}
    data = data.replace(week_dict)
    X = data.drop(['Volume'], axis=1)
    y = data[['Volume']]
    X_train = X.loc[X.fold!=0, :]
    fold_train = X_train[['fold']].values.reshape(-1)
    X_col = X_train.drop(['fold'], axis = 1).columns
    X_train = X_train.drop(['fold'], axis = 1).values
    y_train = y.loc[X.fold!=0, :].values
    X_test = X.loc[X.fold==0, :]
    X_test = X_test.drop(['fold'], axis = 1).values
    y_test = y.loc[X.fold==0, :].values

X_col

# Explain variable names
X_name_dict = {'Temp': 'Temperature', 'WindSp': 'Wind Speed', 'Precip': 'Precipitation', 'Snow': 'Snow',
               'Long': 'Longitude', 'Lat': 'Latitude', 'NumberOfLanes': 'Number of Lanes',
               'SpeedLimit': 'Speed Limit', 'FRC': 'TomTom FRC', 'DayOfWeek': 'Day of Week',
               'Month': 'Month', 'Hour': 'Hour', 'AvgSp': 'Average Speed', 'ProbeCount': 'Probe Count',
               'Dir_E': 'Direction(East)', 'Dir_N': 'Direction(North)', 'Dir_S': 'Direction(South)', 'Dir_W': 'Direction(West)',
               'FC_3R': 'FHWA FC(3R)', 'FC_3U': 'FHWA FC(3U)', 'FC_4R': 'FHWA FC(4R)', 'FC_4U': 'FHWA FC(4U)',
               'FC_5R': 'FHWA FC(5R)', 'FC_5U': 'FHWA FC(5U)', 'FC_7R': 'FHWA FC(7R)', 'FC_7U': 'FHWA FC(7U)'}

data.head()

X_train.shape

if validation_only == False:
    print(X_test.shape)
_____no_output_____
BSD-3-Clause
hourly_model/XGB.ipynb
NREL/TrafficVolumeEstimation
Cross Validation & Hyperparameter Optimization
# Set hyperparameter ranges for Bayesian optimization
xgboostBO = BayesianOptimization(xgboostcv_mse,
                                 {'n': (1, 10),
                                  'p': (-4, 0),
                                  'depth': (2, 10),
                                  'g': (-3, 0),
                                  'min_cw': (1, 10),
                                  'subsample': (0.5, 1),
                                  'colsample': (0.5, 1)
                                  })

# Use Bayesian optimization to tune hyperparameters
import time
start_time = time.time()
xgboostBO.maximize(init_points=10, n_iter = 50)
print('-'*53)
print('Final Results')
print('XGBOOST: %f' % xgboostBO.max['target'])
print("--- %s seconds ---" % (time.time() - start_time))

# Save the hyperparameters that yield the highest model performance
params = xgboostBO.max['params']
save(params, 'params.dat')
params

# Perform cross validation using the optimal hyperparameters
xgb_cv = xgboostcv(X_train, y_train, fold_train,
                   int(params['n'])*100, 10**params['p'], int(params['depth']),
                   n_nodes, 10**params['g'], params['min_cw'],
                   params['subsample'], params['colsample'])

# Display cv results for each iteration
cv_performance(xgb_cv, y_train, fold_train)

# Display overall cv results
cv_pred = xgb_cv['pred']
cv_pred[cv_pred<0] = 0
cv_overall_performance(y_train.reshape(-1), cv_pred)

# Save the cv results
save(xgb_cv, 'xgb_cv.dat')
_____no_output_____
BSD-3-Clause
hourly_model/XGB.ipynb
NREL/TrafficVolumeEstimation
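The optimizer searches a continuous box, so the notebook maps each sampled point back to XGBoost's expected types: `n` and `depth` are floored to integers (with `n` scaled by 100 trees), and `p` and `g` are searched on a log10 scale. A small helper (my own, not the notebook's) makes that mapping explicit:

```python
# Decode a continuous sample from the Bayesian optimizer into XGBRegressor kwargs,
# mirroring the int(n)*100 / 10**p / int(depth) / 10**g transforms used above.
def decode_params(params):
    return {
        "n_estimators": int(params["n"]) * 100,
        "learning_rate": 10 ** params["p"],
        "max_depth": int(params["depth"]),
        "gamma": 10 ** params["g"],
        "min_child_weight": params["min_cw"],
        "subsample": params["subsample"],
        "colsample_bytree": params["colsample"],
    }

decoded = decode_params({"n": 3.7, "p": -2.0, "depth": 6.2, "g": -1.0,
                         "min_cw": 4.0, "subsample": 0.8, "colsample": 0.9})
print(decoded)
```

Searching the learning rate and gamma on a log scale is a common choice because their useful values span several orders of magnitude.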
Model Test
# Build an xgboost model using all the training data with the optimal hyperparameters
xgb = XGBRegressor(n_estimators=int(params['n'])*100,
                   learning_rate=10**params['p'],
                   max_depth = int(params['depth']),
                   n_jobs = n_nodes,
                   gamma = 10**params['g'],
                   min_child_weight = params['min_cw'],
                   subsample = params['subsample'],
                   colsample_bytree = params['colsample'],
                   random_state=1234)
xgb.fit(X_train, y_train)

# Test the trained model with test data
if validation_only == False:
    y_pred = xgb.predict(X_test)
    y_pred[y_pred<0] = 0
    cv_overall_performance(y_test.reshape(-1), y_pred)

# Plot variable importance
col_names = [X_name_dict[i] for i in X_col]
plot_importance(xgb, col_names)

# Save the trained xgboost model
save(xgb, 'xgb.dat')

# Produce cross validation estimates or estimates for test data
train_data_pred = pd.DataFrame(np.concatenate((raw_data_train.values, cv_pred.reshape(-1, 1)), axis=1),
                               columns = raw_data_train.columns.append(pd.Index(['PredVolume'])))
train_data_pred.to_csv('train_data_pred.csv', index = False)

if validation_only == False:
    test_data_pred = pd.DataFrame(np.concatenate((raw_data_test.values, y_pred.reshape(-1, 1)), axis=1),
                                  columns = raw_data_test.columns.append(pd.Index(['PredVolume'])))
    test_data_pred.to_csv('test_data_pred.csv', index = False)
_____no_output_____
BSD-3-Clause
hourly_model/XGB.ipynb
NREL/TrafficVolumeEstimation
Plot Estimations vs. Observations
# Prepare data to plot estimated and observed values
if validation_only:
    if trained:
        plot_df = pd.read_csv("train_data_pred.csv")
    else:
        plot_df = train_data_pred
else:
    if trained:
        plot_df = pd.read_csv("test_data_pred.csv")
    else:
        plot_df = test_data_pred

plot_df = plot_df.sort_values(by=['StationId', 'Date', 'Dir', 'Hour'])
plot_df = plot_df.set_index(pd.Index(range(plot_df.shape[0])))

# Define a function to plot estimated and observed values for a day
def plot_daily_estimate(frc):
    indices = plot_df.index[(plot_df.FRC == frc) & (plot_df.Hour == 0)].tolist()
    from_index = np.random.choice(indices, 1)[0]
    to_index = from_index + 23
    plot_df_sub = plot_df.loc[from_index:to_index, :]
    time = pd.date_range(plot_df_sub.Date.iloc[0] + ' 00:00:00', periods=24, freq='H')
    plt.figure(figsize=(20,10))
    plt.plot(time, plot_df_sub.PredVolume, 'b-', label='XGBoost', lw=2)
    plt.plot(time, plot_df_sub.Volume, 'r--', label='Observed', lw=3)
    plt.tick_params(axis='both', which='major', labelsize=24)
    plt.ylabel('Volume (vehs/hr)', fontsize=24)
    plt.xlabel("Time", fontsize=24)
    plt.legend(loc='upper left', shadow=True, fontsize=24)
    plt.title('Station ID: {0}, MAE={1}, FRC = {2}'.format(
        plot_df_sub.StationId.iloc[0],
        round(np.abs(plot_df_sub.PredVolume-plot_df_sub.Volume).mean()),
        plot_df_sub.FRC.iloc[0]), fontsize=40)
    plt.savefig('frc_{0}.png'.format(frc), dpi = 150)
    return(plot_df_sub)

# Define a function to plot estimated and observed values for a week
def plot_weekly_estimate(frc):
    indices = plot_df.index[(plot_df.FRC == frc) & (plot_df.Hour == 0) & (plot_df.DayOfWeek == 'Monday')].tolist()
    from_index = np.random.choice(indices, 1)[0]
    to_index = from_index + 24*7-1
    plot_df_sub = plot_df.loc[from_index:to_index, :]
    time = pd.date_range(plot_df_sub.Date.iloc[0] + ' 00:00:00', periods=24*7, freq='H')
    plt.figure(figsize=(20,10))
    plt.plot(time, plot_df_sub.PredVolume, 'b-', label='XGBoost', lw=2)
    plt.plot(time, plot_df_sub.Volume, 'r--', label='Observed', lw=3)
    plt.tick_params(axis='both', which='major', labelsize=24)
    plt.ylabel('Volume (vehs/hr)', fontsize=24)
    plt.xlabel("Time", fontsize=24)
    plt.legend(loc='upper left', shadow=True, fontsize=24)
    plt.title('Station ID: {0}, MAE={1}, FRC = {2}'.format(
        plot_df_sub.StationId.iloc[0],
        round(np.abs(plot_df_sub.PredVolume-plot_df_sub.Volume).mean()),
        plot_df_sub.FRC.iloc[0]), fontsize=40)
    plt.savefig('frc_{0}.png'.format(frc), dpi = 150)
    return(plot_df_sub)

# Plot estimated and observed values for a day
frc2_daily_plot = plot_daily_estimate(2)
save(frc2_daily_plot, 'frc2_daily_plot.dat')

# Plot estimated and observed values for a week
frc3_weekly_plot = plot_weekly_estimate(3)
save(frc3_weekly_plot, 'frc3_weekly_plot.dat')
_____no_output_____
BSD-3-Clause
hourly_model/XGB.ipynb
NREL/TrafficVolumeEstimation
First import the "datavis" module
import sys
sys.path.append('..')

import numpy as np
import datavis
import vectorized_datavis

def test_se_to_sd():
    """ Test that the value returned is a float value """
    sdev = datavis.se_to_sd(0.5, 1000)
    assert isinstance(sdev, float),\
        "Returned data type is not a float number"

test_se_to_sd()

def test_ci_to_sd():
    """ Test that the value returned is a float value """
    sdev = datavis.ci_to_sd(0.2, 0.4)
    assert isinstance(sdev, float),\
        "Returned data type is not a float number"

test_ci_to_sd()

def test_datagen():
    """ Test that the data returned is a numpy.ndarray """
    randdata = datavis.datagen(25, 0.2, 0.4)
    assert isinstance(randdata, np.ndarray),\
        "Returned data type is not a numpy.ndarray"

test_datagen()

def test_correctdatatype():
    """ Test that the statistical parameters returned are float numbers """
    fmean, fsdev, fserror, fuci, flci = datavis.correctdatatype(0.2, 0.4)
    assert isinstance(fmean, float),\
        "Returned data type is not a float number"

test_correctdatatype()

def test_compounddata():
    """ Test that the data returned are numpy.ndarrays """
    datagenerated1, datagenerated2, datagenerated3 = \
        datavis.compounddata(mean1=24.12, sdev1=3.87, mean2=24.43, sdev2=3.94,
                             mean3=24.82, sdev3=3.95, size=1000)
    assert isinstance(datagenerated1, np.ndarray),\
        "Returned data are not numpy.ndarrays"

test_compounddata()

def test_databinning():
    """ Test that the data returned are numpy.ndarrays """
    datagenerated1, datagenerated2, datagenerated3 = \
        datavis.compounddata(mean1=24.12, sdev1=3.87, mean2=24.43, sdev2=3.94,
                             mean3=24.82, sdev3=3.95, size=1000)
    bins = np.linspace(10, 40, num=30)
    yhist1, yhist2, yhist3 = \
        datavis.databinning(datagenerated1, datagenerated2, datagenerated3, bins_list=bins)
    assert isinstance(yhist1, np.ndarray),\
        "Returned data are not numpy.ndarrays"

test_databinning()

def test_pdfgen():
    """ Test that the data returned are numpy.ndarrays """
    bins = np.linspace(10, 40, num=30)
    mean1 = 24.12
    sdev1 = 3.87
    mean2 = 24.43
    sdev2 = 3.94
    mean3 = 24.82
    sdev3 = 3.95
    pdf1, pdf2, pdf3 = datavis.pdfgen(mean1, sdev1, mean2, sdev2, mean3, sdev3, bins_list=bins)
    assert isinstance(pdf1, np.ndarray),\
        "Returned data are not numpy.ndarrays"

test_pdfgen()

def test_percent_overlap():
    """ Test that the data returned is a tuple """
    mean1 = 24.12
    sdev1 = 3.87
    mean2 = 24.43
    sdev2 = 3.94
    mean3 = 24.82
    sdev3 = 3.95
    overlap_11_perc, overlap_12_perc, overlap_13_perc = \
        datavis.percent_overlap(mean1, sdev1, mean2, sdev2, mean3, sdev3)
    assert isinstance(
        datavis.percent_overlap(mean1, sdev1, mean2, sdev2, mean3, sdev3), tuple),\
        "Returned data is not a tuple"

test_percent_overlap()

mean = np.array([24.12, 24.43, 24.82])
sdev = np.array([3.87, 3.94, 3.95])
vectorized_datavis.compounddata(mean, sdev)

def test_compounddata():
    """ Test that the data returned are numpy.ndarrays """
    mean = np.array([24.12, 24.43, 24.82])
    sdev = np.array([3.87, 3.94, 3.95])
    datagenerated = vectorized_datavis.compounddata(mean, sdev)
    assert isinstance(datagenerated, np.ndarray),\
        "Returned data are not numpy.ndarrays"

test_compounddata()

def test_databinning():
    """ Test that the data returned are numpy.ndarrays """
    mean = np.array([24.12, 24.43, 24.82])
    sdev = np.array([3.87, 3.94, 3.95])
    datagenerated = vectorized_datavis.compounddata(mean, sdev)
    bins = np.linspace(10, 40, num=30)
    yhist = vectorized_datavis.databinning(datagenerated, bins_list=bins)
    assert isinstance(yhist, np.ndarray),\
        "Returned data are not numpy.ndarrays"

test_databinning()

def test_pdfgen():
    """ Test that the data returned are numpy.ndarrays """
    bins = np.linspace(10, 40, num=30)
    mean = np.array([24.12, 24.43, 24.82])
    sdev = np.array([3.87, 3.94, 3.95])
    pdf = vectorized_datavis.pdfgen(mean, sdev, bins_list=bins)
    assert isinstance(pdf, np.ndarray),\
        "Returned data are not numpy.ndarrays"

test_pdfgen()

def test_percent_overlap():
    """ Test that the data returned is a numpy.ndarray """
    mean = np.array([24.12, 24.43, 24.82])
    sdev = np.array([3.87, 3.94, 3.95])
    overlap_perc_1w = vectorized_datavis.percent_overlap(mean, sdev)
    assert isinstance(overlap_perc_1w, np.ndarray),\
        "Returned data is not a numpy.ndarray"

test_percent_overlap()
_____no_output_____
MIT
examples/Unit Tests.ipynb
Genes-N-Risks/genocode
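The `datavis` implementations themselves are not shown in this notebook. The conventional relationship that a call like `se_to_sd(0.5, 1000)` appears to exercise is sd = se x sqrt(n); whether `datavis.se_to_sd` uses exactly this formula is an assumption on my part:

```python
import math

# Hedged sketch of the conventional standard-error -> standard-deviation conversion.
# datavis.se_to_sd's actual source is not shown here; this mirrors the usual formula.
def se_to_sd(se, n):
    # the standard error of the mean is sd / sqrt(n), so invert it
    return se * math.sqrt(n)

print(se_to_sd(0.5, 1000))  # ~15.81
```

Under this formula a standard error of 0.5 over 1000 samples corresponds to a standard deviation of about 15.81.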
IDS Instruction: Regression (Lisa Mannel) Simple linear regression First we import the packages necessary for this instruction:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, mean_absolute_error
_____no_output_____
Apache-2.0
IDS_2020/RegressionSVM/Instruction4-RegressionSVM-without solutions.ipynb
pmnatos/DataScience
Consider the data set "df1" with feature variables "x" and "y" given below.
df1 = pd.DataFrame({'x': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
                    'y': [1, 3, 2, 5, 7, 8, 8, 9, 10, 12]})
print(df1)
   x   y
0  0   1
1  1   3
2  2   2
3  3   5
4  4   7
5  5   8
6  6   8
7  7   9
8  8  10
9  9  12
Apache-2.0
IDS_2020/RegressionSVM/Instruction4-RegressionSVM-without solutions.ipynb
pmnatos/DataScience
To get a first impression of the given data, let's have a look at its scatter plot:
plt.scatter(df1.x, df1.y, color = "y", marker = "o", s = 40)
plt.xlabel('x')
plt.ylabel('y')
plt.title('first overview of the data')
plt.show()
_____no_output_____
Apache-2.0
IDS_2020/RegressionSVM/Instruction4-RegressionSVM-without solutions.ipynb
pmnatos/DataScience
We can already see a linear correlation between x and y. Assume the feature x to be descriptive, while y is our target feature. We want a linear function, y=ax+b, that predicts y as accurately as possible based on x. To achieve this goal we use linear regression from the sklearn package.
#define the set of descriptive features (in this case only 'x' is in that set) and the target feature (in this case 'y')
descriptiveFeatures1=df1[['x']]
targetFeature1=df1['y']

#define the classifier
classifier = LinearRegression()

#train the classifier
model1 = classifier.fit(descriptiveFeatures1, targetFeature1)
_____no_output_____
Apache-2.0
IDS_2020/RegressionSVM/Instruction4-RegressionSVM-without solutions.ipynb
pmnatos/DataScience
Now we can use the classifier to predict y. We print the predictions as well as the coefficient and bias (*intercept*) of the linear function.
#use the classifier to make prediction
targetFeature1_predict = classifier.predict(descriptiveFeatures1)
print(targetFeature1_predict)

#print coefficient and intercept
print('Coefficients: \n', classifier.coef_)
print('Intercept: \n', classifier.intercept_)
[ 1.23636364  2.40606061  3.57575758  4.74545455  5.91515152  7.08484848
  8.25454545  9.42424242 10.59393939 11.76363636]
Coefficients: 
 [1.16969697]
Intercept: 
 1.2363636363636399
Apache-2.0
IDS_2020/RegressionSVM/Instruction4-RegressionSVM-without solutions.ipynb
pmnatos/DataScience
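The fitted coefficient and intercept can be checked against the closed-form least-squares solution, a = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)² and b = ȳ − a·x̄, computed directly on the same data:

```python
import numpy as np

# The same data set as df1 above.
x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=float)
y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12], dtype=float)

# Closed-form simple linear regression: slope from centered covariance over variance,
# intercept from the means.
a = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b = y.mean() - a * x.mean()
print(a, b)  # ~1.1697 and ~1.2364, matching classifier.coef_ and classifier.intercept_
```

This is exactly what `LinearRegression` computes for a single descriptive feature, which is why the values agree with the printed `coef_` and `intercept_`.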