Modeling the imputed data
```python
# Model every dataset and collect the MSE for each
X = [X_full, X_missing_mean, X_missing_0, X_missing_reg]
mse = []
for x in X:
    estimator = RandomForestRegressor(random_state=0, n_estimators=100)
    scores = cross_val_score(estimator, x, y_full,
                             scoring='neg_mean_squared_error', cv=5).mean()
    mse.append(scores * -1)

# Label order matches the order of X above
x_labels = ['Full data', 'Mean Imputation', 'Zero Imputation',
            'Regressor Imputation']
colors = ['r', 'g', 'b', 'orange']

plt.figure(figsize=(12, 6))
ax = plt.subplot(111)
for i in np.arange(len(mse)):
    ax.barh(i, mse[i], color=colors[i], alpha=0.6, align='center')
ax.set_title('Imputation Techniques with Boston Data')
ax.set_xlim(left=np.min(mse) * 0.9, right=np.max(mse) * 1.1)
ax.set_yticks(np.arange(len(mse)))
ax.set_xlabel('MSE')
ax.set_yticklabels(x_labels)
plt.show()
```
MIT
code/randomForest/bostonRegression.ipynb
Knowledge-Precipitation-Tribe/Machine-Learning
Gradient-boosting decision tree (GBDT)

In this notebook, we will present the gradient-boosting decision tree algorithm and contrast it with AdaBoost. Gradient boosting differs from AdaBoost for the following reason: instead of assigning weights to specific samples, GBDT fits each decision tree on the residual errors (hence the name "gradient") of the previous tree. Therefore, each new tree in the ensemble predicts the error made by the previous learner instead of predicting the target directly.

In this section, we will provide some intuition about the way learners are combined to give the final prediction. In this regard, let's go back to our regression problem, which is more intuitive for demonstrating the underlying machinery.
```python
import pandas as pd
import numpy as np

# Create a random number generator that will be used to set the randomness
rng = np.random.RandomState(0)


def generate_data(n_samples=50):
    """Generate synthetic dataset. Returns `data_train`, `data_test`, `target_train`."""
    x_max, x_min = 1.4, -1.4
    len_x = x_max - x_min
    x = rng.rand(n_samples) * len_x - len_x / 2
    noise = rng.randn(n_samples) * 0.3
    y = x ** 3 - 0.5 * x ** 2 + noise

    data_train = pd.DataFrame(x, columns=["Feature"])
    data_test = pd.DataFrame(np.linspace(x_max, x_min, num=300),
                             columns=["Feature"])
    target_train = pd.Series(y, name="Target")
    return data_train, data_test, target_train


data_train, data_test, target_train = generate_data()

import matplotlib.pyplot as plt
import seaborn as sns

sns.scatterplot(x=data_train["Feature"], y=target_train, color="black", alpha=0.5)
_ = plt.title("Synthetic regression dataset")
```
MIT
sklearn/notes/ensemble_gradient_boosting.ipynb
shamik-biswas-rft/CodeSnippets
As we previously discussed, boosting will be based on assembling a sequence of learners. We will start by creating a decision tree regressor. We will set the depth of the tree so that the resulting learner will underfit the data.
```python
from sklearn.tree import DecisionTreeRegressor

tree = DecisionTreeRegressor(max_depth=3, random_state=0)
tree.fit(data_train, target_train)

target_train_predicted = tree.predict(data_train)
target_test_predicted = tree.predict(data_test)
```
MIT
sklearn/notes/ensemble_gradient_boosting.ipynb
shamik-biswas-rft/CodeSnippets
Using the term "test" here refers to data that was not used for training. It should not be confused with data coming from a train-test split, as it was generated in equally-spaced intervals for the visual evaluation of the predictions.
```python
# plot the data
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black", alpha=0.5)
# plot the predictions
line_predictions = plt.plot(data_test["Feature"], target_test_predicted, "--")

# plot the residuals
for value, true, predicted in zip(data_train["Feature"], target_train,
                                  target_train_predicted):
    lines_residuals = plt.plot([value, value], [true, predicted], color="red")

plt.legend([line_predictions[0], lines_residuals[0]],
           ["Fitted tree", "Residuals"])
_ = plt.title("Prediction function together \nwith errors on the training set")
```
MIT
sklearn/notes/ensemble_gradient_boosting.ipynb
shamik-biswas-rft/CodeSnippets
Tip: in the cell above, we manually edited the legend to get only a single label for all the residual lines.

Since the tree underfits the data, its accuracy is far from perfect on the training data. We can observe this in the figure by looking at the difference between the predictions and the ground-truth data. We represent these errors, called "residuals", by unbroken red lines.

Indeed, our initial tree was not expressive enough to handle the complexity of the data, as shown by the residuals. In a gradient-boosting algorithm, the idea is to create a second tree which, given the same data `data`, will try to predict the residuals instead of the vector `target`. We would therefore have a tree that is able to predict the errors made by the initial tree. Let's train such a tree.
```python
residuals = target_train - target_train_predicted

tree_residuals = DecisionTreeRegressor(max_depth=5, random_state=0)
tree_residuals.fit(data_train, residuals)

target_train_predicted_residuals = tree_residuals.predict(data_train)
target_test_predicted_residuals = tree_residuals.predict(data_test)

sns.scatterplot(x=data_train["Feature"], y=residuals, color="black", alpha=0.5)
line_predictions = plt.plot(
    data_test["Feature"], target_test_predicted_residuals, "--")

# plot the residuals of the predicted residuals
for value, true, predicted in zip(data_train["Feature"], residuals,
                                  target_train_predicted_residuals):
    lines_residuals = plt.plot([value, value], [true, predicted], color="red")

plt.legend([line_predictions[0], lines_residuals[0]],
           ["Fitted tree", "Residuals"],
           bbox_to_anchor=(1.05, 0.8), loc="upper left")
_ = plt.title("Prediction of the previous residuals")
```
MIT
sklearn/notes/ensemble_gradient_boosting.ipynb
shamik-biswas-rft/CodeSnippets
We see that this new tree only manages to fit some of the residuals. We will focus on a specific sample from the training set (i.e. we know that the sample will be well predicted using two successive trees). We will use this sample to explain how the predictions of both trees are combined. Let's first select this sample in `data_train`.
```python
sample = data_train.iloc[[-2]]
x_sample = sample['Feature'].iloc[0]
target_true = target_train.iloc[-2]
target_true_residual = residuals.iloc[-2]
```
MIT
sklearn/notes/ensemble_gradient_boosting.ipynb
shamik-biswas-rft/CodeSnippets
Let's plot the previous information and highlight our sample of interest. Let's start by plotting the original data and the prediction of the first decision tree.
```python
# Plot the previous information:
#   * the dataset
#   * the predictions
#   * the residuals
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black", alpha=0.5)
plt.plot(data_test["Feature"], target_test_predicted, "--")
for value, true, predicted in zip(data_train["Feature"], target_train,
                                  target_train_predicted):
    lines_residuals = plt.plot([value, value], [true, predicted], color="red")

# Highlight the sample of interest
plt.scatter(sample, target_true, label="Sample of interest",
            color="tab:orange", s=200)
plt.xlim([-1, 0])
plt.legend(bbox_to_anchor=(1.05, 0.8), loc="upper left")
_ = plt.title("Tree predictions")
```
MIT
sklearn/notes/ensemble_gradient_boosting.ipynb
shamik-biswas-rft/CodeSnippets
Now, let's plot the residuals information. We will plot the residuals computed from the first decision tree and show the residual predictions.
```python
# Plot the previous information:
#   * the residuals committed by the first tree
#   * the residual predictions
#   * the residuals of the residual predictions
sns.scatterplot(x=data_train["Feature"], y=residuals, color="black", alpha=0.5)
plt.plot(data_test["Feature"], target_test_predicted_residuals, "--")
for value, true, predicted in zip(data_train["Feature"], residuals,
                                  target_train_predicted_residuals):
    lines_residuals = plt.plot([value, value], [true, predicted], color="red")

# Highlight the sample of interest
plt.scatter(sample, target_true_residual, label="Sample of interest",
            color="tab:orange", s=200)
plt.xlim([-1, 0])
plt.legend()
_ = plt.title("Prediction of the residuals")
```
MIT
sklearn/notes/ensemble_gradient_boosting.ipynb
shamik-biswas-rft/CodeSnippets
For our sample of interest, our initial tree is making an error (small residual). When fitting the second tree, the residual in this case is perfectly fitted and predicted. We will quantitatively check this prediction using the fitted tree. First, let's check the prediction of the initial tree and compare it with the true value.
```python
print(f"True value to predict for "
      f"f(x={x_sample:.3f}) = {target_true:.3f}")

y_pred_first_tree = tree.predict(sample)[0]
print(f"Prediction of the first decision tree for x={x_sample:.3f}: "
      f"y={y_pred_first_tree:.3f}")
print(f"Error of the tree: {target_true - y_pred_first_tree:.3f}")
```
True value to predict for f(x=-0.517) = -0.393
Prediction of the first decision tree for x=-0.517: y=-0.145
Error of the tree: -0.248
MIT
sklearn/notes/ensemble_gradient_boosting.ipynb
shamik-biswas-rft/CodeSnippets
As we visually observed, we have a small error. Now, we can use the second tree to try to predict this residual.
```python
print(f"Prediction of the residual for x={x_sample:.3f}: "
      f"{tree_residuals.predict(sample)[0]:.3f}")
```
Prediction of the residual for x=-0.517: -0.248
MIT
sklearn/notes/ensemble_gradient_boosting.ipynb
shamik-biswas-rft/CodeSnippets
We see that our second tree is capable of predicting the exact residual (error) of our first tree. Therefore, we can predict the target value for `x` by summing the predictions of all the trees in the ensemble.
```python
y_pred_first_and_second_tree = (
    y_pred_first_tree + tree_residuals.predict(sample)[0]
)
print(f"Prediction of the first and second decision trees combined for "
      f"x={x_sample:.3f}: y={y_pred_first_and_second_tree:.3f}")
print(f"Error of the tree: {target_true - y_pred_first_and_second_tree:.3f}")
```
Prediction of the first and second decision trees combined for x=-0.517: y=-0.393
Error of the tree: 0.000
MIT
sklearn/notes/ensemble_gradient_boosting.ipynb
shamik-biswas-rft/CodeSnippets
We chose a sample for which only two trees were enough to make the perfect prediction. However, we saw in the previous plot that two trees were not enough to correct the residuals of all samples. Therefore, one needs to add several trees to the ensemble to successfully correct the error (i.e. the second tree corrects the first tree's error, while the third tree corrects the second tree's error and so on).

We will compare the generalization performance of random forest and gradient boosting on the California housing dataset.
```python
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import cross_validate

data, target = fetch_california_housing(return_X_y=True, as_frame=True)
target *= 100  # rescale the target in k$

from sklearn.ensemble import GradientBoostingRegressor

gradient_boosting = GradientBoostingRegressor(n_estimators=200)
cv_results_gbdt = cross_validate(
    gradient_boosting, data, target, scoring="neg_mean_absolute_error",
    n_jobs=2,
)

print("Gradient Boosting Decision Tree")
print(f"Mean absolute error via cross-validation: "
      f"{-cv_results_gbdt['test_score'].mean():.3f} +/- "
      f"{cv_results_gbdt['test_score'].std():.3f} k$")
print(f"Average fit time: "
      f"{cv_results_gbdt['fit_time'].mean():.3f} seconds")
print(f"Average score time: "
      f"{cv_results_gbdt['score_time'].mean():.3f} seconds")

from sklearn.ensemble import RandomForestRegressor

random_forest = RandomForestRegressor(n_estimators=200, n_jobs=2)
cv_results_rf = cross_validate(
    random_forest, data, target, scoring="neg_mean_absolute_error",
    n_jobs=2,
)

print("Random Forest")
print(f"Mean absolute error via cross-validation: "
      f"{-cv_results_rf['test_score'].mean():.3f} +/- "
      f"{cv_results_rf['test_score'].std():.3f} k$")
print(f"Average fit time: "
      f"{cv_results_rf['fit_time'].mean():.3f} seconds")
print(f"Average score time: "
      f"{cv_results_rf['score_time'].mean():.3f} seconds")
```
MIT
sklearn/notes/ensemble_gradient_boosting.ipynb
shamik-biswas-rft/CodeSnippets
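The sequential correction described in the text can be sketched as an explicit loop. The snippet below is a simplified illustration (squared-error loss, unit learning rate, synthetic data similar to the one above), not scikit-learn's actual implementation, which additionally shrinks each tree's contribution by a learning rate:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.uniform(-1.4, 1.4, size=(50, 1))
y = X.ravel() ** 3 - 0.5 * X.ravel() ** 2 + rng.normal(scale=0.3, size=50)

trees = []
ensemble_prediction = np.zeros_like(y)
for _ in range(10):
    residuals = y - ensemble_prediction      # errors of the ensemble so far
    tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, residuals)
    trees.append(tree)
    ensemble_prediction += tree.predict(X)   # each new tree corrects the previous ones

print("Training MSE:", np.mean((y - ensemble_prediction) ** 2))
```

Because each tree is fitted to the current residuals, the training error can only decrease (or stay constant) as trees are added, which is exactly the "third tree corrects the second" behavior described above.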
Until now, we have reviewed the functionality that the Jupyter notebook provides. How does the length of employee titles correlate to salary?
```python
position_title = white_house["Position Title"]
title_length = position_title.apply(len)
salary = white_house["Salary"]

from scipy.stats import pearsonr  # scipy.stats.stats is a deprecated import path

pearsonr(title_length, salary)

plt.scatter(title_length, salary)
plt.xlabel("title length")
plt.ylabel("salary")
plt.title("Title length - Salary Scatter Plot")
plt.show()
```
MIT
White House/White House.ipynb
frankbearzou/Data-analysis
How much does the White House pay in total salary?
```python
white_house["Salary"].sum()
```
MIT
White House/White House.ipynb
frankbearzou/Data-analysis
Who are the highest and lowest paid staffers?
```python
max_salary = white_house["Salary"].max()
max_salary_column = white_house["Salary"] == max_salary
white_house.loc[max_salary_column].reset_index(drop=True)

min_salary = white_house["Salary"].min()
min_salary_column = white_house["Salary"] == min_salary
white_house.loc[min_salary_column].reset_index(drop=True)
```
MIT
White House/White House.ipynb
frankbearzou/Data-analysis
What words are the most common in titles?
```python
words = {}
for title in position_title:
    title_list = title.split()
    for word in title_list:
        if word not in words:
            words[word] = 1
        else:
            words[word] += 1

import operator
sorted_words = sorted(words.items(), key=operator.itemgetter(1), reverse=True)
sorted_words
```
MIT
White House/White House.ipynb
frankbearzou/Data-analysis
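A more idiomatic way to build the same word counts is `collections.Counter`, which also sorts by count via `most_common()`. This sketch uses a small hypothetical list of titles in place of the `position_title` column:

```python
from collections import Counter

# Hypothetical titles standing in for the `position_title` column
titles = ["DEPUTY ASSISTANT TO THE PRESIDENT",
          "ASSISTANT TO THE PRESIDENT"]

word_counts = Counter(word for title in titles for word in title.split())
sorted_words = word_counts.most_common()  # (word, count) pairs, count descending
print(sorted_words)
```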
Saving frames in parallel
```python
import xarray as xr
from xmovie import Movie

ds = xr.tutorial.open_dataset('air_temperature').isel(time=slice(0, 200))

# Creating the movie object
mov = Movie(ds.air, vmin=230, vmax=310)
```
MIT
docs/examples/parallel.ipynb
zmoon/xmovie
The creation of a movie can take quite a long time for datasets with many timesteps, since many frames are created in a loop.
```python
%%time
mov.save('movie.mov', overwrite_existing=True)
```
Movie created at movie.mov
CPU times: user 34.7 s, sys: 1.02 s, total: 35.7 s
Wall time: 59.4 s
MIT
docs/examples/parallel.ipynb
zmoon/xmovie
You can speed up the frame creation by activating the `parallel` option. This will save the frames using dask. For this to work, you need to chunk the input dataarray with a single step along the dimension that represents your frames (`framedim`).
```python
mov_parallel = Movie(ds.air.chunk({'time': 1}), vmin=230, vmax=310)
```

```python
%%time
mov_parallel.save(
    'movie_parallel.mov',
    parallel=True,
    overwrite_existing=True,
)
```
Movie created at movie_parallel.mov
CPU times: user 38.8 s, sys: 1.46 s, total: 40.3 s
Wall time: 48.3 s
MIT
docs/examples/parallel.ipynb
zmoon/xmovie
You can pass arguments to the dask `.compute()` call with `parallel_compute_kwargs` to tune for your particular setup.
```python
%%time
mov_parallel.save(
    'movie_parallel_modified.mov',
    parallel=True,
    overwrite_existing=True,
    parallel_compute_kwargs=dict(scheduler="processes", num_workers=8)
)
```
Movie created at movie_parallel.mov
CPU times: user 4.84 s, sys: 249 ms, total: 5.09 s
Wall time: 33.6 s
MIT
docs/examples/parallel.ipynb
zmoon/xmovie
2A.i - Relational model, analysis of incidents in air transport - correction

Data manipulation with dataframes, joins. Correction unfinished...
```python
from jyquickhelper import add_notebook_menu
add_notebook_menu()
```
MIT
_doc/notebooks/td2a/td2a_correction_session_5.ipynb
mohamedelkansouli/Ensae_py
Data

The following code downloads the required data: [tp_2a_5_compagnies.zip](http://www.xavierdupre.fr/enseignement/complements/tp_2a_5_compagnies.zip).
```python
import pyensae
pyensae.download_data("tp_2a_5_compagnies.zip")
```
MIT
_doc/notebooks/td2a/td2a_correction_session_5.ipynb
mohamedelkansouli/Ensae_py
Introduction

1.1 Some Apparently Simple Questions
1.2 An Alternative Analytic Framework

Solved to a high degree of accuracy using numerical methods.
```python
from __future__ import division

!pip install --user quantecon

import numpy as np
import numpy.linalg as la
from numba import *
# from quantecon.quad import qnwnorm
```
MIT
Chapter01.ipynb
lnsongxf/Applied_Computational_Economics_and_Finance
Suppose now that the economist is presented with a demand function

$$q = 0.5 p^{-0.2} + 0.5 p^{-0.5}$$

one that is the sum of a domestic demand term and an export demand term. Suppose that the economist is asked to find the price that clears the market of, say, a quantity of 2 units.
```python
# %pylab inline
%pylab notebook
# pylab populates the interactive namespace from numpy and matplotlib:
#   numpy for numerical computation
#   matplotlib for plotting
# http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot

p = np.linspace(0.01, 0.5, 100)
q = .5 * p ** -.2 + .5 * p ** -.5 - 2
plot(q, p)

x1, x2, y1, y2 = 2, 2, 0, 0.5
plot((x1, x2), (y1, y2), 'k-')

# example 1.2: Newton's iteration for the market clearing price
p = 0.25
for i in range(100):
    deltap = (.5 * p ** -.2 + .5 * p ** -.5 - 2) / (.1 * p ** -1.2 + .25 * p ** -1.5)
    p = p + deltap
    if abs(deltap) < 1.e-8:  # accuracy
        break

# https://stackoverflow.com/questions/20457038/python-how-to-round-down-to-2-decimals
print('The market clearing price is {:0.2f}'.format(p))
```
The market clearing price is 0.15
MIT
Chapter01.ipynb
lnsongxf/Applied_Computational_Economics_and_Finance
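As a cross-check of the Newton iteration above, the same market clearing price can be found with a generic bracketing root finder. This is an illustrative sketch assuming SciPy is available (SciPy is not used in the original notebook):

```python
from scipy.optimize import brentq


def excess_demand(p):
    # q(p) - 2: demand minus the quantity to be cleared
    return 0.5 * p ** -0.2 + 0.5 * p ** -0.5 - 2


# excess_demand changes sign on [0.01, 0.5], so a root is bracketed
p_star = brentq(excess_demand, 0.01, 0.5)
print('Market clearing price: {:0.2f}'.format(p_star))
```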
Consider now the rational expectations commodity market model with government intervention. The source of difficulty in solving this problem is the need to evaluate the truncated expectation of a continuous distribution. The economist would replace the original normal yield distribution with a discrete distribution that has identical lower moments, say one that assumes values $y_1, y_2, \ldots, y_n$ with probabilities $w_1, w_2, \ldots, w_n$.
```python
# https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/quad.py
import math


def qnwnorm(n, mu=None, sig2=None, usesqrtm=False):
    """
    Computes nodes and weights for multivariate normal distribution

    Parameters
    ----------
    n : int or array_like(float)
        A length-d iterable of the number of nodes in each dimension
    mu : scalar or array_like(float), optional(default=zeros(d))
        The means of each dimension of the random variable. If a scalar
        is given, that constant is repeated d times, where d is the
        number of dimensions
    sig2 : array_like(float), optional(default=eye(d))
        A d x d array representing the variance-covariance matrix of the
        multivariate normal distribution.

    Returns
    -------
    nodes : np.ndarray(dtype=float)
        Quadrature nodes
    weights : np.ndarray(dtype=float)
        Weights for quadrature nodes

    Notes
    -----
    Based on original function ``qnwnorm`` in CompEcon toolbox by
    Miranda and Fackler

    References
    ----------
    Miranda, Mario J, and Paul L Fackler. Applied Computational
    Economics and Finance, MIT Press, 2002.
    """
    n = np.asarray(n)
    d = n.size

    if mu is None:
        mu = np.zeros((d, 1))
    else:
        mu = np.asarray(mu).reshape(-1, 1)

    if sig2 is None:
        sig2 = np.eye(d)
    else:
        sig2 = np.asarray(sig2).reshape(d, d)

    if all([x.size == 1 for x in [n, mu, sig2]]):
        nodes, weights = _qnwnorm1(n)
    else:
        nodes = []
        weights = []
        for i in range(d):
            _1d = _qnwnorm1(n[i])
            nodes.append(_1d[0])
            weights.append(_1d[1])
        nodes = gridmake(*nodes)
        weights = ckron(*weights[::-1])

    if usesqrtm:
        new_sig2 = la.sqrtm(sig2)
    else:  # cholesky
        new_sig2 = la.cholesky(sig2)

    if d > 1:
        nodes = new_sig2.dot(nodes) + mu  # Broadcast ok
    else:  # nodes.dot(sig) will not be aligned in scalar case.
        nodes = nodes * new_sig2 + mu

    return nodes.squeeze(), weights


def _qnwnorm1(n):
    """
    Compute nodes and weights for quadrature of univariate standard
    normal distribution

    Parameters
    ----------
    n : int
        The number of nodes

    Returns
    -------
    nodes : np.ndarray(dtype=float)
        An n element array of nodes
    weights : np.ndarray(dtype=float)
        An n element array of weights

    Notes
    -----
    Based on original function ``qnwnorm1`` in CompEcon toolbox by
    Miranda and Fackler

    References
    ----------
    Miranda, Mario J, and Paul L Fackler. Applied Computational
    Economics and Finance, MIT Press, 2002.
    """
    maxit = 100
    pim4 = 1 / np.pi**(0.25)
    m = np.fix((n + 1) / 2).astype(int)
    nodes = np.zeros(n)
    weights = np.zeros(n)

    for i in range(m):
        if i == 0:
            z = np.sqrt(2*n+1) - 1.85575 * ((2 * n + 1)**(-1 / 6.1))
        elif i == 1:
            z = z - 1.14 * (n ** 0.426) / z
        elif i == 2:
            z = 1.86 * z + 0.86 * nodes[0]
        elif i == 3:
            z = 1.91 * z + 0.91 * nodes[1]
        else:
            z = 2 * z + nodes[i-2]

        its = 0
        while its < maxit:
            its += 1
            p1 = pim4
            p2 = 0
            for j in range(1, n+1):
                p3 = p2
                p2 = p1
                p1 = z * math.sqrt(2.0/j) * p2 - math.sqrt((j - 1.0) / j) * p3
            pp = math.sqrt(2 * n) * p2
            z1 = z
            z = z1 - p1/pp
            if abs(z - z1) < 1e-14:
                break

        if its == maxit:
            raise ValueError("Failed to converge in _qnwnorm1")

        nodes[n - 1 - i] = z
        nodes[i] = -z
        weights[i] = 2 / (pp*pp)
        weights[n - 1 - i] = weights[i]

    weights /= math.sqrt(math.pi)
    nodes = nodes * math.sqrt(2.0)

    return nodes, weights


# example 1.2
y, w = qnwnorm(10, 1, 0.1)
a = 1
for it in range(100):
    aold = a
    p = 3 - 2 * a * y
    f = w.dot(np.maximum(p, 1))
    a = 0.5 + 0.5 * f
    if abs(a - aold) < 1.e-8:
        break

print('The rational expectations equilibrium acreage is {:0.2f}'.format(a))
print('The expected market price is {:0.2f}'.format(np.dot(w, p)))
print('The expected effective producer price is {:0.2f}'.format(f))
```
The rational expectations equilibrium acreage is 1.10
The expected market price is 0.81
The expected effective producer price is 1.19
MIT
Chapter01.ipynb
lnsongxf/Applied_Computational_Economics_and_Finance
Linear Algebra

These exercises involve vector and matrix math using the NumPy Python package. The exercise is divided into two parts:

1. Math checkup, where you will do some of the math by hand.
2. NumPy and Spark linear algebra, where you will do some exercises using the NumPy package.

In the following exercises you will need to replace the code parts in the cells that start with the comment "Replace the `<INSERT>`". To go through the notebook, fill in the `<INSERT>`s with appropriate code in the cells. To run a cell, press Shift-Enter to run it and advance to the following cell, or Ctrl-Enter to only run the code in the cell. You should do the exercises from the top to the bottom of this notebook, because later cells may depend on code in previous cells.

If you want to execute these lines in a python script, you will first need to create a spark context:
```python
# from pyspark import SparkContext, StorageLevel
# from pyspark.sql import SQLContext
# sc = SparkContext(master="local[*]")
# sqlContext = SQLContext(sc)
```
Apache-2.0
lab_exercises/solutions/lab3_1_NumpyAlgebra_Solution.ipynb
EPCCed/prace-spark-for-data-scientists
But since we are using the notebooks, those lines are not needed here.

1. Math checkup

1.1 Euclidean norm

$$\mathbf{v} = \begin{bmatrix} 666 \\ 1337 \\ 1789 \\ 1066 \\ 1945 \\ \end{bmatrix} \qquad \|\mathbf{v}\| = ?$$

Calculate the Euclidean norm of $\mathbf{v}$ using the following definition:

$$\|\mathbf{v}\|_2 = \sqrt{\sum\limits_{i=1}^n {x_i}^2} = \sqrt{{x_1}^2+\cdots+{x_n}^2}$$
```python
# Replace the <INSERT>
import math
import numpy as np

v = [666, 1337, 1789, 1066, 1945]
rdd = sc.parallelize(v)

# sumOfSquares = rdd.map(<INSERT>).reduce(<INSERT>)
sumOfSquares = rdd.map(lambda x: x * x).reduce(lambda x, y: x + y)
norm = math.sqrt(sumOfSquares)
# <INSERT round to 8 decimals>
norm = format(norm, '.8f')
norm_numpy = np.linalg.norm(v)

print("norm: " + str(norm) + " norm_numpy: " + str(norm_numpy))


# Helper functions to check results
import hashlib

def hashCheck(x, hashCompare):
    hash = hashlib.md5(str(x).encode('utf-8')).hexdigest()
    print(hash)
    if hash == hashCompare:
        print('Yay, you succeeded!')
    else:
        print('Try again!')

def check(x, y, label):
    if x == y:
        print("Yay, " + label + " is correct!")
    else:
        print("Nay, " + label + " is incorrect, please try again!")

def checkArray(x, y, label):
    if np.allclose(x, y):
        print("Yay, " + label + " is correct!")
    else:
        print("Nay, " + label + " is incorrect, please try again!")

# Check if the norm is correct
hashCheck(norm_numpy, '6de149ccbc081f9da04a0bbd8fe05d8c')
```
b20f04e15c77f3ba100346dc128daa01
Try again!
Apache-2.0
lab_exercises/solutions/lab3_1_NumpyAlgebra_Solution.ipynb
EPCCed/prace-spark-for-data-scientists
1.2 Transpose

$$\mathbf{A} = \begin{bmatrix} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9\\ \end{bmatrix} \qquad \mathbf{A}^T = ?$$

Transpose is an operation on matrices that swaps the rows for the columns.

$$\begin{bmatrix} 2 & 7 \\ 3 & 11\\ 5 & 13\\ \end{bmatrix}^T \Rightarrow \begin{bmatrix} 2 & 3 & 5 \\ 7 & 11 & 13\\ \end{bmatrix}$$

Do the transpose of A by hand and write it in:
```python
# Replace the <INSERT>
# Input At like this: At = [[1, 2, 3],[4, 5, 6],[7, 8, 9]]
# At = <INSERT>
A = np.matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(A)
print("\n")

At = np.matrix.transpose(A)
print(At)

At = [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
print("\n")
print(At)

# Check if the transpose is correct
hashCheck(At, '1c8dc4c2349277cbe5b7c7118989d8a5')
```
1c8dc4c2349277cbe5b7c7118989d8a5
Yay, you succeeded!
Apache-2.0
lab_exercises/solutions/lab3_1_NumpyAlgebra_Solution.ipynb
EPCCed/prace-spark-for-data-scientists
1.3 Scalar matrix multiplication

$$\mathbf{A} = 3\times\begin{bmatrix} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9\\ \end{bmatrix}=?\qquad\mathbf{B} = 5\times\begin{bmatrix} 1\\ -4\\ 7\\ \end{bmatrix}=?$$

The operation is done element-wise, e.g. if $k\times\mathbf{A}=\mathbf{C}$ then $k\times a_{i,j}=c_{i,j}$.

$$2 \times \begin{bmatrix} 1 & 6 \\ 4 & 8 \\ \end{bmatrix} = \begin{bmatrix} 2\times1& 2\times6 \\ 2\times4 & 2\times8\\ \end{bmatrix} = \begin{bmatrix} 2& 12 \\ 8 & 16\\ \end{bmatrix}$$

$$11 \times \begin{bmatrix} 2 \\ 3 \\ 5 \\ \end{bmatrix} = \begin{bmatrix} 11\times2 \\ 11\times3 \\ 11\times5 \\ \end{bmatrix} = \begin{bmatrix} 22\\ 33\\ 55\\ \end{bmatrix}$$

Do the scalar multiplications of $\mathbf{A}$ and $\mathbf{B}$ by hand and write them in:
```python
# Replace the <INSERT>
# Input A like this: A = [[1, 2, 3],[4, 5, 6],[7, 8, 9]]
# And B like this:   B = [1, -4, 7]
# A = <INSERT>
# B = <INSERT>
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(3 * A)
print("\n")

B = np.array([1, -4, 7])
print(5 * B)
print("\n")

A = [[3, 6, 9], [12, 15, 18], [21, 24, 27]]
B = [5, -20, 35]

# Check if the scalar matrix multiplication is correct
hashCheck(A, '91b9508ec9099ee4d2c0a6309b0d69de')
hashCheck(B, '88bddc0ee0eab409cee011770363d007')
```
91b9508ec9099ee4d2c0a6309b0d69de
Yay, you succeeded!
88bddc0ee0eab409cee011770363d007
Yay, you succeeded!
Apache-2.0
lab_exercises/solutions/lab3_1_NumpyAlgebra_Solution.ipynb
EPCCed/prace-spark-for-data-scientists
1.4 Dot product

$$c_1=\begin{bmatrix} 11 \\ 2 \\ \end{bmatrix} \cdot \begin{bmatrix} 3 \\ 5 \\ \end{bmatrix} =?\qquad c_2=\begin{bmatrix} 1 \\ 2 \\ 3 \\ \end{bmatrix} \cdot \begin{bmatrix} 4 \\ 5 \\ 6 \\ \end{bmatrix} =?$$

The operations are done element-wise, e.g. if $\mathbf{v}\cdot\mathbf{w}=k$ then $\sum v_i \times w_i = k$.

$$\begin{bmatrix} 2 \\ 3 \\ 5 \\ \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 4 \\ 6 \\ \end{bmatrix} = 2\times1+3\times4+5\times6=44$$

Calculate the values of $c_1$ and $c_2$ by hand and write them in:
```python
# Replace the <INSERT>
# Input c1 and c2 like this: c = 1337
# c1 = <INSERT>
# c2 = <INSERT>
c1_1 = np.array([11, 2])
c1_2 = np.array([3, 5])
c1 = c1_1.dot(c1_2)
print(c1)
c1 = 43

c2_1 = np.array([1, 2, 3])
c2_2 = np.array([4, 5, 6])
c2 = c2_1.dot(c2_2)
print(c2)
c2 = 32

# Check if the dot product is correct
hashCheck(c1, '17e62166fc8586dfa4d1bc0e1742c08b')
hashCheck(c2, '6364d3f0f495b6ab9dcf8d3b5c6e0b01')
```
17e62166fc8586dfa4d1bc0e1742c08b
Yay, you succeeded!
6364d3f0f495b6ab9dcf8d3b5c6e0b01
Yay, you succeeded!
Apache-2.0
lab_exercises/solutions/lab3_1_NumpyAlgebra_Solution.ipynb
EPCCed/prace-spark-for-data-scientists
1.5 Matrix multiplication

$$\mathbf{A}= \begin{bmatrix} 682 & 848 & 794 & 954 \\ 700 & 1223 & 1185 & 816 \\ 942 & 428 & 324 & 526 \\ 321 & 543 & 532 & 614 \\ \end{bmatrix} \qquad \mathbf{B}= \begin{bmatrix} 869 & 1269 & 1306 & 358 \\ 1008 & 836 & 690 & 366 \\ 973 & 619 & 407 & 1149 \\ 323 & 42 & 405 & 117 \\ \end{bmatrix} \qquad \mathbf{A}\times\mathbf{B}=\mathbf{C}=?$$

The $c_{i,j}$ entry is the dot product of the i-th row in $\mathbf{A}$ and the j-th column in $\mathbf{B}$.

Calculate $\mathbf{C}$ by implementing the naive matrix multiplication algorithm with $\mathcal{O}(n^3)$ run time, using the three nested for-loops below:
```python
# The convention is to import NumPy as the alias np
import numpy as np

A = [[ 682,  848,  794,  954],
     [ 700, 1223, 1185,  816],
     [ 942,  428,  324,  526],
     [ 321,  543,  532,  614]]

B = [[ 869, 1269, 1306,  358],
     [1008,  836,  690,  366],
     [ 973,  619,  407, 1149],
     [ 323,   42,  405,  117]]

C = [[0] * 4 for i in range(4)]

# Iterate through rows of A
for i in range(len(A)):
    # Iterate through columns of B
    for j in range(len(B[0])):
        # Iterate through rows of B
        for k in range(len(B)):
            C[i][j] += A[i][k] * B[k][j]

print(np.matrix(C))
print(np.matrix(A) * np.matrix(B))

# Check if the matrix multiplication is correct
hashCheck(C, 'f6b7b0500a6355e8e283f732ec28fa76')
```
f6b7b0500a6355e8e283f732ec28fa76
Yay, you succeeded!
Apache-2.0
lab_exercises/solutions/lab3_1_NumpyAlgebra_Solution.ipynb
EPCCed/prace-spark-for-data-scientists
2. NumPy and Spark linear algebra

A python library to utilize arrays is NumPy. The library is optimized to be fast and memory efficient, and provides abstractions corresponding to vectors, matrices and the operations done on these objects. NumPy's array class is called ndarray, also known by its alias array. This is a multidimensional array of fixed size that contains numerical elements of one type, e.g. floats or integers.

2.1 Scalar matrix multiplication using NumPy

$$\mathbf{A} = \begin{bmatrix} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9\\ \end{bmatrix}\quad5\times\mathbf{A}=\mathbf{C}=?\qquad\mathbf{B} = \begin{bmatrix} 1&-4& 7\\ \end{bmatrix} \quad3\times\mathbf{B}=\mathbf{D}=?$$

Utilizing the np.array() function, create the above matrix $\mathbf{A}$ and vector $\mathbf{B}$ and multiply them by 5 and 3 correspondingly. Note that if you use a Python list of integers to create an array you will get a one-dimensional array, which is, for our purposes, equivalent to a vector.

Calculate C and D by inputting the following statements:
```python
# Replace the <INSERT>. You will use np.array()
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
B = np.array([1, -4, 7])
C = A * 5
D = 3 * B

print(A)
print(B)
print(C)
print(D)

# Check if the scalar matrix multiplication is correct
checkArray(C, [[5, 10, 15], [20, 25, 30], [35, 40, 45]], "the scalar multiplication")
checkArray(D, [3, -12, 21], "the scalar multiplication")
```
Yay, the scalar multiplication is correct!
Yay, the scalar multiplication is correct!
Apache-2.0
lab_exercises/solutions/lab3_1_NumpyAlgebra_Solution.ipynb
EPCCed/prace-spark-for-data-scientists
2.2 Dot product and element-wise multiplication

Both the dot product and element-wise multiplication are supported by ndarrays. Element-wise multiplication is the standard between two arrays of the same dimension, using the operator *. For the dot product you can use either np.dot() or np.array.dot(). The dot product is a commutative operation, i.e. the order of the arrays does not matter: if you have the ndarrays x and y, you can write the dot product in any of the following four ways: np.dot(x, y), np.dot(y, x), x.dot(y), or y.dot(x).

Calculate the element-wise product and the dot product by filling in the following statements:
```python
# Replace the <INSERT>
u = np.arange(0, 5)
v = np.arange(5, 10)

elementWise = np.multiply(u, v)
dotProduct = np.dot(u, v)

print(elementWise)
print(dotProduct)

# Check if the dot product and element-wise product are correct
checkArray(elementWise, [0, 6, 14, 24, 36], "the element wise multiplication")
check(dotProduct, 80, "the dot product")
```
Yay, the element wise multiplication is correct!
Yay, the dot product is correct!
Apache-2.0
lab_exercises/solutions/lab3_1_NumpyAlgebra_Solution.ipynb
EPCCed/prace-spark-for-data-scientists
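The four equivalent call forms mentioned above can be verified directly with a small self-contained check:

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5, 6])

# The dot product is commutative, so all four call forms give the same scalar
results = {np.dot(x, y), np.dot(y, x), x.dot(y), y.dot(x)}
print(results)  # a single value: {32}
```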
2.3 Cosine similarity

The cosine similarity between two vectors is defined by the following equation:

$$cosine\_similarity(u,v)=\cos\theta=\frac{\mathbf{u}\cdot\mathbf{v}}{\|u\|\|v\|}$$

The norm of a vector $\|v\|$ can be calculated with np.linalg.norm().

Implement the following function that calculates the cosine similarity:
def cosine_similarity(u, v):
    dotProduct = np.dot(u, v)
    normProduct = np.linalg.norm(u) * np.linalg.norm(v)
    return dotProduct / normProduct

u = np.array([2503, 2992, 1042])
v = np.array([2217, 2761, 990])
w = np.array([0, 1, 1])
x = np.array([1, 0, 1])
uv = cosine_similarity(u, v)
wx = cosine_similarity(w, x)
print(uv)
print(wx)

#Check if the cosine similarity is correct
check(round(uv, 5), 0.99974, "cosine similarity between u and v")
check(round(wx, 5), 0.5, "cosine similarity between w and x")
Yay, cosine similarity between u and v is correct! Yay, cosine similarity between w and x is correct!
Apache-2.0
lab_exercises/solutions/lab3_1_NumpyAlgebra_Solution.ipynb
EPCCed/prace-spark-for-data-scientists
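As a sanity check on the formula above: parallel vectors should give a similarity of (approximately) 1, and orthogonal vectors exactly 0. A minimal sketch:

```python
import numpy as np

def cosine_similarity(u, v):
    # cos(theta) = (u . v) / (||u|| * ||v||)
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

u = np.array([1.0, 2.0, 3.0])

print(cosine_similarity(u, 2 * u))          # parallel vectors: ~1.0
print(cosine_similarity([1, 0], [0, 1]))    # orthogonal vectors: 0.0
```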
2.4 Matrix math
To represent matrices, you can use the class np.matrix(). To create a matrix object, pass the function either a two-dimensional ndarray, a list of lists, or a string, e.g. '1 2; 3 4'. For matrix objects, the operator * performs matrix multiplication instead of element-wise multiplication.
To transpose a matrix, you can use either np.matrix.transpose() or .T on the matrix object.
To calculate the inverse of a matrix, you can use np.linalg.inv() or .I on the matrix object. Remember that the inverse of a matrix is only defined for square matrices, and it does not always exist (for sufficient requirements of invertibility, look up the invertible matrix theorem); np.linalg.inv() will raise a LinAlgError when it does not. If you multiply the original matrix with its inverse, you get the identity matrix, which is a square matrix with ones on the main diagonal and zeros elsewhere, e.g. $\mathbf{A} \mathbf{A}^{-1} = \mathbf{I_n}$
In the following exercise, you should calculate $\mathbf{A}^T$, multiply it by $\mathbf{A}$, then invert the product $\mathbf{AA}^T$ and finally multiply $\mathbf{AA}^T[\mathbf{AA}^T]^{-1}=\mathbf{I}_n$ to get the identity matrix:
#Replace the <INSERT>
#We generate a Vandermonde matrix
A = np.mat(np.vander([2, 3], 5))
print(A)

#Calculate the transpose of A
At = np.transpose(A)
print(At)

#Calculate the multiplication of A and A^T
AAt = np.dot(A, At)
print(AAt)

#Calculate the inverse of AA^T
AAtInv = np.linalg.inv(AAt)
print(AAtInv)

#Calculate the multiplication of AA^T and (AA^T)^-1
I = np.dot(AAt, AAtInv)
print(I)

#To get the identity matrix we round it because of numerical precision
I = I.round(13)

#Check if the matrix math is correct
checkArray(I, [[1., 0.], [0., 1.]], "the matrix math")
Yay, the matrix math is correct!
Apache-2.0
lab_exercises/solutions/lab3_1_NumpyAlgebra_Solution.ipynb
EPCCed/prace-spark-for-data-scientists
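As noted above, the inverse only exists for non-singular square matrices. A small sketch showing np.linalg.inv() raising LinAlgError on a singular matrix (rows are linearly dependent, so the determinant is zero):

```python
import numpy as np

singular = np.array([[1.0, 2.0],
                     [2.0, 4.0]])   # second row = 2 * first row -> det == 0

print(np.linalg.det(singular))      # ~0.0

try:
    np.linalg.inv(singular)
except np.linalg.LinAlgError as err:
    print("Not invertible:", err)
```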
2.5 Slices
It is possible to select subsets of one-dimensional arrays using slices. The basic syntax for slices is $\mathbf{v}$[i:j:k] where i is the starting index, j is the stopping index, and k is the step ($k\neq0$); the default value for k, if it is not specified, is 1. If no i is specified, the default value is 0, and if no j is specified, the default value is the end of the array.
For example [0,1,2,3,4][:3] = [0,1,2], i.e. the three first elements of the array. You can use negative indices as well, for example [0,1,2,3,4][-3:] = [2,3,4], i.e. the three last elements.
The following function can be used to concatenate 2 or more arrays: np.concatenate; the syntax is np.concatenate((a1, a2, ...)).
Slice the following array into 3 pieces and concatenate them together to form the original array:
#Replace the <INSERT>
v = np.arange(1, 9)
print(v)

#The first two elements of v
v1 = v[:2]
#The middle four elements of v
v2 = v[2:6]
#The last two elements of v
v3 = v[-2:]
print(v1)
print(v2)
print(v3)

#Concatenating the three vectors to get the original array
u = np.concatenate((v1, v2, v3))
[1 2 3 4 5 6 7 8] [1 2] [3 4 5 6] [7 8]
Apache-2.0
lab_exercises/solutions/lab3_1_NumpyAlgebra_Solution.ipynb
EPCCed/prace-spark-for-data-scientists
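Slices also accept a negative step k, which walks the array backwards; combined with negative indices this gives a few handy idioms. A small sketch:

```python
import numpy as np

v = np.arange(1, 9)          # [1 2 3 4 5 6 7 8]

print(v[::2])                # every second element: [1 3 5 7]
print(v[::-1])               # reversed copy: [8 7 6 5 4 3 2 1]
print(v[-3:])                # last three elements: [6 7 8]
```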
2.6 Stacking
There exist many functions provided by the NumPy library to manipulate existing arrays. We will try out two of these methods: np.hstack(), which takes two or more arrays and stacks them horizontally to make a single array (column wise, equivalent to np.concatenate for 1-D arrays), and np.vstack(), which takes two or more arrays and stacks them vertically (row wise). The syntax is the following: np.vstack((a1, a2, ...)).
Stack the two following arrays $\mathbf{u}$ and $\mathbf{v}$ to create a 1x20 and a 2x10 array:
#Replace the <INSERT>
u = np.arange(1, 11)
v = np.arange(11, 21)

#A 1x20 array
oneRow = np.hstack((u, v))
print(oneRow)

#A 2x10 array
twoRows = np.vstack((u, v))
print(twoRows)

#Check if the stacks are correct
checkArray(oneRow, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], "the hstack")
checkArray(twoRows, [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [11, 12, 13, 14, 15, 16, 17, 18, 19, 20]], "the vstack")
Yay, the hstack is correct! Yay, the vstack is correct!
Apache-2.0
lab_exercises/solutions/lab3_1_NumpyAlgebra_Solution.ipynb
EPCCed/prace-spark-for-data-scientists
2.7 PySpark's DenseVector
In PySpark there exists a DenseVector class within the module pyspark.mllib.linalg. The DenseVector stores the values as a NumPy array and delegates the calculations to this object. You can create a new DenseVector by using DenseVector() and passing it a NumPy array or a Python list.
The DenseVector class implements several functions; an important one is the dot product, DenseVector.dot(), which operates just like np.ndarray.dot().
The DenseVector saves all values as np.float64, so even if you pass it an integer vector, the resulting vector will contain floats. Using the DenseVector in a distributed setting can be done by either passing functions that contain them to resilient distributed dataset (RDD) transformations or by distributing them directly as RDDs.
Create the DenseVector $\mathbf{u}$ containing the 10 elements [0.1,0.2,...,1.0] and the DenseVector $\mathbf{v}$ containing the 10 elements [1.0,2.0,...,10.0] and calculate the dot product of $\mathbf{u}$ and $\mathbf{v}$:
#To use the DenseVector first import it
from pyspark.mllib.linalg import DenseVector

#Replace the <INSERT>
#[0.1,0.2,...,1.0]
u = DenseVector((0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0))
print(u)

#[1.0,2.0,...,10.0]
v = DenseVector((1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0))
print(v)

#The dot product between u and v
dotProduct = u.dot(v)

#Check if the dense vectors are correct
check(dotProduct, 38.5, "the dense vectors")
Yay, the dense vectors is correct!
Apache-2.0
lab_exercises/solutions/lab3_1_NumpyAlgebra_Solution.ipynb
EPCCed/prace-spark-for-data-scientists
Metadata Organization

Imports
import pandas as pd
import numpy as np
import os.path
import glob
import pathlib
import functools
import time
import re
import gc
from nilearn.input_data import NiftiMasker
import nibabel as nib
from nilearn import image
from joblib import Parallel, delayed
_____no_output_____
Apache-2.0
06-feature_extraction/Raw_Voxels_Extraction_with_Confounds.ipynb
DataKnightsAI/Sleep-Deprivation-Classification-using-MRI-Scans
Load configs (all patterns/files/folderpaths)
import configurations

configs = configurations.Config('sub-xxx-resamp-intersected')
_____no_output_____
Apache-2.0
06-feature_extraction/Raw_Voxels_Extraction_with_Confounds.ipynb
DataKnightsAI/Sleep-Deprivation-Classification-using-MRI-Scans
Timer decorator to print the runtime of decorated functions
def timer(func):
    """Print the runtime of the decorated function"""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f'Calling {func.__name__!r}')
        startTime = time.perf_counter()
        value = func(*args, **kwargs)
        endTime = time.perf_counter()
        runTime = endTime - startTime
        print(f'Finished {func.__name__!r} in {runTime:.4f} secs')
        return value
    return wrapper
_____no_output_____
Apache-2.0
06-feature_extraction/Raw_Voxels_Extraction_with_Confounds.ipynb
DataKnightsAI/Sleep-Deprivation-Classification-using-MRI-Scans
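A minimal sketch of applying a decorator like the one above (condensed here to keep it self-contained). functools.wraps keeps the wrapped function's name intact, which matters for the f-strings in the log lines:

```python
import functools
import time

def timer(func):
    """Print the runtime of the decorated function (condensed variant)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        value = func(*args, **kwargs)
        print(f'Finished {func.__name__!r} in {time.perf_counter() - start:.4f} secs')
        return value
    return wrapper

@timer
def slow_square(x):
    time.sleep(0.01)
    return x * x

result = slow_square(4)        # prints a "Finished 'slow_square' in ..." line
print(result)                  # 16
print(slow_square.__name__)    # 'slow_square' (thanks to functools.wraps)
```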
Function to find all the BOLD NII file paths
@timer
def find_paths(relDataFolder, subj, sess, func, patt):
    paths = list(pathlib.Path(relDataFolder).glob(
        os.path.join(subj, sess, func, patt)
    ))
    return paths
_____no_output_____
Apache-2.0
06-feature_extraction/Raw_Voxels_Extraction_with_Confounds.ipynb
DataKnightsAI/Sleep-Deprivation-Classification-using-MRI-Scans
Find all the regressor file paths
regressor_paths = find_paths(relDataFolder=configs.dataDir,
                             subj='sub-*',
                             sess='ses-*',
                             func='func',
                             patt=configs.confoundsFilePattern)
regressor_paths
Calling 'find_paths' Finished 'find_paths' in 0.0247 secs
Apache-2.0
06-feature_extraction/Raw_Voxels_Extraction_with_Confounds.ipynb
DataKnightsAI/Sleep-Deprivation-Classification-using-MRI-Scans
Find all the BOLD NII file paths
nii_paths = find_paths(relDataFolder=configs.dataDir,
                       subj='sub-*',
                       sess='ses-*',
                       func='func',
                       patt=configs.maskedImagePattern)
nii_paths
Calling 'find_paths' Finished 'find_paths' in 0.0224 secs
Apache-2.0
06-feature_extraction/Raw_Voxels_Extraction_with_Confounds.ipynb
DataKnightsAI/Sleep-Deprivation-Classification-using-MRI-Scans
Read the participants.tsv file to find summaries of the subjects
participant_info_df = pd.read_csv(
    configs.participantsSummaryFile,
    sep='\t'
)
participant_info_df
_____no_output_____
Apache-2.0
06-feature_extraction/Raw_Voxels_Extraction_with_Confounds.ipynb
DataKnightsAI/Sleep-Deprivation-Classification-using-MRI-Scans
Get a mapping Dataframe of subject and which session is the sleep deprived one
@timer
def map_sleepdep(participant_info):
    df = pd.DataFrame(participant_info.loc[:, ['participant_id', 'Sl_cond']])
    df.replace('sub-', '', inplace=True, regex=True)
    return df.rename(columns={'participant_id': 'subject', 'Sl_cond': 'sleepdep_session'})

sleepdep_map = map_sleepdep(participant_info_df)
sleepdep_map
Calling 'map_sleepdep' Finished 'map_sleepdep' in 0.0026 secs
Apache-2.0
06-feature_extraction/Raw_Voxels_Extraction_with_Confounds.ipynb
DataKnightsAI/Sleep-Deprivation-Classification-using-MRI-Scans
Get Dataframe of subject, session, task, path
@timer
def get_bids_components(paths):
    components_list = []
    for i, path in enumerate(paths):
        filename = path.stem
        dirpath = path.parents[0]
        # Raw string avoids invalid escape-sequence warnings in the regex
        matches = re.search(
            r'[a-z0-9]+\-([a-z0-9]+)_[a-z0-9]+\-([a-z0-9]+)_[a-z0-9]+\-([a-z0-9]+)',
            filename
        )
        subject = matches.group(1)
        session = matches.group(2)
        task = matches.group(3)
        confound_file = path.with_name(
            'sub-' + subject + '_ses-' + session + '_task-' + task +
            '_desc-confounds_regressors.tsv'
        )
        components_list.append([subject, session, task,
                                path.__str__(), confound_file.__str__(), 0])
    df = pd.DataFrame(components_list,
                      columns=['subject', 'session', 'task',
                               'path', 'confound_path', 'sleepdep'])
    return df

bids_comp_df = get_bids_components(nii_paths)
bids_comp_df
Calling 'get_bids_components' Finished 'get_bids_components' in 0.0019 secs
Apache-2.0
06-feature_extraction/Raw_Voxels_Extraction_with_Confounds.ipynb
DataKnightsAI/Sleep-Deprivation-Classification-using-MRI-Scans
Combine logically sleepdep_map and components_df into 1 dataframe
sleep_bids_comb_df = bids_comp_df.merge(sleepdep_map, how='left')
_____no_output_____
Apache-2.0
06-feature_extraction/Raw_Voxels_Extraction_with_Confounds.ipynb
DataKnightsAI/Sleep-Deprivation-Classification-using-MRI-Scans
Impute the response column 'sleepdep' from 'session' and 'sleepdep_session'
for i in range(len(sleep_bids_comb_df)):
    if (int(sleep_bids_comb_df['session'].iloc[i]) ==
            int(sleep_bids_comb_df['sleepdep_session'].iloc[i])):
        # Use .loc to avoid pandas' chained-assignment warning
        sleep_bids_comb_df.loc[i, 'sleepdep'] = 1
sleep_bids_comb_df
_____no_output_____
Apache-2.0
06-feature_extraction/Raw_Voxels_Extraction_with_Confounds.ipynb
DataKnightsAI/Sleep-Deprivation-Classification-using-MRI-Scans
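The row-by-row loop above can also be written as a single vectorized comparison. A sketch on a toy frame (column names follow the notebook; the data itself is made up):

```python
import pandas as pd

df = pd.DataFrame({
    'subject': ['9001', '9001', '9002', '9002'],
    'session': ['1', '2', '1', '2'],
    'sleepdep_session': [2, 2, 1, 1],
})

# sleepdep is 1 exactly when the scan's session is the sleep-deprived one
df['sleepdep'] = (df['session'].astype(int) ==
                  df['sleepdep_session'].astype(int)).astype(int)

print(df['sleepdep'].tolist())   # [0, 1, 1, 0]
```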
Get confounds that can be used to further clean up the signal or for prediction
def get_important_confounds(regressor_paths, important_reg_list, start, end):
    regressors_df_list = []
    for paths in regressor_paths:
        regressors_all = pd.DataFrame(pd.read_csv(paths, sep="\t"))
        regressors_selected = pd.DataFrame(regressors_all[important_reg_list].loc[start:end-1])
        regressors_df_list.append(pd.DataFrame(regressors_selected.stack(0)).transpose())
    concatenated_df = pd.concat(regressors_df_list, ignore_index=True)
    concatenated_df.columns = [col[1] + '-' + str(col[0])
                               for col in concatenated_df.columns.values]
    return concatenated_df

important_reg_list = [
    'csf', 'white_matter', 'global_signal',
    'trans_x', 'trans_y', 'trans_z', 'rot_x', 'rot_y', 'rot_z',
    'csf_derivative1', 'white_matter_derivative1', 'global_signal_derivative1',
    'trans_x_derivative1', 'trans_y_derivative1', 'trans_z_derivative1',
    'rot_x_derivative1', 'rot_y_derivative1', 'rot_z_derivative1',
    'csf_power2', 'white_matter_power2', 'global_signal_power2',
    'trans_x_power2', 'trans_y_power2', 'trans_z_power2',
    'rot_x_power2', 'rot_y_power2', 'rot_z_power2',
    'csf_derivative1_power2', 'white_matter_derivative1_power2',
    'global_signal_derivative1_power2',
    'trans_x_derivative1_power2', 'trans_y_derivative1_power2',
    'trans_z_derivative1_power2',
    'rot_x_derivative1_power2', 'rot_y_derivative1_power2',
    'rot_z_derivative1_power2',
]

important_confounds_df = get_important_confounds(
    sleep_bids_comb_df['confound_path'], important_reg_list,
    configs.startSlice, configs.endSlice
)
_____no_output_____
Apache-2.0
06-feature_extraction/Raw_Voxels_Extraction_with_Confounds.ipynb
DataKnightsAI/Sleep-Deprivation-Classification-using-MRI-Scans
Load the masker data file to prepare to apply to images
masker = NiftiMasker(mask_img=configs.maskDataFile, standardize=False)
_____no_output_____
Apache-2.0
06-feature_extraction/Raw_Voxels_Extraction_with_Confounds.ipynb
DataKnightsAI/Sleep-Deprivation-Classification-using-MRI-Scans
Helper to generate raw voxel df from a given path + masker and print shape for sanity
@timer
def gen_one_voxel_df(filepath, masker, start, end):
    masked_array = masker.fit_transform(image.index_img(filepath, slice(start, end)))
    reshaped_array = pd.DataFrame(np.reshape(masked_array.ravel(), newshape=[1, -1]),
                                  dtype='float32')
    print('> Shape of raw voxels for file ' + '\"' + pathlib.Path(filepath).stem + '\" ' +
          'is: \n' +
          '\t 1-D (UnMasked+Sliced): ' + str(reshaped_array.shape) + '\n' +
          '\t 2-D (UnMasked+Sliced): ' + str(masked_array.shape) + '\n' +
          '\t 4-D (Raw header)     : ' + str(nib.load(filepath).header.get_data_shape())
    )
    return reshaped_array
_____no_output_____
Apache-2.0
06-feature_extraction/Raw_Voxels_Extraction_with_Confounds.ipynb
DataKnightsAI/Sleep-Deprivation-Classification-using-MRI-Scans
Function to generate from masked image the raw voxel df from all images in folder
@timer
def get_voxels_df(metadata_df, masker, start, end):
    rawvoxels_list = []
    print()  # Print to add a spacer for aesthetics
    # Below has been parallelized
    for i in range(len(metadata_df)):
        rawvoxels_list.append(gen_one_voxel_df(metadata_df['path'].iloc[i], masker, start, end))
        print()  # Print to add a spacer for aesthetics
    # rawvoxels_list.append(Parallel(n_jobs=-1, verbose=100)(
    #     delayed(gen_one_voxel_df)(metadata_df['path'].iloc[i], masker, start, end)
    #     for i in range(len(metadata_df))))
    print()  # Print to add a spacer for aesthetics
    tmp_df = pd.concat(rawvoxels_list, ignore_index=True)
    tmp_df['sleepdep'] = metadata_df['sleepdep']
    temp_dict = dict((val, str(val)) for val in list(range(len(tmp_df.columns) - 1)))
    return tmp_df.rename(columns=temp_dict, errors='raise')
_____no_output_____
Apache-2.0
06-feature_extraction/Raw_Voxels_Extraction_with_Confounds.ipynb
DataKnightsAI/Sleep-Deprivation-Classification-using-MRI-Scans
Garbage collect
gc.collect()
_____no_output_____
Apache-2.0
06-feature_extraction/Raw_Voxels_Extraction_with_Confounds.ipynb
DataKnightsAI/Sleep-Deprivation-Classification-using-MRI-Scans
Get/Generate raw voxels dataframe from all images with Y column label included
voxels_df = get_voxels_df(sleep_bids_comb_df, masker, configs.startSlice, configs.endSlice)
X = pd.concat([voxels_df, important_confounds_df], axis=1)
Calling 'get_voxels_df' Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9001_ses-1_task-arrows_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D (UnMasked+Sliced): (40, 90854) 4-D (Raw header) : (87, 103, 65, 352) Finished 'gen_one_voxel_df' in 6.1580 secs Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9001_ses-1_task-faces_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D (UnMasked+Sliced): (40, 90854) 4-D (Raw header) : (87, 103, 65, 165) Finished 'gen_one_voxel_df' in 3.0791 secs Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9001_ses-1_task-hands_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D (UnMasked+Sliced): (40, 90854) 4-D (Raw header) : (87, 103, 65, 202) Finished 'gen_one_voxel_df' in 3.7256 secs Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9001_ses-1_task-rest_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D (UnMasked+Sliced): (40, 90854) 4-D (Raw header) : (87, 103, 65, 193) Finished 'gen_one_voxel_df' in 3.8081 secs Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9001_ses-1_task-sleepiness_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D (UnMasked+Sliced): (40, 90854) 4-D (Raw header) : (87, 103, 65, 200) Finished 'gen_one_voxel_df' in 3.9162 secs Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9001_ses-2_task-arrows_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D (UnMasked+Sliced): (40, 90854) 4-D (Raw header) : (87, 103, 
65, 360) Finished 'gen_one_voxel_df' in 6.4836 secs Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9001_ses-2_task-faces_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D (UnMasked+Sliced): (40, 90854) 4-D (Raw header) : (87, 103, 65, 187) Finished 'gen_one_voxel_df' in 3.4762 secs Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9001_ses-2_task-hands_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D (UnMasked+Sliced): (40, 90854) 4-D (Raw header) : (87, 103, 65, 209) Finished 'gen_one_voxel_df' in 3.8866 secs Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9001_ses-2_task-rest_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D (UnMasked+Sliced): (40, 90854) 4-D (Raw header) : (87, 103, 65, 196) Finished 'gen_one_voxel_df' in 3.3033 secs Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9001_ses-2_task-sleepiness_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D (UnMasked+Sliced): (40, 90854) 4-D (Raw header) : (87, 103, 65, 200) Finished 'gen_one_voxel_df' in 3.2924 secs Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9002_ses-1_task-arrows_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D (UnMasked+Sliced): (40, 90854) 4-D (Raw header) : (87, 103, 65, 330) Finished 'gen_one_voxel_df' in 5.4255 secs Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9002_ses-1_task-faces_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D (UnMasked+Sliced): (40, 90854) 
4-D (Raw header) : (87, 103, 65, 152) Finished 'gen_one_voxel_df' in 2.6889 secs Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9002_ses-1_task-hands_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D (UnMasked+Sliced): (40, 90854) 4-D (Raw header) : (87, 103, 65, 181) Finished 'gen_one_voxel_df' in 3.0466 secs Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9002_ses-1_task-rest_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D (UnMasked+Sliced): (40, 90854) 4-D (Raw header) : (87, 103, 65, 192) Finished 'gen_one_voxel_df' in 3.2351 secs Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9002_ses-1_task-sleepiness_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D (UnMasked+Sliced): (40, 90854) 4-D (Raw header) : (87, 103, 65, 198) Finished 'gen_one_voxel_df' in 3.3624 secs Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9002_ses-2_task-arrows_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D (UnMasked+Sliced): (40, 90854) 4-D (Raw header) : (87, 103, 65, 336) Finished 'gen_one_voxel_df' in 5.3450 secs Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9002_ses-2_task-faces_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D (UnMasked+Sliced): (40, 90854) 4-D (Raw header) : (87, 103, 65, 152) Finished 'gen_one_voxel_df' in 2.6428 secs Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9002_ses-2_task-hands_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D 
(UnMasked+Sliced): (40, 90854) 4-D (Raw header) : (87, 103, 65, 185) Finished 'gen_one_voxel_df' in 3.1299 secs Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9002_ses-2_task-rest_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D (UnMasked+Sliced): (40, 90854) 4-D (Raw header) : (87, 103, 65, 193) Finished 'gen_one_voxel_df' in 3.2526 secs Calling 'gen_one_voxel_df' > Shape of raw voxels for file "sub-9002_ses-2_task-sleepiness_space-MNI152NLin2009cAsym_desc-preproc_bold_masked_(sub-9001-9072_resamp_intersected)_bold.nii" is: 1-D (UnMasked+Sliced): (1, 3634160) 2-D (UnMasked+Sliced): (40, 90854) 4-D (Raw header) : (87, 103, 65, 200) Finished 'gen_one_voxel_df' in 3.4293 secs Finished 'get_voxels_df' in 81.4176 secs
Apache-2.0
06-feature_extraction/Raw_Voxels_Extraction_with_Confounds.ipynb
DataKnightsAI/Sleep-Deprivation-Classification-using-MRI-Scans
Separately get the Y label
Y = sleep_bids_comb_df['sleepdep']
_____no_output_____
Apache-2.0
06-feature_extraction/Raw_Voxels_Extraction_with_Confounds.ipynb
DataKnightsAI/Sleep-Deprivation-Classification-using-MRI-Scans
Save raw dataframe with Y column included to a file
X.to_pickle(configs.rawVoxelFile)
_____no_output_____
Apache-2.0
06-feature_extraction/Raw_Voxels_Extraction_with_Confounds.ipynb
DataKnightsAI/Sleep-Deprivation-Classification-using-MRI-Scans
Useful modules in standard library
---
**Programming Language**
- Core Feature
  + builtin with language
  + e.g. input(), all(), for, if
- Standard Library
  + comes preinstalled with language installer
  + e.g. datetime, csv, Fraction
- Thirdparty Library
  + created by community to solve specific problem
  + e.g. numpy, pandas, requests

import statement

Absolute import
%ls
import hello
import hello2
%cat hello.py
hello.hello()
%ls hello_package/
%cat hello_package/__init__.py
%cat hello_package/diff.py
import hello_package
hello_package.diff.diff
hello_package.diff.diff()
hello_package.diff
import hello_package.diff
diff.diff()
hello_package.diff.diff()
import hello_package.diff as hello_diff
hello_diff.diff()
from hello_package.diff import diff
diff()
patch()
from hello_package.diff import patch
patch()
Patch
CC-BY-4.0
References/A.Standard Library.ipynb
pywaker/pybasics
Relative import
import sys
sys.path
from .hello import hello
__name__
sys.__name__
_____no_output_____
CC-BY-4.0
References/A.Standard Library.ipynb
pywaker/pybasics
Date and Time
import datetime

datetime
datetime.datetime
datetime.datetime.now()
datetime.datetime.today()
datetime.date.today()
now = datetime.datetime.now()
now
now.year
now.microsecond
now.second
help(now)
yesterday = datetime.datetime(2016, 8, 1, 8, 32, 29)
yesterday
now == yesterday
now > yesterday
now < yesterday
now - yesterday
_____no_output_____
CC-BY-4.0
References/A.Standard Library.ipynb
pywaker/pybasics
*timedelta is the difference between two datetime objects*
delta = datetime.timedelta(days=3)
delta
yesterday + delta
now - delta
yesterday / now
yesterday // now
yesterday % now
yesterday * delta
help(datetime.timedelta)
help(datetime.datetime)
datetime.tzinfo('+530')
datetime.datetime(2016, 10, 20, tzinfo=datetime.tzinfo('+530'))
now.tzinfo
datetime.datetime.now()
datetime.datetime.utcnow()
_____no_output_____
CC-BY-4.0
References/A.Standard Library.ipynb
pywaker/pybasics
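A small sketch of the arithmetic above: timedelta objects support addition and subtraction with datetimes, and expose helpers such as total_seconds():

```python
import datetime

start = datetime.datetime(2016, 8, 1, 8, 0, 0)
delta = datetime.timedelta(days=3, hours=2)

later = start + delta
print(later)                              # 2016-08-04 10:00:00
print((later - start).total_seconds())    # 266400.0  (3*86400 + 2*3600)
print(delta.days, delta.seconds)          # 3 7200  (seconds holds the sub-day part)
```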
Files and Directories
f = open('hello.py')
open('non existing file')
f.read()
f.read()
f.seek(0)
f.read()
f.seek(0)
f.readlines()
f.seek(0)
f.readline()
f.readline()
f.readline()
f.close()

with open('hello.py') as _file:
    for line in _file.readlines():
        print(line)
def hello(): print("hello")
CC-BY-4.0
References/A.Standard Library.ipynb
pywaker/pybasics
**os**
import os

os.path.abspath('hello.py')
os.path.dirname(os.path.abspath('hello.py'))
os.path.join(os.path.dirname(os.path.abspath('hello.py')), 'another.py')

import glob

glob.glob('*.py')
glob.glob('*')
_____no_output_____
CC-BY-4.0
References/A.Standard Library.ipynb
pywaker/pybasics
CSV files
import csv

with open('../../data/countries.csv') as csvfile:
    reader = csv.reader(csvfile)
    for line in reader:
        print(line)

with open('../../data/countries.csv') as csvfile:
    reader = csv.DictReader(csvfile, fieldnames=['name', 'code'])
    for line in reader:
        print(line)

data = [
    {'continent': 'asia', 'name': 'nepal'},
    {'continent': 'asia', 'name': 'india'},
    {'continent': 'asia', 'name': 'japan'},
    {'continent': 'africa', 'name': 'chad'},
    {'continent': 'africa', 'name': 'nigeria'},
    {'continent': 'europe', 'name': 'greece'},
    {'continent': 'europe', 'name': 'norway'},
    {'continent': 'north america', 'name': 'canada'},
    {'continent': 'north america', 'name': 'mexico'},
    {'continent': 'south america', 'name': 'brazil'},
    {'continent': 'south america', 'name': 'chile'}
]

# r == read
# w == write [ erase the file first ]
# a == append
with open('countries.csv', 'w') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=['name', 'continent'])
    writer.writeheader()
    writer.writerows(data)

# r == read
# w == write [ erase the file first ]
# a == append
with open('countries.csv', 'a') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=['name', 'continent'])
    writer.writerow({'name': 'pakistan', 'continent': 'asia'})
_____no_output_____
CC-BY-4.0
References/A.Standard Library.ipynb
pywaker/pybasics
Fractions
import fractions

fractions.Fraction(3, 5)

from fractions import Fraction

Fraction(2, 3)
Fraction(1, 3) + Fraction(1, 3)
(1/3) + (1/3)
10/21
_____no_output_____
CC-BY-4.0
References/A.Standard Library.ipynb
pywaker/pybasics
Named Tuples
from collections import namedtuple

Color = namedtuple('Color', ['red', 'green', 'blue'])
button_color = Color(231, 211, 201)
button_color.red
button_color[0]
'This picture has Red:{0.red} Green:{0.green} and Blue:{0.blue}'.format(button_color)
_____no_output_____
CC-BY-4.0
References/A.Standard Library.ipynb
pywaker/pybasics
Builtin Methods

- all()
- any()
- chr()
- dict()
- dir()
- help()
- id()
- input()
- list()
- len()
- map()
- open()
- print()
- range()
- reversed()
- set()
- sorted()
- tuple()
- zip()
all([1, 0, 4])
all([1, 3, 4])
any([1, 0])
any([0, 0])
chr(64)
chr(121)
ord('6')
ord('*')
dict(name='kathmandu', country='nepal')
dir('')
help(''.title)
id('')
id(1)
input("Enter your number")
list((1, 3, 5))
list('hello')
len('hello')
len([1, 4, 5])
# open()
# see: above
print("test")
range(0, 9)
range(0, 99, 3)
list(range(0, 9))
reversed(list(range(0, 9)))
list(reversed(list(range(0, 9))))
''.join(reversed('hello'))
set([1, 5, 6, 7, 8, 7, 1])
tuple([1, 5, 2, 7, 3, 9])
sorted([1, 5, 2, 7, 3, 9])
sorted([1, 5, 2, 7, 3, 9], reverse=True)

data = [{'continent': 'asia', 'name': 'nepal', 'id': 0},
        {'continent': 'asia', 'name': 'india', 'id': 5},
        {'continent': 'asia', 'name': 'japan', 'id': 8},
        {'continent': 'africa', 'name': 'chad', 'id': 2},
        {'continent': 'africa', 'name': 'nigeria', 'id': 7},
        {'continent': 'europe', 'name': 'greece', 'id': 1},
        {'continent': 'europe', 'name': 'norway', 'id': 6},
        {'continent': 'north america', 'name': 'canada', 'id': 3},
        {'continent': 'north america', 'name': 'mexico', 'id': 5},
        {'continent': 'south america', 'name': 'brazil', 'id': 4},
        {'continent': 'south america', 'name': 'chile', 'id': 7}]

def sort_by_name(first):
    # Return the name itself as the sort key
    # (comparing 'name' < 'continent' here was a bug)
    return first['name']

sorted(data, key=sort_by_name)

list(zip([1, 2, 3], [2, 3, 4]))
_____no_output_____
CC-BY-4.0
References/A.Standard Library.ipynb
pywaker/pybasics
**Lambda operations**
map(lambda x: x * 2, [1, 2, 3, 4])
list(map(lambda x: x * 2, [1, 2, 3, 4]))
lambda x: x + 4

def power2(x):
    return x * 2

list(map(power2, [1, 2, 3, 4]))
_____no_output_____
CC-BY-4.0
References/A.Standard Library.ipynb
pywaker/pybasics
*reduce is available as a builtin in Python 2 only*
reduce(lambda x, y: x + y, [1, 4, 5, 6, 9])
_____no_output_____
CC-BY-4.0
References/A.Standard Library.ipynb
pywaker/pybasics
*for python 3*
from functools import reduce

reduce(lambda x, y: x + y, [1, 4, 5, 7, 8])
_____no_output_____
CC-BY-4.0
References/A.Standard Library.ipynb
pywaker/pybasics
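functools.reduce folds a two-argument function over an iterable from the left. A small sketch computing a sum and a factorial, including the optional initializer argument, which also guards against empty input:

```python
from functools import reduce

print(reduce(lambda x, y: x + y, [1, 4, 5, 7, 8]))   # 25
print(reduce(lambda x, y: x * y, range(1, 6)))       # 120 (5!)
print(reduce(lambda x, y: x + y, [], 0))             # 0 (initializer avoids TypeError on empty input)
```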
*filter*
list(filter(lambda x: x < 3, [1, 3, 5, 2, 8]))
_____no_output_____
CC-BY-4.0
References/A.Standard Library.ipynb
pywaker/pybasics
Preface
Although I am no longer young, tracing back [the history of FP (Functional programming)](https://en.wikipedia.org/wiki/Functional_programming#History), I can only count as young:
> The lambda calculus, developed in the 1930s by Alonzo Church, is a formal system of computation built from function application.

Since the history is long, it cannot be covered in a few words; here we will only scratch the surface, hoping to at least cover the following:
* Basic FP terminology (Function/Side effect/Closure ...)
* Lambda usage & examples
* Other functions often used together with lambda: map/filter/functools.reduce
* Acceptable use cases for lambda
* Usages that will get rejected in code review
* A brief introduction to FPU
#!pip install fpu
from fpu.flist import *
from functools import partial
from typing import Sequence
from collections.abc import Iterable
_____no_output_____
MIT
lambda_function_101/fp_and_lambda_101.ipynb
johnklee/python_101_crash_courses
Basic FP terminology
* FP Terminology - Imperative vs Declarative
* FP Terminology - Closure
* FP Terminology - Currying

Functional programming has a [long history](https://en.wikipedia.org/wiki/Functional_programming#History). In a nutshell, **it's a style of programming where you focus on transforming data through the use of small expressions that ideally don't contain side effects.** In other words, when you call my_fun(1, 2), it will always return the same result. This is achieved by the **immutable data** typical of a functional language.
![function & side effect](images/1.PNG)([image source](https://www.fpcomplete.com/blog/2017/04/pure-functional-programming/))

FP Terminology - Imperative vs Declarative
You can think of `Imperative` vs `Declarative` as one way of categorizing programming languages (like color, size, shape, etc.). Later we will look at how code written in these two camps differs:
![function & side effect](images/2.PNG)

Imperative
Below is the [wiki description of imperative languages](https://en.wikipedia.org/wiki/Imperative_programming):
> **Imperative programming** is like building assembly lines, which take some initial global state as raw material, apply various specific transformations, mutations to it as this material is pushed through the line, and at the end comes off the end product, the final global state, that represents the result of the computation. Each step needs to change, rotate, massage the workpiece precisely one specific way, so that it is prepared for subsequent steps downstream. Every step downstream depend on every previous step, and their order is therefore fixed and rigid. Because of these dependencies, an individual computational step has not much use and meaning in itself, but only in the context of all the others, and to understand it, one must understand how the whole line works.

Most of today's languages belong to the imperative camp. The description is a mouthful, but comparing the coding styles of the two camps makes the difference easy to see. Below is the imperative style:
salaries = [ (True, 9000), # (Is female, salary) (False, 12000), (False, 6000), (True, 14000), ] def imperative_way(salaries): '''Gets the sum of salaries of female and male. Args: salaries: List of salary. Each element is of tuple(is_female: bool, salary: int) Returns: Tuple(Sum of female salaries, Sum of male salaries) ''' female_sum = male_sum = 0 for is_female, salary in salaries: if is_female: female_sum += salary else: male_sum += salary return (female_sum, male_sum) imperative_way(salaries)
Declarative

Here is the [Wikipedia description of declarative programming](https://en.wikipedia.org/wiki/Declarative_programming):

> A style of building the structure and elements of computer programs—that expresses the logic of a computation without describing its control flow.
def add(a, b): return a + b def salary_sum(is_female: bool): '''Return calculator to sum up the salary based on female/male Args: is_female: True to return calculator to sum up salaries of female False to return calculator to sum up salaries of male. Returns: Calculator to sum up salaries. ''' def _salary_sum(salaries): flist = fl(salaries) return flist.filter(lambda t: t[0] == is_female) \ .map(lambda t: t[1]) \ .foldLeft(0, add) return _salary_sum def declarative_way(salaries): return ( salary_sum(is_female=True)(salaries), # Salary sum of female salary_sum(is_female=False)(salaries), # Salary sum of male ) declarative_way(salaries)
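Tying back to the introduction's point about side effects, here is a minimal sketch (the function names are illustrative, not from the text) of the "same input, same output" property that immutable data buys you:

```python
def impure_append(item, bucket=[]):
    # Shared mutable default argument: a hidden side effect.
    bucket.append(item)
    return bucket

def pure_append(item, bucket=()):
    # Returns a new tuple; never mutates its inputs.
    return bucket + (item,)

first = impure_append(1)
second = impure_append(1)   # same call, different observable result: [1, 1]

a = pure_append(1)
b = pure_append(1)          # same call, same result: (1,)
```

The impure version leaks state between calls through the shared default list, while the pure version is referentially transparent.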
FP Terminology - Closure

A [**closure**](https://en.wikipedia.org/wiki/Closure_(computer_programming)) is a function which **creates a scope that allows the function to access and manipulate variables from enclosing scopes**. In Python, you normally follow these steps to create a closure:

* Create a nested function (a function inside another function).
* Have the nested function refer to a variable defined inside the enclosing function.
* Have the enclosing function return the nested function.

In short, a closure is a function object bound to an enclosing namespace ([this article](https://dboyliao.medium.com/%E8%81%8A%E8%81%8A-python-closure-ebd63ff0146f) explains the idea well). Let's look at an example:
def contain_N(n): def _inner(sequence: Sequence): return n in sequence return _inner contain_5 = contain_N(5) contain_10 = contain_N(10) my_datas = [1, 2, 3, 4, 5] print(f'my_data={my_datas} contains 5? {contain_5(my_datas)}') print(f'my_data={my_datas} contains 10? {contain_10(my_datas)}')
my_data=[1, 2, 3, 4, 5] contains 5? True my_data=[1, 2, 3, 4, 5] contains 10? False
The function `contain_N` above returns a closure, and that closure is bound to the variable `n` (`contain_5` binds `n` to 5; `contain_10` binds `n` to 10).

FP Terminology - Currying

Here is the Wikipedia description of [**currying**](https://en.wikipedia.org/wiki/Currying):

> Currying is like a kind of incremental binding of function arguments. It is the technique of breaking down the evaluation of a function that takes multiple arguments into evaluating a sequence of single-argument functions.

Unfortunately, plain Python functions do not support this feature out of the box. Fortunately, the [**functools**](https://docs.python.org/3/library/functools.html) module provides [partial](https://docs.python.org/3/library/functools.html#functools.partial), which lets you simulate the benefits of currying. Let's look at an example:
def sum_salary_by_sex(*args):
    def _sum_salary_by_sex(is_female: bool, salaries: Sequence) -> int:
        return sum(map(lambda t: t[1],
                       filter(lambda t: t[0] == is_female, salaries)))

    if len(args) == 1:
        return partial(_sum_salary_by_sex, args[0])

    return _sum_salary_by_sex(*args)

# Get female salaries
sum_salary_by_sex(True, salaries)

# Get female salaries in currying way
sum_salary_by_sex(True)(salaries)

# Get male salaries
sum_salary_by_sex(False, salaries)

# Get male salaries in currying way
sum_salary_by_sex(False)(salaries)

sum_salary_by_female = sum_salary_by_sex(True)
sum_salary_by_male = sum_salary_by_sex(False)

sum_salary_by_female(salaries)
sum_salary_by_male(salaries)
Thanks to currying, we obtain the new functions sum_salary_by_female and sum_salary_by_male. Convenient, isn't it?

Lambda usage & examples

[**Lambda**](https://docs.python.org/3/reference/expressions.html#lambda) is a Python keyword used to define anonymous functions. Keep the following points in mind when using lambda anonymous functions:

* It can only contain expressions and can't include statements in its body.
* It is written as a single line of execution.
* It does not support type annotations.
* It can be immediately invoked ([IIFE](https://en.wikipedia.org/wiki/Immediately_invoked_function_expression)).

No Statements

A lambda function can't contain any statements. In a lambda function, statements like `return`, `pass`, `assert`, or `raise` will raise a [**SyntaxError**](https://realpython.com/invalid-syntax-python/) exception. Here's an example of adding assert to the body of a lambda:

```python
>>> (lambda x: assert x == 2)(2)
  File "<stdin>", line 1
    (lambda x: assert x == 2)(2)
SyntaxError: invalid syntax
```

Single Expression

In contrast to a normal function, a Python lambda function is a single expression. Although, in the body of a lambda, you can spread the expression over several lines using parentheses or a multiline string, it remains a single expression:

```python
>>> (lambda x:
...     (x % 2 and 'odd' or 'even'))(3)
'odd'
```

The example above returns the string 'odd' when the lambda argument is odd, and 'even' when the argument is even. It spreads across two lines because it is contained in a set of parentheses, but it remains a single expression.

Type Annotations

If you've started adopting type hinting, which is now available in Python, then you have another good reason to prefer normal functions over Python lambda functions. Check out [**Python Type Checking (Guide)**](https://realpython.com/python-type-checking/#hello-types) to learn more about Python type hints and type checking.
In a lambda function, there is no equivalent for the following:

```python
def full_name(first: str, last: str) -> str:
    return f'{first.title()} {last.title()}'
```

Any type error with full_name() can be caught by tools like [**mypy**](http://mypy-lang.org/) or [**pyre**](https://pyre-check.org/), whereas a [**SyntaxError**](https://realpython.com/invalid-syntax-python/) with the equivalent lambda function is raised at runtime:

```python
>>> lambda first: str, last: str: first.title() + " " + last.title() -> str
  File "<stdin>", line 1
    lambda first: str, last: str: first.title() + " " + last.title() -> str
SyntaxError: invalid syntax
```

Like trying to include a statement in a lambda, adding type annotations immediately results in a [**SyntaxError**](https://realpython.com/invalid-syntax-python/) at runtime.

IIFE

You've already seen several examples of [immediately invoked function execution](https://developer.mozilla.org/en-US/docs/Glossary/IIFE):

```python
>>> (lambda x: x * x)(3)
9
```

It's a direct consequence of a lambda function being callable as soon as it is defined. For example, this allows you to pass the definition of a Python lambda expression to a higher-order function like [map()](https://docs.python.org/3/library/functions.html#map), [filter()](https://docs.python.org/3/library/functions.html#filter), or [**functools**.reduce()](https://docs.python.org/3/library/functools.html#functools.reduce), or to a key function.

Functions often used with lambda: map/filter/functools.reduce and key functions

* Built-in map/filter & functools.reduce
* Key functions

A quick look at the built-in map/filter & functools.reduce

[map](https://docs.python.org/3/library/functions.html#map) and [filter](https://docs.python.org/3/library/functions.html#filter) are available in Python by default, while [reduce](https://docs.python.org/3/library/functools.html#functools.reduce) has to be imported from the [**functools**](https://docs.python.org/3/library/functools.html) module.
Here is a diagram of how these three functions work ([image source](https://www.reddit.com/r/ProgrammerHumor/comments/55ompo/map_filter_reduce_explained_with_emojis/)):

![map/filter/reduce usage](images/3.PNG)

To me, [reduce](https://docs.python.org/3/library/functools.html#functools.reduce) looks more like:

![map/filter/reduce usage](images/4.PNG)

Let's look at a few examples:
class Beef: def __init__(self): self.is_veg = False def cook(self): return 'Hamburger' class Potato: def __init__(self): self.is_veg = True def cook(self): return 'French Fries' class Chicken: def __init__(self): self.is_veg = False def cook(self): return 'Fried chicken' class Corn: def __init__(self): self.is_veg = True def cook(self): return 'Popcorn' food_ingredients = [Beef(), Potato(), Chicken(), Corn()]
map example

[map](https://docs.python.org/3/library/functions.html#map) takes a function and an iterable object (further reading: [The Iterator Protocol](https://www.pythonmorsels.com/iterator-protocol/)). It passes each element of the iterable to the function you provide, collects the return values, and hands you back another iterable object. In the example below:

* **function**: `lambda food_ingredient: food_ingredient.cook()`
* **iterable object**: `food_ingredients`
# map(function, iterable, ...): # Return an iterator that applies function to every item of iterable, yielding the results. map_iter = map(lambda food_ingredient: food_ingredient.cook(), food_ingredients) isinstance(map_iter, Iterable) # map_iter is an iterable object. list(map_iter)
filter example

The [filter](https://docs.python.org/3/library/functions.html#filter) function uses the function you provide to select elements from the input iterable (elements for which the function returns True are kept).
# filter(function, iterable):
#   Construct an iterator from those elements of iterable for which function returns true.
veg_iter = filter(lambda food_ingredient: food_ingredient.is_veg, food_ingredients)

isinstance(veg_iter, Iterable)  # veg_iter is an iterable object.

# Only Potato and Corn are selected, because their `is_veg` == True
list(veg_iter)
reduce example

The behavior of reduce is easier to show than to describe, so let's go straight to an example:
from functools import reduce # If initializer is not given, the first item of iterable object is returned. f = lambda a, b: a+b reduce( f, [1, 2, 3, 4, 5] )
The execution above can be pictured as:

![reduce flow](https://nbviewer.org/github/johnklee/oo_dp_lesson/blob/master/lessons/Test_and_function_programming_in_Python/images/fp_6.PNG)

In fact, you can also provide an initial value, for example:
reduce( lambda a, b: a+b, [1, 2, 3, 4, 5], 10, )
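What `reduce(f, iterable, initializer)` does can be spelled out as a plain loop. Here is a sketch of the equivalence (`reduce_as_loop` is my own illustrative helper, not part of `functools`):

```python
from functools import reduce

def reduce_as_loop(f, iterable, initializer):
    # Accumulate left-to-right, exactly as functools.reduce does
    # when an initializer is supplied.
    acc = initializer
    for item in iterable:
        acc = f(acc, item)
    return acc

data = [1, 2, 3, 4, 5]
builtin_result = reduce(lambda a, b: a + b, data, 10)   # 10+1+2+3+4+5 = 25
loop_result = reduce_as_loop(lambda a, b: a + b, data, 10)
```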
For more on this function, see [**Python's reduce(): From Functional to Pythonic Style**](https://realpython.com/python-reduce-function/).

A quick look at key functions

Key functions in Python are higher-order functions that take a parameter `key` as a named argument. Many Python functions accept a `key` parameter, and it is one of the common places to use a lambda, for example:

* [sort()](https://docs.python.org/3/library/stdtypes.html#list.sort): list method
* [sorted()](https://docs.python.org/3/library/functions.html#sorted), [min()](https://docs.python.org/3/library/functions.html#min), [max()](https://docs.python.org/3/library/functions.html#max): built-in functions
* [nlargest()](https://docs.python.org/3/library/heapq.html#heapq.nlargest) and [nsmallest()](https://docs.python.org/3/library/heapq.html#heapq.nsmallest): in the heap queue algorithm module [**heapq**](https://docs.python.org/3/library/heapq.html)

Let's walk through a few examples to understand the usage.

sorted

Suppose you have the following list:
ids = ['id1', 'id2', 'id100', 'id30', 'id3', 'id22']
You want to sort the ids by their numeric part in ascending order; this is where [sorted](https://docs.python.org/3/library/functions.html#sorted) comes in handy:

* **sorted(iterable, /, *, key=None, reverse=False)**: Return a new sorted list from the items in iterable.
sorted(
    ids,
    key=lambda id_str: int(id_str[2:]),  # the value used for comparison
)
Once you understand one key function, the others work the same way. For example, retrieving the id with the largest numeric part becomes:
max( ids, key=lambda id_str: int(id_str[2:]))
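The `heapq.nlargest()`/`nsmallest()` functions listed earlier accept the same kind of `key`; a quick sketch using the same `ids` list:

```python
import heapq

ids = ['id1', 'id2', 'id100', 'id30', 'id3', 'id22']

# Two ids with the largest numeric parts, in descending order.
top_two = heapq.nlargest(2, ids, key=lambda id_str: int(id_str[2:]))

# The single id with the smallest numeric part.
smallest = heapq.nsmallest(1, ids, key=lambda id_str: int(id_str[2:]))
```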
Acceptable situations for using lambda

Below is the readability documentation's advice on lambda usage:

* [**2.10 Lambda Functions**](https://engdoc.corp.google.com/eng/doc/devguide/py/style/index.md?cl=head#lambdas)

> Okay to use them for one-liners. If the code inside the lambda function is longer than 60-80 chars, it's probably better to define it as a regular [nested function](https://engdoc.corp.google.com/eng/doc/devguide/py/style/index.md?cl=head#lexical-scoping).

> For common operations like multiplication, use the functions from the operator module instead of lambda functions. For example, prefer [**operator**.mul](https://docs.python.org/3/library/operator.html#operator.mul) to `lambda x, y: x * y`.

Alternatives to Lambdas

Personally, I don't specifically flag lambda usage in readability reviews, but I have received review comments suggesting other ways to replace lambdas. Let's look at a few examples.

Map

**The built-in function [map()](https://docs.python.org/3/library/functions.html#map) takes a function as a first argument and applies it to each of the elements of its second argument, an iterable**. Examples of iterables are strings, lists, and tuples. For more information on iterables and iterators, check out [**Iterables and Iterators**](https://realpython.com/lessons/looping-over-iterables/).

[map()](https://docs.python.org/3/library/functions.html#map) returns an iterator corresponding to the transformed collection. As an example, if you wanted to transform a list of strings to a new list with each string capitalized, you could use [map()](https://docs.python.org/3/library/functions.html#map), as follows:
# Map example list(map(lambda x: x.capitalize(), ['cat', 'dog', 'cow'])) # Proposed way in using list comprehension [w.capitalize() for w in ['cat', 'dog', 'cow']]
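The style-guide advice quoted earlier (prefer `operator` functions to trivial lambdas) looks like this in practice:

```python
import operator
from functools import reduce

# lambda x, y: x * y  ->  operator.mul
product = reduce(operator.mul, [1, 2, 3, 4])   # 24

# key=lambda t: t[1]  ->  operator.itemgetter(1)
pairs = [('a', 3), ('b', 1), ('c', 2)]
by_second = sorted(pairs, key=operator.itemgetter(1))
```

Using `operator` functions avoids defining a throwaway lambda and usually reads better.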
Filter

The built-in function [filter()](https://docs.python.org/3/library/functions.html#filter), another classic functional construct, can be converted into a list comprehension. It takes a [predicate](https://en.wikipedia.org/wiki/Predicate_(mathematical_logic)) as a first argument and an iterable as a second argument. It builds an iterator containing all the elements of the initial collection that satisfy the predicate function. Here's an example that filters all the even numbers in a given list of integers:
# Filter example even = lambda x: x%2 == 0 list(filter(even, range(11))) # Proposed way in using list comprehension [x for x in range(11) if x%2 == 0]
Reduce

Since Python 3, [reduce()](https://docs.python.org/3/library/functools.html#functools.reduce) has gone from a built-in function to a [**functools**](https://docs.python.org/3/library/functools.html) module function. As with [map()](https://docs.python.org/3/library/functions.html#map) and [filter()](https://docs.python.org/3/library/functions.html#filter), its first two arguments are respectively a function and an iterable. It may also take an initializer as a third argument that is used as the initial value of the resulting accumulator. For each element of the iterable, [reduce()](https://docs.python.org/3/library/functools.html#functools.reduce) applies the function and accumulates the result that is returned when the iterable is exhausted.

To apply [reduce()](https://docs.python.org/3/library/functools.html#functools.reduce) to a list of pairs and calculate the sum of the first item of each pair, you could write this:

```python
>>> import functools
>>> pairs = [(1, 'a'), (2, 'b'), (3, 'c')]
>>> functools.reduce(lambda acc, pair: acc + pair[0], pairs, 0)
6
```

A more idiomatic approach using a [generator expression](https://www.python.org/dev/peps/pep-0289/), as an argument to [sum()](https://docs.python.org/3/library/functions.html#sum) in the example, is the following:
pairs = [(1, 'a'), (2, 'b'), (3, 'c')] sum(x[0] for x in pairs) generator = (x[0] for x in pairs) generator iterator = pairs.__iter__() iterator
For an introduction to generators and iterators, see "[**How to Use Generators and yield in Python**](https://realpython.com/introduction-to-python-generators/)" and "[**The Python `for` loop**](https://realpython.com/python-for-loop/#the-python-for-loop)".

Usages that get flagged in review

* g-long-lambda
* unnecessary-lambda

**The next sections illustrate a few examples of lambda usages that should be avoided**. Those examples might be situations where, in the context of Python lambda, the code exhibits the following pattern:

* It doesn't follow the Python style guide ([PEP 8](https://peps.python.org/pep-0008/)).
* It's cumbersome and difficult to read.
* It's unnecessarily clever at the cost of difficult readability.

g-long-lambda

> ([link](go/gpylint-faq#g-long-lambda)) Used when a tricky functional-programming construct may be too long.

* Negative:

```python
users = [
    {'name': 'John', 'age': 40, 'sex': 1},
    {'name': 'Ken', 'age': 26, 'sex': 0},
    ...
]

sorted_users = sorted(
    users,
    key=lambda u: (u['age'], u['sex']) if is_employee(u) else (u['age'], u['name']),
)
```

unnecessary-lambda

> ([link](go/gpylint-faq#unnecessary-lambda)) Lambda may not be necessary

* Negative:

```python
foo = {'x': 1, 'y': 2}
self.mock_fn_that_returns_dict = lambda: foo.copy()
```

* Example:

```python
foo = {'x': 1, 'y': 2}
self.mock_fn_that_returns_dict = foo.copy
```

A brief introduction to FPU

* Functional composition
* Built-in filter/map/reduce in collection objects

[**fpu**](https://github.com/johnklee/fpu) (Functional programming utility) is a Python package I maintain to improve Python's support for FP. Here are a few examples of the benefits it brings.

Functional composition

Function composition (further reading: "[**Function composition and lazy execution**](https://ithelp.ithome.com.tw/articles/10235556)") lets you conveniently chain functions together to produce new functions. Suppose you have the following code:
data_set = [{'values':[1, 2, 3]}, {'values':[4, 5]}] # Imperative def min_max_imp(data_set): """Picks up the maximum of each element and calculate the minimum of them.""" max_list = [] for d in data_set: max_list.append(max(d['values'])) return min(max_list) # Max of [1, 2, 3] -> [3], max of [4, 5] -> [5] => Got [3, 5] # Min of [3, 5] => 3 min_max_imp(data_set)
This is really the composition of two functions, min and max. With FPU, you can rewrite it as:
# FP from fpu.fp import * from functools import reduce, partial # compose2(f, g) = f(g()) min_max = compose2( partial(reduce, min), # [3, 5] -> [3] partial(map, lambda d: max(d['values']))) # [{'values':[1, 2, 3]}, {'values':[4, 5]}] -> [3, 5] min_max(data_set)
Built-in filter/map/reduce in collection objects

FPU's collection objects come with filter/map/reduce methods built in. Consider the following problem:
# Find the characters that appear in every element of the list.
arr = ['abcdde', 'baccd', 'eeabg']

def gemstones_imp(arr):
    # 1) Collect the unique characters of each element
    set_list = []
    for s in arr:
        set_list.append(set(list(s)))

    # 2) Keep intersecting to find the characters shared by all sets
    uset = set_list[0]
    for aset in set_list[1:]:
        uset = uset & aset

    return ''.join(uset)

gemstones_imp(arr)
Rewritten with FPU, it becomes:
from fpu.flist import *

def gemstones_dec(arr):
    rlist = fl(arr)
    return ''.join(
        rlist.map(
            # Turn each element into a set of characters
            lambda e: set(list(e))
        ).reduce(
            # Intersect the sets one by one to keep the shared characters
            lambda a, b: a & b
        )
    )

gemstones_dec(arr)
Inheritance with the Gaussian Class

To give another example of inheritance, take a look at the code in this Jupyter notebook. The Gaussian distribution code is refactored into a generic Distribution class and a Gaussian distribution class. Read through the code in this Jupyter notebook to see how the code works.

The Distribution class takes care of the initialization and the read_data_file method. Then the rest of the Gaussian code is in the Gaussian class. You'll later use this Distribution class in an exercise at the end of the lesson.

Run the code in each cell of this Jupyter notebook. This is a code demonstration, so you do not need to write any code.
class Distribution: def __init__(self, mu=0, sigma=1): """ Generic distribution class for calculating and visualizing a probability distribution. Attributes: mean (float) representing the mean value of the distribution stdev (float) representing the standard deviation of the distribution data_list (list of floats) a list of floats extracted from the data file """ self.mean = mu self.stdev = sigma self.data = [] def read_data_file(self, file_name): """Function to read in data from a txt file. The txt file should have one number (float) per line. The numbers are stored in the data attribute. Args: file_name (string): name of a file to read from Returns: None """ with open(file_name) as file: data_list = [] line = file.readline() while line: data_list.append(int(line)) line = file.readline() file.close() self.data = data_list import math import matplotlib.pyplot as plt class Gaussian(Distribution): """ Gaussian distribution class for calculating and visualizing a Gaussian distribution. Attributes: mean (float) representing the mean value of the distribution stdev (float) representing the standard deviation of the distribution data_list (list of floats) a list of floats extracted from the data file """ def __init__(self, mu=0, sigma=1): Distribution.__init__(self, mu, sigma) def calculate_mean(self): """Function to calculate the mean of the data set. Args: None Returns: float: mean of the data set """ avg = 1.0 * sum(self.data) / len(self.data) self.mean = avg return self.mean def calculate_stdev(self, sample=True): """Function to calculate the standard deviation of the data set. 
Args: sample (bool): whether the data represents a sample or population Returns: float: standard deviation of the data set """ if sample: n = len(self.data) - 1 else: n = len(self.data) mean = self.calculate_mean() sigma = 0 for d in self.data: sigma += (d - mean) ** 2 sigma = math.sqrt(sigma / n) self.stdev = sigma return self.stdev def plot_histogram(self): """Function to output a histogram of the instance variable data using matplotlib pyplot library. Args: None Returns: None """ plt.hist(self.data) plt.title('Histogram of Data') plt.xlabel('data') plt.ylabel('count') def pdf(self, x): """Probability density function calculator for the gaussian distribution. Args: x (float): point for calculating the probability density function Returns: float: probability density function output """ return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2) def plot_histogram_pdf(self, n_spaces = 50): """Function to plot the normalized histogram of the data and a plot of the probability density function along the same range Args: n_spaces (int): number of data points Returns: list: x values for the pdf plot list: y values for the pdf plot """ mu = self.mean sigma = self.stdev min_range = min(self.data) max_range = max(self.data) # calculates the interval between x values interval = 1.0 * (max_range - min_range) / n_spaces x = [] y = [] # calculate the x values to visualize for i in range(n_spaces): tmp = min_range + interval*i x.append(tmp) y.append(self.pdf(tmp)) # make the plots fig, axes = plt.subplots(2,sharex=True) fig.subplots_adjust(hspace=.5) axes[0].hist(self.data, density=True) axes[0].set_title('Normed Histogram of Data') axes[0].set_ylabel('Density') axes[1].plot(x, y) axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation') axes[0].set_ylabel('Density') plt.show() return x, y def __add__(self, other): """Function to add together two Gaussian distributions Args: other (Gaussian): Gaussian 
instance Returns: Gaussian: Gaussian distribution """ result = Gaussian() result.mean = self.mean + other.mean result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2) return result def __repr__(self): """Function to output the characteristics of the Gaussian instance Args: None Returns: string: characteristics of the Gaussian """ return "mean {}, standard deviation {}".format(self.mean, self.stdev) # initialize two gaussian distributions gaussian_one = Gaussian(25, 3) gaussian_two = Gaussian(30, 2) # initialize a third gaussian distribution reading in a data efile gaussian_three = Gaussian() gaussian_three.read_data_file('numbers.txt') gaussian_three.calculate_mean() gaussian_three.calculate_stdev() # print out the mean and standard deviations print(gaussian_one.mean) print(gaussian_two.mean) print(gaussian_one.stdev) print(gaussian_two.stdev) print(gaussian_three.mean) print(gaussian_three.stdev) # plot histogram of gaussian three gaussian_three.plot_histogram_pdf() # add gaussian_one and gaussian_two together gaussian_one + gaussian_two
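The `__add__` method above uses the rule that variances of independent Gaussians add, so the standard deviations combine in quadrature. A quick numeric check of that rule with the two stdevs used in this notebook (3 and 2):

```python
import math

stdev_one, stdev_two = 3.0, 2.0

# Var(X + Y) = Var(X) + Var(Y) for independent X, Y,
# so stdev(X + Y) = sqrt(stdev_x**2 + stdev_y**2).
combined = math.sqrt(stdev_one ** 2 + stdev_two ** 2)   # sqrt(9 + 4) = sqrt(13)
```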
MIT
Assignment 7/inheritance_probability_distribution.ipynb
Sameer411/AWS-ML-Course-Assignments
SWI - single layer test case - strange behaviour of the output control package

When requesting both budget and head data via the OC package, the solution differs from when only the head is requested. This is set via the 'words' parameter in the OC package.
%matplotlib inline
import os
import sys
import numpy as np
import flopy.modflow as mf
import flopy.utils as fu
import matplotlib.pyplot as plt

os.chdir('C:\\Users\\Bas\\Google Drive\\USGS\\FloPy\\slope1D')
sys.path.append('C:\\Users\\Bas\\Google Drive\\USGS\\FloPy\\basScript')  # location of gridObj

modelname = 'run1swi2'
exe_name = 'mf2005'
workspace = 'data'

ml = mf.Modflow(modelname, exe_name=exe_name, model_ws=workspace)

nstp = 10000     # []
perlen = 10000   # [d]
ssz = 0.2        # []
Q = 0.005        # [m3/d]

nlay = 1
nrow = 1
ncol = 4
delr = 1.
delc = 1.
dell = 1.

top = np.array([[-1., -1., -0.7, -0.4]], dtype=np.float32)
bot = np.array(top - dell, dtype=np.float32).reshape((nlay, nrow, ncol))
initWL = 0.  # initial water level

lrcQ1 = np.recarray(1, dtype=mf.ModflowWel.get_default_dtype())
lrcQ1[0] = (0, 0, ncol - 1, Q)  # LRCQ, Q[m**3/d]

lrchc = np.recarray(2, dtype=mf.ModflowGhb.get_default_dtype())
lrchc[0] = (0, 0, 0, -top[0, 0] * 0.025, 0.8 / 2.0 * delc)
lrchc[1] = (0, 0, 1, -top[0, 0] * 0.025, 0.8 / 2.0 * delc)

lrchd = np.recarray(2, dtype=mf.ModflowChd.get_default_dtype())
lrchd[0] = (0, 0, 0, -top[0, 0] * 0.025, -top[0, 0] * 0.025)
lrchd[1] = (0, 0, 1, -top[0, 0] * 0.025, -top[0, 0] * 0.025)

zini = -0.9 * np.ones((nrow, ncol))
isource = np.array([[-2, -2, 0, 0]])

ml = mf.Modflow(modelname, version='mf2005', exe_name=exe_name)
discret = mf.ModflowDis(ml, nrow=nrow, ncol=ncol, nlay=nlay, delr=delr, delc=delc,
                        laycbd=[0], top=top, botm=bot, nper=1, perlen=perlen, nstp=nstp)
bas = mf.ModflowBas(ml, ibound=1, strt=(initWL - zini) * 0.025)
bcf = mf.ModflowBcf(ml, laycon=[0], tran=[4.0])
wel = mf.ModflowWel(ml, stress_period_data={0: lrcQ1})
#ghb = mf.ModflowGhb(ml, stress_period_data={0: lrchc})
chd = mf.ModflowChd(ml, stress_period_data={0: lrchd})
swi = mf.ModflowSwi2(ml, nsrf=1, istrat=1, toeslope=0.02, tipslope=0.04, nu=[0, 0.025],
                     zeta=[zini], ssz=ssz, isource=isource, nsolver=1)
oc = mf.ModflowOc(ml, save_head_every=nstp)
pcg = mf.ModflowPcg(ml)

ml.write_input()  # write the model files
m = ml.run_model(silent=True, report=True)

headfile = modelname + '.hds'
hdobj = fu.HeadFile(headfile)
head = hdobj.get_data(idx=0)

zetafile = modelname + '.zta'
zobj = fu.CellBudgetFile(zetafile)
zeta = zobj.get_data(idx=0, text=' ZETASRF 1')[0]

print('isource:       ', swi.isource.array)
print('init zeta:     ', swi.zeta[0].array)
print('init fresh hd: ', bas.strt.array)
print('final head:    ', head[0, 0, :])
print('final zeta:    ', zeta[0, 0, :])
print('final BGH head:', -40. * head[0, 0, :])

import gridobj as grd
gr = grd.gridobj(discret)

fig = plt.figure(figsize=(16, 8), dpi=300, facecolor='w', edgecolor='k')
ax = fig.add_subplot(111)
gr.plotgrLC(ax)
gr.plothdLC(ax, zini[0, :], label='Initial')
gr.plothdLC(ax, zeta[0, 0, :], label='SWI2')
gr.plothdLC(ax, head[0, 0, :], label='freshw head')
gr.plothdLC(ax, -40. * head[0, 0, :], label='Ghyben-Herzberg')
ax.axis(gr.limLC([-0.2, 0.2, -0.2, 0.2]))
leg = ax.legend(loc='lower left', numpoints=1)
leg._drawFrame = False
MIT
SWI1D/4cell1_woOcBgt.ipynb
bdestombe/SWItest
Bank customers clustering project

This dataset contains data on 5000 customers. The data include customer demographic information (age, income, etc.), the customer's relationship with the bank (mortgage, securities account, etc.), and the customer response to the last personal loan campaign (Personal Loan). Among these 5000 customers, only 480 (= 9.6%) accepted the personal loan that was offered to them in the earlier campaign.

The dataset has a mix of numerical and categorical attributes, but all categorical data are represented with numbers. Moreover, some of the predictor variables are heavily skewed (long-tailed), making the data pre-processing an interesting yet not too challenging aspect of the data.
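The 9.6 % figure quoted above is just 480/5000. A quick sanity check of that class balance, using a stand-in list rather than the real CSV:

```python
# Stand-in for the Personal Loan column: 480 acceptances out of 5000, as described.
loans = [1] * 480 + [0] * 4520

acceptance_rate = sum(loans) / len(loans)
percent = round(acceptance_rate * 100, 1)   # 9.6
```

Such a strong imbalance is worth keeping in mind when interpreting any model built on this target.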
import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns from matplotlib import cm df=pd.read_csv("Bank_Personal_Loan_Modelling.csv") df.head()
MIT
clustering_k_means_bank_loans.ipynb
Filipos27/KMeans_clustering
Column information:

- ID - Customer id
- Age - Customer's age
- Experience - Number of years of professional experience
- Income - Annual income of the customer (x1000 USD)
- ZIP Code - Home address ZIP code
- Family - Family size of the customer
- CCAvg - Avg. spending on credit cards per month (x1000 USD)
- Education - Education level. 1: Undergrad; 2: Graduate; 3: Advanced/Professional
- Mortgage - Value of house mortgage if any (x1000 USD)
- Personal Loan - Did this customer accept the personal loan offered in the last campaign?
- Securities Account - Does the customer have a securities account with the bank? (1-yes, 0-no)
- CD Account - Does the customer have a certificate of deposit (CD) account with the bank? (1-yes, 0-no)
- Online - Does the customer use internet banking facilities? (1-yes, 0-no)
- CreditCard - Does the customer use a credit card issued by this bank? (1-yes, 0-no)
#renaming columns df.columns=['id', 'age', 'experience', 'income', 'zip_code', 'family', 'cc_avg', 'education', 'mortgage', 'personal_loan', 'securities_account', 'cd_account', 'online', 'credit_card'] df2=df.copy() #Converting values df2["income"]=df["income"]*1000 df2["cc_avg"]=df["cc_avg"]*1000 df2["mortgage"]=df["mortgage"]*1000 df2.head()
Dataset exploring
df2.shape df2.info() df2["income"].describe() #visualize outliers with boxplot plt.boxplot(df['income']) # Upper outlier threshold Q3 + 1.5(IQR) max_threshold=98000 + 1.5*(98000 - 39000) max_threshold # Removing outliers df3=df2[df2.income<max_threshold] # recalculate summary statistics df3['income'].describe() df3["cc_avg"].describe() #visualize outliers with boxplot plt.boxplot(df['cc_avg']) # Upper outlier threshold Q3 + 1.5(IQR) max_threshold=2500+ 1.5*(2500 - 700) max_threshold # Removing outliers df4=df3[df3.cc_avg<max_threshold] # recalculate summary statistics df4['cc_avg'].describe() df4["mortgage"].describe() df4["mortgage"].value_counts() df4.shape
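The hard-coded thresholds above follow the usual Tukey fence, Q3 + 1.5·IQR. A reusable sketch of the same rule (the helper name is my own):

```python
import numpy as np

def iqr_upper_threshold(values):
    """Upper outlier fence: Q3 + 1.5 * (Q3 - Q1)."""
    q1, q3 = np.percentile(values, [25, 75])
    return q3 + 1.5 * (q3 - q1)

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 100]
fence = iqr_upper_threshold(data)          # 8.5 + 1.5 * 5 = 16.0
kept = [v for v in data if v < fence]      # the value 100 is dropped
```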
Data visualization
# Plotting scatterplot
title = 'Income by years of experience'
plt.figure(figsize=(12, 9))
sns.scatterplot(df4.experience, df4.income, hue=df4.experience).set_title(title)
plt.ioff()

# Bar plot of average income by education
df4.groupby('education')["income"].mean().plot.bar(color=['red', 'cyan', 'magenta'])
plt.show()

# Count customers with a personal loan by family size
count_delayed = df4.groupby('family')['personal_loan'].apply(lambda x: (x == 1).sum()).reset_index(name='Number of customer with personal loan')
color = cm.viridis(np.linspace(.4, .8, 30))
count_delayed = count_delayed.sort_values("Number of customer with personal loan", ascending=[False])
count_delayed.plot.bar(x='family', y='Number of customer with personal loan', color=color, figsize=(12, 7))

# Histogram of mortgages for customers younger than 35
df4[df4.age < 35]["mortgage"].plot.hist(histtype="step")
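The groupby-with-lambda pattern used above to count loan holders per family size can be checked on a toy frame (made-up values, column names matching the notebook; `loans` is a hypothetical result column):

```python
import pandas as pd

# Toy stand-in for df4's `family` and `personal_loan` columns
toy = pd.DataFrame({
    "family":        [1, 1, 2, 2, 3, 3, 4],
    "personal_loan": [1, 0, 1, 1, 0, 0, 1],
})

# Same pattern as the notebook: count rows where personal_loan == 1 per group
counts = (toy.groupby("family")["personal_loan"]
             .apply(lambda x: (x == 1).sum())
             .reset_index(name="loans"))
```

Since `personal_loan` is 0/1, `(x == 1).sum()` is simply the number of loan holders in each family-size group.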
Almost 700 customers have a mortgage between $0 and $50,000.

Preparing features
features=df4[["age","experience","income","cc_avg"]]
Scaling features
# min-max scaling to the [0, 1] range
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
data_scaled_array = scaler.fit_transform(features)
scaled = pd.DataFrame(data_scaled_array, columns=features.columns)
scaled.head()
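Min-max scaling maps each feature to [0, 1] via (x - min) / (max - min), computed per column. A quick sanity check of that formula against `MinMaxScaler` on toy numbers (not the bank features):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

x = np.array([[10.0], [20.0], [40.0]])

manual = (x - x.min()) / (x.max() - x.min())  # [[0.], [1/3], [1.]]
fitted = MinMaxScaler().fit_transform(x)      # same result
```

Scaling matters here because `income` and `cc_avg` are in dollars while `age` and `experience` are in years; without it, the dollar columns would dominate the Euclidean distances KMeans uses.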
KMeans clustering
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
import warnings
warnings.filterwarnings("ignore")

#finding the sum of squared distances between each point and its cluster centroid
k_range = range(1, 10)
sse = []
for k in k_range:
    km = KMeans(n_clusters=k)
    km.fit(scaled)
    sse.append(km.inertia_)
sse

#Plotting the KMeans elbow curve
plt.plot(k_range, sse, 'bx-')
plt.xlabel('k')
plt.ylabel('Sum_of_squared_distances')
plt.title('Elbow Method For Optimal k')
plt.show()

#Plotting the silhouette score across cluster counts and random seeds
clusters_range = range(2, 15)
random_range = range(0, 20)
results = []
for c in clusters_range:
    for r in random_range:
        clusterer = KMeans(n_clusters=c, random_state=r)
        cluster_labels = clusterer.fit_predict(scaled)
        silhouette_avg = silhouette_score(scaled, cluster_labels)
        results.append([c, r, silhouette_avg])

result = pd.DataFrame(results, columns=["n_clusters", "seed", "silhouette_score"])
pivot_km = pd.pivot_table(result, index="n_clusters", columns="seed", values="silhouette_score")

plt.figure(figsize=(15, 6))
sns.heatmap(pivot_km, annot=True, linewidths=.5, fmt='.3f', cmap=sns.cm.rocket_r)
plt.tight_layout()
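The elbow and silhouette diagnostics above can be exercised end-to-end on synthetic blobs. A sketch, assuming `make_blobs` stands in for the scaled bank features (`scaled` would replace `X` in the notebook):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)

# Inertia (the notebook's "sum of squared distances") shrinks as k grows
sse = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
       for k in range(1, 6)]

# Silhouette score lies in (-1, 1]; higher means tighter, better-separated clusters
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
score = silhouette_score(X, labels)
```

On well-separated blobs the elbow appears at k = 3 and the silhouette score is clearly positive, which is the pattern the heatmap above is meant to reveal for the customer features.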