Compute the information entropy
def entropy(y):
    precs = np.array(list(Counter(y).values()))/len(y)
    ent = np.sum(-1 * precs * np.log(precs))
    return ent

entropy(y_train)
_____no_output_____
MIT
DecisionTree/MyDecisionTree.ipynb
QYHcrossover/ML-numpy
Decide which feature to split on
def decide_feature(X, y, feature_order):
    n_features = X.shape[-1]
    ents = (feature_order != -1).astype(np.float64)
    for i in range(n_features):
        if feature_order[i] >= 0:
            continue
        for feature, size in Counter(X[:,i]).items():
            index = (X[:,i] == feature)
            splity =...
_____no_output_____
MIT
DecisionTree/MyDecisionTree.ipynb
QYHcrossover/ML-numpy
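The truncated `decide_feature` cell above picks the split feature by information gain, i.e. the drop in entropy after partitioning on each candidate feature. A minimal stdlib-only sketch of that computation (function and variable names here are illustrative, not the notebook's):

```python
from collections import Counter
from math import log

def entropy(labels):
    # H(Y) = -sum_y p(y) * log p(y), natural log as in the notebook
    n = len(labels)
    return -sum((c / n) * log(c / n) for c in Counter(labels).values())

def information_gain(feature_column, labels):
    # Gain = H(Y) - sum_v (|Y_v|/|Y|) * H(Y_v), one subset Y_v per feature value v
    n = len(labels)
    cond = 0.0
    for value, count in Counter(feature_column).items():
        subset = [y for x, y in zip(feature_column, labels) if x == value]
        cond += (count / n) * entropy(subset)
    return entropy(labels) - cond

# toy data: feature 0 perfectly separates the labels, feature 1 is noise
X = [(0, 1), (0, 0), (1, 1), (1, 0)]
y = [0, 0, 1, 1]
gains = [information_gain([row[i] for row in X], y) for i in range(2)]
best = max(range(2), key=lambda i: gains[i])  # -> 0
```

Splitting on the informative feature recovers the full label entropy (log 2 in nats); the noise feature gains nothing.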
Build the decision tree
def build_tree(X, y, feature_order):
    curent = entropy(y)
    counts = dict(Counter(y))
    if len(counts) == 1 or min(feature_order) == 0:
        result = max(counts, key=counts.get)
        return {"counts":counts, "result":result}
    fi, ent = decide_feature(X, y, feature_order)
    feature_order[fi] = max(feature_ord...
yes 6 no 8 yes 2 no 6
MIT
DecisionTree/MyDecisionTree.ipynb
QYHcrossover/ML-numpy
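The `build_tree` cell above is truncated; the recursive structure it sketches (a dict node with `counts`, `result`, the chosen `feature`, and one `next` subtree per feature value) can be shown end-to-end with a small stdlib sketch. Note the feature choice here is simplified to "first unused feature"; the notebook's version delegates to `decide_feature`:

```python
from collections import Counter

def build_tree_sketch(X, y, features):
    counts = Counter(y)
    # stop when the node is pure or no features remain; leaf stores majority label
    if len(counts) == 1 or not features:
        return {"counts": dict(counts), "result": counts.most_common(1)[0][0]}
    fi = features[0]  # simplification: real ID3 picks argmax information gain
    node = {"counts": dict(counts), "result": None, "feature": fi, "next": {}}
    for value in set(row[fi] for row in X):
        Xs = [row for row in X if row[fi] == value]
        ys = [lab for row, lab in zip(X, y) if row[fi] == value]
        node["next"][value] = build_tree_sketch(Xs, ys, features[1:])
    return node

X = [(0, 1), (0, 0), (1, 1), (1, 0)]
y = [0, 0, 1, 1]
tree = build_tree_sketch(X, y, [0, 1])
```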
Predict
x_test = X_test[0]
print(x_test)
while tree["result"] == None:
    feature = tree["feature"]
    nexttree = tree["next"][x_test[feature]]
    tree = nexttree
print(tree["result"])

class ID3DecisionTree:
    @staticmethod
    def entropy(y):
        precs = np.array(list(Counter(y).values()))/len(y)
        ent = np.sum...
_____no_output_____
MIT
DecisionTree/MyDecisionTree.ipynb
QYHcrossover/ML-numpy
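The prediction loop above just walks the nested dict until it hits a leaf (a node whose `result` is set). The same walk on a hand-built toy tree, with the node layout assumed from the cell above:

```python
def predict_sketch(tree, x):
    # descend: at each internal node, follow the branch matching x's feature value
    while tree["result"] is None:
        tree = tree["next"][x[tree["feature"]]]
    return tree["result"]

# hypothetical two-leaf tree splitting on feature 0
toy_tree = {
    "result": None, "feature": 0,
    "next": {0: {"result": "no"}, 1: {"result": "yes"}},
}
predict_sketch(toy_tree, (1,))  # -> 'yes'
```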
Genetic algorithm. The implementation uses the inver-over genetic operator to optimize the static sequence of debris based on the transference cost of the arcs. The implementation also uses **index_frozen** to model the debris that have already been deorbited.
class GA:
    def __init__(self, population, fn_fitness, subpath_fn_fitness=None):
        self.population = population
        self.index_frozen = -1
        self.fitnesses = [] # fitness for each individual in population
        self.fn_fitness = fn_fitness # fitness function for the whole path
        self.subpath_fn_fitness = subpath_fn...
_____no_output_____
MIT
ActiveDebrisRemoval.ipynb
jbrneto/active-debris-removal
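For readers unfamiliar with inver-over (Tao & Michalewicz): it builds a child by repeatedly inverting tour segments so that a city becomes adjacent to the city that follows it in another individual, which recombines adjacency information while always keeping a valid permutation. A simplified single-move sketch (not the notebook's implementation; the real operator iterates and mixes in random inversions):

```python
import random

def inver_over_step(tour, other, rng):
    # one simplified inver-over move: pick a city c in `tour`, find the city c2
    # that follows c in `other`, and invert a segment of `tour` so that c2
    # becomes adjacent to c; segment reversal preserves the permutation
    tour = list(tour)
    c = rng.choice(tour)
    i = tour.index(c)
    c2 = other[(other.index(c) + 1) % len(other)]
    j = tour.index(c2)
    lo, hi = sorted(((i + 1) % len(tour), j))
    tour[lo:hi + 1] = reversed(tour[lo:hi + 1])
    return tour

rng = random.Random(0)
child = inver_over_step([0, 1, 2, 3, 4], [0, 2, 4, 1, 3], rng)
```

Whatever segment is reversed, the child remains a tour over the same cities, which is the operator's key property for TSP-style encodings.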
Problem instance. The active debris removal problem is modeled as a complex variant of the Traveling Salesman Problem (TSP), the time-dependent TSP (TDTSP). The debris are the nodes and the dynamic transfer trajectories are the edges. The Max Open Walk is then used to find the optimized subpath.
class StaticDebrisTSP:
    mu = 398600800000000 # gravitational parameter of earth
    re = 6378100 # radius of earth

    def __init__(self, debris=[], weight_matrix=[], reward_matrix=[], path_size=0, population_size=100, epoch=None, hohmanncost=False):
        self.index_frozen = -1
        self.debris = debris # the debris cloud ...
_____no_output_____
MIT
ActiveDebrisRemoval.ipynb
jbrneto/active-debris-removal
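The class above carries Earth's gravitational parameter `mu` and radius `re` and a `hohmanncost` flag. As a reference point, the standard two-impulse Hohmann delta-v between circular, coplanar orbits can be computed directly from those constants; a sketch of that textbook formula (the notebook's actual cost model may be more elaborate, e.g. plane changes or Lambert arcs):

```python
from math import sqrt

MU = 398600800000000.0  # gravitational parameter of Earth, as in the class
RE = 6378100.0          # radius of Earth in meters, as in the class

def hohmann_dv(r1, r2):
    # total delta-v for a Hohmann transfer between circular coplanar orbits
    # of radii r1 and r2: one burn to enter the ellipse, one to circularize
    dv1 = sqrt(MU / r1) * (sqrt(2 * r2 / (r1 + r2)) - 1)
    dv2 = sqrt(MU / r2) * (1 - sqrt(2 * r1 / (r1 + r2)))
    return abs(dv1) + abs(dv2)

# e.g. between hypothetical debris altitudes of 800 km and 850 km
cost = hohmann_dv(RE + 800e3, RE + 850e3)
```

For nearby LEO altitudes like these the cost is on the order of tens of m/s, which is why sequencing the debris well matters so much for total propellant.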
Instance loading. The instances can be downloaded from the SATCAT site. A TXT file (TLE file) is needed to get the debris names, codes, and Kepler elements, and a CSV file for the debris RCS (reward).
deb_file = 'fengyun-1c-debris'
debris = pykep.util.read_tle(tle_file=deb_file+'.txt', with_name=True)
with open(deb_file+'.txt') as f:
    tle_string = ''.join(f.readlines())
tle_lines = tle_string.strip().splitlines()
tle_elements = [tle_lines[i:i + 3] for i in range(0, len(tle_lines), 3)] # split in array of debris d...
_____no_output_____
MIT
ActiveDebrisRemoval.ipynb
jbrneto/active-debris-removal
Solution. Here the actual solution is generated. An interpolated tree search is performed to extend the static solution to a time-dependent one.
start_epoch = "2021-06-11 00:06:09"
FMT = '%Y-%m-%d %H:%M:%S'
steps = int((24 * 60) / 10) * 7 # in days
step_size = timedelta(minutes=10)
removal_time = timedelta(days=1) # time taken to deorbit a debris
winsize = 10 # range for the kopt

for _ in range(10):
    t0 = datetime.datetime.now() # to track time elapsed
    epoc...
_____no_output_____
MIT
ActiveDebrisRemoval.ipynb
jbrneto/active-debris-removal
2. Feature Selection Model. Author: _Carlos Sevilla Salcedo (Updated: 18/07/2019)_. In this notebook we present the extension that includes a double sparsity in the model. The idea behind this modification is that, besides imposing sparsity in the latent features, we can also force sparsity in the input ...
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import math

np.random.seed(0)
N = 1000 # number of samples
D0 = 55 # input features
D1 = 3 # output features
myKc = 20
K = 2 # common latent variables
K0 = 3 # first view's latent variables
K1 = 3 # second view's latent variabl...
_____no_output_____
MIT
2. Sparse SSHIBA.ipynb
alexjorguer/SSHIBA
Once the data is generated we divide it into train and test in order to be able to test the performance of the model. After that, we can normalize the data.
from sklearn.model_selection import train_test_split
X_tr, X_tst, Y_tr, Y_tst = train_test_split(X0, X1, test_size=0.3, random_state = 31)

from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_tr = scaler.fit_transform(X_tr)
X_tst = scaler.transform(X_tst)
_____no_output_____
MIT
2. Sparse SSHIBA.ipynb
alexjorguer/SSHIBA
Training the model. Once the data is prepared we just have to feed it to the model. As the model has so many possibilities, we pass the data to the model in a particular structure, so that we know, for each view, whether the data is real, multilabel or categorical, as well as knowing whether we ...
import os
os.sys.path.append('lib')
import sshiba

myKc = 20 # number of latent features
max_it = int(5*1e4) # maximum number of iterations
tol = 1e-6 # tolerance of the stopping condition (abs(1 - L[-2]/L[-1]) < tol)
prune = 1 # whether to prune the irrelevant latent features
myModel =...
Iteration 1620 Lower Bound 473636.1 K 9 Model correctly trained. Convergence achieved Final L(Q): 473636.1 Final MSE 9.493
MIT
2. Sparse SSHIBA.ipynb
alexjorguer/SSHIBA
Visualization of the results: Lower Bound and MSE. Now that the model is trained we can plot the evolution of the lower bound throughout the iterations. This lower bound is calculated from the values of the variables the model is estimating and is the value we are maximizing, so it has to be ...
def plot_mse(mse):
    fig, ax = plt.subplots(figsize=(10, 4))
    ax.plot(mse, linewidth=2, marker='s', markersize=5, label='SSHIBA', markerfacecolor='red')
    ax.grid()
    ax.set_xlabel('Iteration')
    ax.set_ylabel('MSE')
    plt.legend()

def plot_L(L):
    fig, ax = plt.subplots(figsize=(10, 4))
    ax.plot(L, l...
_____no_output_____
MIT
2. Sparse SSHIBA.ipynb
alexjorguer/SSHIBA
Sparsity in matrix W. For the sake of this example the model has not automatically erased a feature whenever it is considered irrelevant; instead of deleting it, the model has simply learned that these features are less important. As we now have the different weights given to each feature based on their relevan...
q = myModel.q_dist
gamma = q.gamma_mean(0)

ax1 = plt.subplot(2, 1, 1)
plt.title('Feature selection analysis')
plt.hist(gamma, 100)
ax2 = plt.subplot(2, 1, 2)
plt.plot(gamma, '.')
plt.ylabel('gamma')
plt.xlabel('feature')
plt.show()
_____no_output_____
MIT
2. Sparse SSHIBA.ipynb
alexjorguer/SSHIBA
As we can see, the values that we randomly added to the original data are recognisable and can therefore easily be selected as relevant. We can also see the effect of the sparsity by looking at matrix $W$, where the features found to be irrelevant have lower values than the relevant ones.
pos_ord_var = np.argsort(gamma)[::-1]
plot_W(q.W[0]['mean'][pos_ord_var, :])
_____no_output_____
MIT
2. Sparse SSHIBA.ipynb
alexjorguer/SSHIBA
15. Calibration using matching sections. In notebook 14 we showed how you can take splices or connectors within your calibration into account. To calibrate the cable we then used reference sections on both sides of the splice. If these are not available, or in other cases where you lack reference sections, ma...
import os
from dtscalibration import read_silixa_files
import matplotlib.pyplot as plt
%matplotlib inline

filepath = os.path.join('..', '..', 'tests', 'data', 'double_ended2')
ds_ = read_silixa_files(
    directory=filepath,
    timezone_netcdf='UTC',
    file_ext='*.xml')
ds = ds_.sel(x=slice(0, 110)) # only calib...
6 files were found, each representing a single timestep 6 recorded vars were found: LAF, ST, AST, REV-ST, REV-AST, TMP Recorded at 1693 points along the cable The measurement is double ended Reading the data from disk
BSD-3-Clause
examples/notebooks/15Matching_sections.ipynb
fprice111/python-dts-calibration
Again, we introduce a step loss in the signal strength at x = 50 m. For the forward channel, this means all data beyond 50 meters is reduced with a 'random' factor. For the backward channel, this means all data up to 50 meters is reduced with a 'random' factor.
ds['st'] = ds.st.where(ds.x < 50, ds.st*.8)
ds['ast'] = ds.ast.where(ds.x < 50, ds.ast*.82)
ds['rst'] = ds.rst.where(ds.x > 50, ds.rst*.85)
ds['rast'] = ds.rast.where(ds.x > 50, ds.rast*.81)
_____no_output_____
BSD-3-Clause
examples/notebooks/15Matching_sections.ipynb
fprice111/python-dts-calibration
We will first run a calibration without adding the transient attenuation location or matching sections. A big jump in the calibrated temperature is visible at x = 50. As all calibration sections are before 50 meters, the first 50 m will be calibrated correctly.
ds_a = ds.copy(deep=True)

st_var, resid = ds_a.variance_stokes(st_label='st')
ast_var, _ = ds_a.variance_stokes(st_label='ast')
rst_var, _ = ds_a.variance_stokes(st_label='rst')
rast_var, _ = ds_a.variance_stokes(st_label='rast')

ds_a.calibration_double_ended(
    st_var=st_var,
    ast_var=ast_var,
    rst_var=rst_v...
_____no_output_____
BSD-3-Clause
examples/notebooks/15Matching_sections.ipynb
fprice111/python-dts-calibration
Now we run a calibration, adding the keyword argument **trans_att** with a list of floats containing the locations of the splices. In this case we only add a single one, at x = 50 m. We also define the matching sections of cable. The matching sections have to be provided as a list of tuples, one tuple per ma...
matching_sections = [
    (slice(7.5, 17.6), slice(69, 79.1), False)
]

st_var, resid = ds.variance_stokes(st_label='st')
ast_var, _ = ds.variance_stokes(st_label='ast')
rst_var, _ = ds.variance_stokes(st_label='rst')
rast_var, _ = ds.variance_stokes(st_label='rast')

ds.calibration_double_ended(
    st_var=st_var, ...
/home/bart/git/python-dts-calibration/.tox/docs/lib/python3.7/site-packages/scipy/sparse/_index.py:116: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient. self._set_arrayXarray_sparse(i, j, x)
BSD-3-Clause
examples/notebooks/15Matching_sections.ipynb
fprice111/python-dts-calibration
Deep Learning Explained. Module 4 - Lab - Introduction to Regularization for Deep Neural Nets. This lesson will introduce you to the principles of regularization required to successfully train deep neural networks. In this lesson you will:
1. Understand the need for regularization of complex machine learning models, ...
%matplotlib inline import numpy as np import numpy.random as nr import matplotlib.pyplot as plt from numpy.random import normal, seed import sklearn.linear_model as slm from sklearn.preprocessing import scale import sklearn.model_selection as ms from math import sqrt import keras import keras.models as models import ke...
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
Notice that these data points fall approximately on a straight line, but with significant deviations. Next, you will compute a simple single regression model. This model has an intercept term and a single slope parameter. The code in the cell below splits the data into randomly selected training and testing subsets. Ex...
indx = range(len(x))
seed(9988)
indx = ms.train_test_split(indx, test_size = 20)
x_train = np.ravel(x[indx[0]])
y_train = np.ravel(y[indx[0]])
x_test = np.ravel(x[indx[1]])
y_test = np.ravel(y[indx[1]])
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
Next, we will use the linear model in `sklearn.linear_model` package to create a single regression model for these data. The code in the cell below does just this, prints the single model coefficient, and plots the result. Execute this code.
def plot_reg(x, y_score, y):
    ax = plt.figure(figsize=(6, 6)).gca() # define axis
    ## Get the data in plot order
    xy = sorted(zip(x, y_score))
    x = [x for x, _ in xy]
    y_score = [y for _, y in xy]
    ## Plot the result
    plt.plot(x, y_score, c = 'red')
    plt.scatter(x, y)
    plt.title('Predict...
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
Examine these results. Notice that the single coefficient (slope) seems reasonable, given the standardization of the training data. Visually, the fit to the training data also looks reasonable. We should also test the fit to some test data. The code in the cell below does just this and returns the RMS error. Execute this cod...
from math import sqrt

def test_mod(x, y, mod):
    x_scale = scale(x)
    y_score = mod.predict(x_scale)
    plot_reg(x_scale, y_score, y)
    return np.std(y_score - y)

test_mod(x_test.reshape(-1, 1), y_test, mod)
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
Again, these results look reasonable. The RMSE is relatively small given the significant dispersion in these data. Now, try a model with significantly higher capacity. In this case we compute new features for a 9th order polynomial model. Using this new set of features a regression model is trained and a summary displa...
seed(2233)
x_power = np.power(x_train.reshape(-1, 1), range(1,10))
x_scale = scale(x_power)
mod_power = slm.LinearRegression()
mod_power.fit(x_scale, y_train)
y_hat_power = mod_power.predict(x_scale)
plot_reg(x_scale[:,0], y_hat_power, y_train)
print(mod_power.coef_)
print(np.std(y_hat_power - y_train))
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
Notice the following, indicating the model is quite over-fit:
- There is a wide range of coefficient values across 7 orders of magnitude. This is in contrast to the coefficient of the single regression model, which had a reasonable single-digit value.
- The graph of the fitted model shows highly complex behavio...
x_test_scale = scale(x_test.reshape(-1, 1)) # Prescale to prevent numerical overflow.
x_test_power = np.power(x_test_scale, range(1,10))
x_scale_test = scale(x_test_power)
y_hat_power = mod_power.predict(x_scale_test)
plot_reg(x_scale_test[:,0], y_hat_power, y_test)
print(np.std(y_hat_power - y_test))
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
This is clearly a terrible fit! The RMSE is enormous and the curve of predicted values bears little resemblance to the test values. Indeed, it is a common problem with over-fit models that the errors grow very rapidly toward the edges of the training data domain. We can definitely state that this model **does not ...
mod_L2 = slm.Ridge(alpha = 100.0)
mod_L2.fit(x_scale, y_train)
y_hat_L2 = mod_L2.predict(x_scale)
print(np.std(y_hat_L2 - y_train))
print(mod_L2.coef_)
plot_reg(x_train, y_hat_L2, y_train)
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
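The shrinkage that `slm.Ridge` applies can be seen directly in the closed form it solves, w = (XᵀX + αI)⁻¹Xᵀy: the penalty α inflates the diagonal of the normal equations, pulling every coefficient toward zero. A small numpy sketch on synthetic data (not the notebook's data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 9))
y = X[:, 0] + 0.1 * rng.normal(size=50)  # only the first feature matters

def ridge_closed_form(X, y, alpha):
    # w = (X^T X + alpha * I)^(-1) X^T y; alpha = 0 recovers ordinary least squares
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

w_ols = ridge_closed_form(X, y, 0.0)
w_l2 = ridge_closed_form(X, y, 100.0)
# the l2 solution always has a smaller norm than the unpenalized one
```

This is exactly the "small coefficients" effect described above: the larger α is, the harder the diagonal term dominates and the more the weights shrink.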
This model is quite different from the un-regularized one we trained previously.
- The coefficients all have small values; some are significantly less than 1. These small coefficients are a direct result of the l2 penalty.
- The fitted curve looks rather reasonable given the noisy data.
Now test the m...
y_hat_L2 = mod_L2.predict(x_scale_test)
plot_reg(x_scale_test[:,0], y_hat_L2, y_test)
print(np.std(y_hat_L2 - y_test))
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
This result looks a lot more reasonable. The RMSE is nearly the same as for the single feature regression example. Also, the predicted curve looks reasonable.In summary, we have seen that l2 regularization significantly improves the result for the 9th order polynomial regression. The coefficients are kept within a reas...
nr.seed(345)
set_random_seed(4455)
nn = models.Sequential()
nn.add(layers.Dense(128, activation = 'relu', input_shape = (9, )))
nn.add(layers.Dense(1))
nn.compile(optimizer = 'rmsprop', loss = 'mse', metrics = ['mae'])
history = nn.fit(x_scale, y_train, epochs = 30, batch_size = 1, ...
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
With the model fit, let's have a look at the loss function vs. training epoch. Execute the code in the cell below and examine the result.
def plot_loss(history):
    train_loss = history.history['loss']
    test_loss = history.history['val_loss']
    x = list(range(1, len(test_loss) + 1))
    plt.plot(x, test_loss, color = 'red', label = 'Test loss')
    plt.plot(x, train_loss, label = 'Train loss')
    plt.legend()
    plt.xlabel('Epoch')
    plt.ylabel...
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
It looks like this model becomes overfit after 3 or 4 training epochs. Execute the code in the cell below to compute and plot predictions for the unconstrained model.
history = nn.fit(x_scale, y_train, epochs = 4, batch_size = 1,
                 validation_data = (x_scale_test, y_test), verbose = 0)
predicted = nn.predict(x_scale_test)
plot_reg(x_scale_test[:,0], predicted, y_test)
print(np.std(predicted - y_test))
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
Both the high RMSE and the odd behavior of the predicted curve indicates that this model does not generalize well at all. Notice in particular, how the predicted curve moves away from the test data values on the right. Now, we will try to improve this result by applying l2 norm regularization to the neural network. The...
nr.seed(45678)
set_random_seed(45546)
nn = models.Sequential()
nn.add(layers.Dense(128, activation = 'relu', input_shape = (9, ),
                    kernel_regularizer=regularizers.l2(2.0)))
nn.add(layers.Dense(1))
nn.compile(optimizer = 'rmsprop', loss = 'mse', metrics = ['mae'])
history = nn.fit(x_scale, y_train...
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
This loss function is quite a bit different than for the unconstrained model. It is clear that regularization allows many more training epochs before over-fitting. But are the predictions any better? Execute the code in the cell below and find out.
history = nn.fit(x_scale, y_train, epochs = 30, batch_size = 1, validation_data = (x_scale_test, y_test), verbose = 0) predicted = nn.predict(x_scale_test) plot_reg(x_scale_test[:,0], predicted, y_test) print(np.std(predicted - y_test))
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
The l2 regularization has reduced the RMSE. Just as significantly, the pathological behavior of the predicted values on the right is reduced, but clearly not eliminated. The bias effect is also visible: notice that the left part of the fitted curve is now shifted upwards.

**Exercise 1:** You have now t...
nr.seed(9456)
set_random_seed(55566)

nr.seed(9566)
set_random_seed(44223)
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
Next, in the cells below you will create code to compute and plot the predicted values from your model for the test data, along with the error metric. Include the test data values on your plot.
history20 = nn20.fit(x_scale, y_train, epochs = 30, batch_size = 1,
                     validation_data = (x_scale_test, y_test), verbose = 0)
predicted20 = nn20.predict(x_scale_test)
plot_reg(x_scale_test[:,0], predicted20, y_test)
print(np.std(predicted20 - y_test))

history200 = nn2...
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
Finally, compare the results for the three models with regularization hyperparameter values of 2.0, 20.0, and 200.0. Notice how the RMSE improves as the hyperparameter increases. Notice also, that the test loss for the highest hyperparameter value decreases most uniformly, indicating less over-fitting of the model. 3...
mod_L1 = slm.Lasso(alpha = 2.0, max_iter=100000)
mod_L1.fit(x_scale, y_train)
y_hat_L1 = mod_L1.predict(x_scale)
print(np.std(y_hat_L1 - y_train))  # training error; y_test here was a bug (wrong length)
print(mod_L1.coef_)
plot_reg(x_train, y_hat_L1, y_train)
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
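The exact zeros that `slm.Lasso` produces come from the soft-thresholding operator at the heart of its coordinate-descent solver: unlike the l2 penalty, which only scales coefficients down, the l1 penalty subtracts a fixed amount from their magnitude and clips at zero. A minimal numpy sketch:

```python
import numpy as np

def soft_threshold(z, t):
    # S(z, t) = sign(z) * max(|z| - t, 0): any coefficient whose magnitude is
    # below the threshold t is set exactly to zero
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

z = np.array([3.0, -0.5, 0.2, -4.0])
out = soft_threshold(z, 1.0)  # -> [2., -0., 0., -3.]
```

The two small entries are eliminated entirely, which is why the lasso coefficient vector printed above contains literal zeros.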
Notice the following about the results of this l1 regularized regression:
- Many of the coefficients are 0, as expected.
- The fitted curve looks reasonable.
Now, execute the code in the cell below and examine the prediction results.
y_hat_L1 = mod_L1.predict(x_scale_test)
plot_reg(x_scale_test[:,0], y_hat_L1, y_test)
print(np.std(y_hat_L1 - y_test))
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
The RMSE has been reduced considerably, and is less than for l2 regularization regression. The plot of predicted values looks similar to the single regression model, but with some bias. 3.2 Neural network with l1 regularization Now, we will try l1 regularization with a neural network. The code in the cell below defin...
nn = models.Sequential()
nn.add(layers.Dense(128, activation = 'relu', input_shape = (9, ),
                    kernel_regularizer=regularizers.l1(10.0)))
nn.add(layers.Dense(1))
nn.compile(optimizer = 'rmsprop', loss = 'mse', metrics = ['mae'])
history = nn.fit(x_scale, y_train, epochs = 100, ba...
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
As a result of the l1 regularization the training loss does not exhibit signs of over-fitting for quite a few epochs. Next, execute the code in the cell below to compute and display predicted values from the trained network.
history = nn.fit(x_scale, y_train, epochs = 40, batch_size = 1,
                 validation_data = (x_scale_test, y_test), verbose = 0)
predicted = nn.predict(x_scale_test)
plot_reg(x_scale_test[:,0], predicted, y_test)
print(np.std(predicted - y_test))
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
These results are a definite improvement. The RMSE is similar to that produced by the l2 regularization neural network. Further, the fitting curve shows similar behavior and bias. This bias is the result of the regularization. 4.0 Early stoppingEarly stopping is conceptually simple. Early stopping terminates the trai...
## First define and compile a model.
nn = models.Sequential()
nn.add(layers.Dense(128, activation = 'relu', input_shape = (9, ),
                    kernel_regularizer=regularizers.l2(1.0)))
nn.add(layers.Dense(1))
nn.compile(optimizer = 'RMSprop', loss = 'mse', metrics = ['mae'])

## Define the callback list
fil...
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
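The truncated callback list above combines Keras's `EarlyStopping` with a `ModelCheckpoint`. The patience logic itself is simple enough to sketch framework-free; this toy version (hypothetical names, synthetic loss sequence) stops after `patience` epochs without a new best validation loss and reports which epoch the checkpoint would have kept:

```python
def early_stopping_sketch(val_losses, patience=3):
    # track the best validation loss seen so far; stop once `patience`
    # consecutive epochs fail to improve on it
    best, best_epoch, waited = float("inf"), -1, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, waited = loss, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_epoch, best

losses = [1.0, 0.8, 0.7, 0.75, 0.72, 0.74, 0.9]
early_stopping_sketch(losses)  # -> (2, 0.7)
```

Training halts during epoch 5 (three non-improving epochs after the minimum at epoch 2), and the model saved at epoch 2 is the one restored for evaluation.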
You can see that the behavior of the loss with training epoch is the same as with l2 regularization alone. Notice that training has been automatically terminated at the point where the loss function is at its optimum. Let's also have a look at the accuracy vs. epoch. Execute the code in the cell below and examine the result.
def plot_accuracy(history):
    train_acc = history.history['mean_absolute_error']
    test_acc = history.history['val_mean_absolute_error']
    x = list(range(1, len(test_acc) + 1))
    plt.plot(x, test_acc, color = 'red', label = 'Test error rate')
    plt.plot(x, train_acc, label = 'Train error rate')
    plt.legend...
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
The curve of test accuracy is consistent with the test loss.The code in the cell below retrieves the best model (by our stopping criteria) from storage, computes predictions and displays the result. Execute this code and examine the results.
best_model = keras.models.load_model(filepath)
predictions = best_model.predict(x_scale_test)
plot_reg(x_scale_test[:,0], predictions, y_test)
print(np.std(predictions - y_test))
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
As expected, these results are similar to, but a bit worse than, those obtained while manually stopping the training of the l2 regularized neural network. 5.0 Dropout regularization. All of the regularization methods we have discussed so far originated long before the current deep neural network era. We will now look at...
## First define and compile a model with a dropout layer.
nn = models.Sequential()
nn.add(layers.Dense(128, activation = 'relu', input_shape = (9, )))
nn.add(Dropout(rate = 0.5)) # Use 50% dropout on this model
nn.add(layers.Dense(1))
nn.compile(optimizer = 'rmsprop', loss = 'mse', metrics = ['mae'])

## Now fit the m...
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
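Mechanically, the `Dropout(rate=0.5)` layer above applies "inverted dropout": at training time each unit is zeroed with probability `rate` and the survivors are scaled by 1/(1 - rate) so the expected activation is unchanged; at test time the layer is the identity. A numpy sketch of that forward pass (not Keras's source):

```python
import numpy as np

def dropout_forward(a, rate, rng, training=True):
    # inverted dropout: zero units with probability `rate`, rescale the rest
    # by 1/(1-rate) so E[output] == input; identity at test time
    if not training:
        return a
    mask = (rng.random(a.shape) >= rate) / (1.0 - rate)
    return a * mask

rng = np.random.default_rng(0)
a = np.ones((4, 128))
out = dropout_forward(a, 0.5, rng)
# roughly half the units are zero, the survivors are scaled to 2.0
```

The per-batch random mask is also why the training-loss curve for a dropout model looks jagged: every step effectively trains a different thinned sub-network.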
The familiar loss plot looks a bit different here. Notice the kinks in the training loss curve. This is likely a result of the dropout sampling. Execute the code in the cell below, and examine the accuracy vs. epoch curves.
plot_accuracy(history)
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
The training accuracy curve has a similarly jagged appearance to the loss curve. Execute the code in the cell below and examine the prediction results for this model.
best_model = keras.models.load_model(filepath)
predictions = best_model.predict(x_scale_test)
plot_reg(x_scale_test[:,0], predictions, y_test)
print(np.std(predictions - y_test))
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
These results appear similar to those obtained with other regularization methods for neural networks on this problem, particularly, early stopping. While the dropout method is an effective regularizer it is no 'silver bullet'. 6.0 Batch NormalizationIt is often the case that the distribution of output values of some ...
## Use patience of 3
callbacks_list = [
    keras.callbacks.EarlyStopping(
        monitor = 'val_loss', # Use loss to monitor the model
        patience = 3 # Stop after three steps with lower accuracy
    ),
    keras.callbacks.ModelCheckpoint(
        filepath = filepath, # file where the checkpoint is saved
        ...
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
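The batch normalization described in the section above standardizes each feature over the mini-batch before a learned scale and shift are applied. A numpy sketch of the training-time forward pass (simplified: the running statistics used at inference are omitted):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # normalize each feature column over the batch to ~zero mean / unit
    # variance, then apply the learned scale (gamma) and shift (beta)
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 8))  # badly centered activations
out = batchnorm_forward(x, gamma=np.ones(8), beta=np.zeros(8))
# per-feature batch statistics are now ~0 mean and ~1 std
```

This re-centering is what keeps downstream layers from having to chase a drifting input distribution during training.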
The loss decreases rapidly and then remains in a narrow range thereafter. It appears that convergence is quite rapid.How does the accuracy evolve with the training episodes? Execute the code in the cell below to display the result.
plot_accuracy(history)
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
This accuracy curve is rather unusual. It seems to reflect the simple regularization being used. Finally, execute the code in the cell below to evaluate the predictions made with this model.
best_model = keras.models.load_model(filepath)
predictions = best_model.predict(x_scale_test)
plot_reg(x_scale_test[:,0], predictions, y_test)
print(np.std(predictions - y_test))
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
The fit to the test data looks fairly good. 7.0 Using multiple regularization methods. **Exercise 2:** In many cases more than one regularization method is applied. We have already applied early stopping with other regularization methods. In this exercise you will create a neural network using four regularization m...
nr.seed(242244)
set_random_seed(4346)
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
In the cell below create and execute the code to plot the loss history for both training and test.
## Visualize the outcome
plot_loss(history)
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
In the cell below create and execute the code to plot the accuracy history for both training and test.
plot_accuracy(history)
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
Next, in the cells below you will create code to compute and plot the predicted values from your model for the test data, along with the error metric. Include the test data values on your plot.
best_model = keras.models.load_model(filepath)
predictions = best_model.predict(x_scale_test)
plot_reg(x_scale_test[:,0], predictions, y_test)
print(np.std(predictions - y_test))
_____no_output_____
Unlicense
Module4/IntroToRegularization.ipynb
AlephEleven/Deep-Learning-Explained
Multi-Layer Perceptron, MNIST
---
In this notebook, we will train an MLP to classify images from the [MNIST database](http://yann.lecun.com/exdb/mnist/) of hand-written digits. The process will be broken down into the following steps:
1. Load and visualize the data
2. Define a neural network
3. Train the model
4. Evalu...
# import libraries import torch import numpy as np
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
mwizasimbeye11/udacity-pytorch-scholar-challenge
---
Load and Visualize the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)
Downloading may take a few moments, and you should see your progress as the data is loading. You may also choose to change the `batch_size` if you want to load more data at a time. This cell will create DataLoaders for each of our...
from torchvision import datasets
import torchvision.transforms as transforms

# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# convert data to torch.FloatTensor
transform = transforms.ToTensor()

# choose the training and test datasets
train_data =...
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz Processing... Done!
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
mwizasimbeye11/udacity-pytorch-scholar-challenge
Visualize a Batch of Training Data
The first step in a classification task is to take a look at the data, make sure it is loaded correctly, then make any initial observations about patterns in that data.
import matplotlib.pyplot as plt
%matplotlib inline

# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy()

# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = f...
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
mwizasimbeye11/udacity-pytorch-scholar-challenge
View an Image in More Detail
img = np.squeeze(images[1])

fig = plt.figure(figsize = (12,12))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
    for y in range(height):
        val = round(img[x][y],2) if img[x][y] !=0 else 0
        ax.annotate(str(val), xy=(y,x), ...
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
mwizasimbeye11/udacity-pytorch-scholar-challenge
---
Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)
The architecture takes as input a 784-dim Tensor of pixel values for each image, and produces a Tensor of length 10 (our number of classes) that indicates the class scores for an input image. This particular example u...
import torch.nn as nn import torch.nn.functional as F ## TODO: Define the NN architecture class Net(nn.Module): def __init__(self): super(Net, self).__init__() # linear layer (784 -> 1 hidden node) self.fc1 = nn.Linear(28 * 28, 512) self.fc2 = nn.Linear(512, 256) self.fc3 = ...
Net( (fc1): Linear(in_features=784, out_features=512, bias=True) (fc2): Linear(in_features=512, out_features=256, bias=True) (fc3): Linear(in_features=256, out_features=10, bias=True) (dropout): Dropout(p=0.2) )
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
mwizasimbeye11/udacity-pytorch-scholar-challenge
Specify [Loss Function](http://pytorch.org/docs/stable/nn.htmlloss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)It's recommended that you use cross-entropy loss for classification. If you look at the documentation (linked above), you can see that PyTorch's cross entropy function applies a soft...
## TODO: Specify loss and optimization functions # specify loss function criterion = nn.CrossEntropyLoss() # specify optimizer optimizer = torch.optim.SGD(model.parameters(), lr=0.02)
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
mwizasimbeye11/udacity-pytorch-scholar-challenge
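As noted above, cross-entropy loss combines a (log-)softmax over the raw class scores with the negative log-likelihood of the true class. A minimal pure-Python sketch of that computation, using made-up scores rather than actual model outputs:

```python
import math

def cross_entropy(scores, target):
    # log-softmax: subtract the max score for numerical stability,
    # then subtract the log of the normalizing sum
    m = max(scores)
    log_sum = m + math.log(sum(math.exp(s - m) for s in scores))
    log_probs = [s - log_sum for s in scores]
    # negative log-likelihood of the true class
    return -log_probs[target]

# three hypothetical class scores; class 2 has the highest score,
# so the loss is small
loss = cross_entropy([0.5, 1.2, 3.0], target=2)
print(loss)  # about 0.221
```

This is why the network's final layer returns raw scores rather than probabilities: `nn.CrossEntropyLoss` performs the equivalent of `nn.LogSoftmax` followed by `nn.NLLLoss` internally.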
--- Train the NetworkThe steps for training/learning from a batch of data are described in the comments below:1. Clear the gradients of all optimized variables2. Forward pass: compute predicted outputs by passing inputs to the model3. Calculate the loss4. Backward pass: compute gradient of the loss with respect to mode...
# number of epochs to train the model n_epochs = 30 # suggest training between 20-50 epochs model.train() # prep model for training for epoch in range(n_epochs): # monitor training loss train_loss = 0.0 ################### # train the model # ################### for data, target in train...
Epoch: 1 Training Loss: 0.556918 Epoch: 2 Training Loss: 0.222661 Epoch: 3 Training Loss: 0.156637 Epoch: 4 Training Loss: 0.119404 Epoch: 5 Training Loss: 0.095555 Epoch: 6 Training Loss: 0.078523 Epoch: 7 Training Loss: 0.065621 Epoch: 8 Training Loss: 0.055467 Epoch: 9 Training Loss: 0.047212 Epoch: 10 Tra...
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
mwizasimbeye11/udacity-pytorch-scholar-challenge
--- Test the Trained NetworkFinally, we test our best model on previously unseen **test data** and evaluate its performance. Testing on unseen data is a good way to check that our model generalizes well. It may also be useful to be granular in this analysis and take a look at how this model performs on each class as w...
# initialize lists to monitor test loss and accuracy test_loss = 0.0 class_correct = list(0. for i in range(10)) class_total = list(0. for i in range(10)) model.eval() # prep model for *evaluation* for data, target in test_loader: # forward pass: compute predicted outputs by passing inputs to the model output...
Test Loss: 0.070234 Test Accuracy of 0: 98% (970/980) Test Accuracy of 1: 99% (1127/1135) Test Accuracy of 2: 97% (1011/1032) Test Accuracy of 3: 97% (986/1010) Test Accuracy of 4: 98% (968/982) Test Accuracy of 5: 98% (875/892) Test Accuracy of 6: 98% (941/958) Test Accuracy of 7: 97% ...
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
mwizasimbeye11/udacity-pytorch-scholar-challenge
Visualize Sample Test ResultsThis cell displays test images and their labels in this format: `predicted (ground-truth)`. The text will be green for accurately classified examples and red for incorrect predictions.
# obtain one batch of test images dataiter = iter(test_loader) images, labels = dataiter.next() # get sample outputs output = model(images) # convert output probabilities to predicted class _, preds = torch.max(output, 1) # prep images for display images = images.numpy() # plot the images in the batch, along with pre...
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
mwizasimbeye11/udacity-pytorch-scholar-challenge
Workshop 4 File Input and Output (I/O)**Submit this notebook to bCourses (ipynb and pdf) to receive a grade for this Workshop.**Please complete workshop activities in code cells in this iPython notebook. The activities titled **Practice** are purely for you to explore Python. Some of them may have some code written, a...
%matplotlib inline import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
Practice: Basic Writing and Reading ASCII Files Think of ASCII files as text files. You can open them using a text editor (like vim or emacs in Unix, or Notepad in Windows) and read the information they contain directly. There are a few ways to produce these files, and to read them once they've been produced. In Pytho...
f = open( 'welcome.txt', 'w' )
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
**A note of caution**: as soon as you call `open()` in write (`'w'`) mode, Python creates a new file with the name you pass to it, and it will overwrite an existing file of the same name. Now we can write to the file using `f.write( thing_to_write )`. We can write a...
topics = ['Data types', 'Loops', 'Functions', 'Arrays', 'Plotting', 'Statistics'] f.write( 'Welcome to Physics 77, Fall 2021\n' ) # the newline command \n tells Python to start a new line f.write( 'Topics we will learn about include:\n' ) for top in topics: f.write( top + '\n') f.close() ...
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
**Practice 1:** Use the syntax you have just learned to create an ASCII file titled "`sine.txt`" with two columns containing 20 x and 20 y values. The x values should range from $0$ to $2\pi$ - you can use `np.linspace()` to generate these values (as many as you want). The y values should be $y = sin(x)$ (you can use `...
# Code for Practice 1
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
Now we will show how to *read* the values from `welcome.txt` back out:
f = open( 'welcome.txt', 'r' ) for line in f: print(line.strip()) f.close()
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
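To see exactly what `.strip()` does, here is a tiny standalone example (the string is hypothetical, mimicking a line read from a file):

```python
line = '  Data types\n'          # a line as it might come back from a file
print(repr(line))                 # repr() makes the spaces and \n visible
print(repr(line.strip()))         # leading/trailing whitespace removed
```

Without `.strip()`, the trailing `\n` from the file plus the newline that `print` adds produces a blank line after each printed topic.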
**Practice 2:** In the cell immediately above, you see that we print `line.strip()` instead of just printing `line`. Remove the `.strip()` part and see what happens. Suppose we wanted to skip the first two lines of `welcome.txt` and print only the list of topics `('Data types', 'Loops', 'Functions', 'Arrays', 'Plottin...
f = open( 'welcome.txt', 'r' ) f.readline() f.readline() # skip the first two lines topicList = [] for line in f: topicList.append(line.strip()) f.close() print(topicList)
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
Python reads in whitespace characters (such as the newline `\n`) from files along with the text; `.strip()` simply removes that leading and trailing whitespace. What happens if you remove it from the code above? **Practice 3:** Use the syntax you have just learned to read back each line of x and y values from the `sine.txt` file that you just wrote
# Code for Practice 3
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
Practice Reading in Numerical Data as Floats Numerical data can be somewhat trickier to read in than strings. In the practices above, you read in `sine.txt` but each line was a `string` not a pair of `float` values. Let's read in a file I produced in another program, that contains results from a BaBar experiment, wher...
# Example using BaBar_2016.dat f = open('BaBar_2016.dat', 'r') # read each line, split the data wherever there's a blank space, # and convert the values to floats # lists where we will store the values we read in mass = [] charge = [] for line in f: tokens = line.split() mass.append(float(tokens[0])) char...
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
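Since the cell above is abbreviated, here is a complete, self-contained version of the same split-and-convert pattern. It writes a small two-column file on the fly (a stand-in for `BaBar_2016.dat`, with made-up numbers) and reads it back:

```python
# write a small two-column stand-in file
f = open('demo_twocol.txt', 'w')
f.write('1.0 0.5\n')
f.write('2.0 0.25\n')
f.write('3.0 0.125\n')
f.close()

mass = []
charge = []
f = open('demo_twocol.txt', 'r')
for line in f:
    tokens = line.split()            # split on whitespace
    mass.append(float(tokens[0]))    # convert each string token to a float
    charge.append(float(tokens[1]))
f.close()

print(mass)    # [1.0, 2.0, 3.0]
print(charge)  # [0.5, 0.25, 0.125]
```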
We got it; let's plot it!
import matplotlib.pyplot as plt %matplotlib inline plt.plot(mass, charge, 'r-' ) plt.xlim(0, 8) plt.ylim(0, 2e-3) plt.xlabel('mass (GeV)') plt.ylabel('charge, 90% C.L. limit') plt.show()
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
**Practice 4:** Use the syntax you have just learned to read back each line of x and y values from the `sine.txt` file that you wrote in Practice 1, and split each line into `float` values and store them. Then, plot your stored x and y values to make sure you have done everything correctly.
# Code for Practice 4
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
Of course, you already know of another way to read in values like this: `numpy.loadtxt()` and `numpy.genfromtxt()`. If you have already been using those, feel free to move on. Otherwise, take a moment to make yourself aware of these functions as they will massively simplify your life. Fortunately, Python's `numpy` libr...
# Same plot as before but now using numpy functions to load the data import numpy as np mass, charge = np.loadtxt('BaBar_2016.dat', unpack = True) plt.plot(mass, charge,'r-') plt.xlim(0, 8) plt.ylim(0, 2e-3) plt.xlabel('mass (GeV)') plt.ylabel('charge, 90% C.L. limit') plt.show()
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
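If you want to try `np.loadtxt` without a file on disk, it also accepts any file-like object, for example an in-memory `io.StringIO` holding made-up two-column data:

```python
import io
import numpy as np

data = io.StringIO('0.0 1.0\n0.5 2.0\n1.0 3.0\n')
x, y = np.loadtxt(data, unpack=True)   # unpack=True gives one array per column
print(x.tolist())  # [0.0, 0.5, 1.0]
print(y.tolist())  # [1.0, 2.0, 3.0]
```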
Practice: Writing and Reading CSV files CSV stands for Comma Separated Values. Python's `csv` module allows easy reading and writing of sequences. CSV is especially useful for loading data from spreadsheets and databases. Let's make a list and write a file! First, we need to load a new module that you have not used y...
import csv
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
Next, just as before we need to create an abstract file object that opens the file we want to write to. Then, we create another programming abstraction called a *csv writer*, a special object that is built specifically to write sequences to our csv file. In this example, we have called the abstract file object `f_csv`...
f_csv = open( 'nationData.csv', 'w' ) SA_writer = csv.writer( f_csv, # write to this file object delimiter = '|', # place vertical bar between items we write quotechar = '', # Don't place quotes around strings ...
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
Make sure that you understand at this point that all we have done is create a writer. It has not written anything to the file yet. So let's try to write the following lists of data:
countries = ['Argentina', 'Bolivia', 'Brazil', 'Chile', 'Colombia', 'Ecuador', 'Guyana',\ 'Paraguay', 'Peru', 'Suriname', 'Uruguay', 'Venezuela'] capitals = ['Buenos Aires', 'Sucre', 'Brasília', 'Santiago', 'Bogotá', 'Quito', 'Georgetown',\ 'Asunción', 'Lima', 'Paramaribo', 'Montevideo', 'Cara...
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
Now let's figure out how to add a line to our CSV file. For a regular ASCII file, we added lines by calling the `write` function. For a CSV file, we use a method called `writerow` that belongs to our csv writer `SA_writer`:
SA_writer.writerow(['Data on South American Nations']) SA_writer.writerow(['Country', 'Capital', 'Populaton (millions)']) for i in range(len(countries)): SA_writer.writerow( [countries[i], capitals[i], population_mils[i]] ) f_csv.close()
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
Now let's see if we can open your file using a SpreadSheet program. If you don't have access to one, find someone who does! * Download nationData.csv* Open Microsoft Excel (or equivalent), and select "Import Data." * Locate nationData.csv in the list of files that pops up. * Select the "Delimited" Option in the nex...
cities = [] cityPops = [] metroPops = [] f_csv = open( 'cities.csv', 'r') readCity = csv.reader( f_csv, delimiter = ',' ) # The following line is how we skip a line in a csv. It is the equivalent of readline from before. next(readCity) # skip the header row for row in readCity: print(row) f_csv.close()
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
Look at the output of the code above. Once again, every `row` that is read in is a `list` of `strings` by default. So in order to use the numbers *as numbers* we need to convert them using the `float()` operation. Below, we use this to figure out which city has the largest city population:
f_csv = open( 'cities.csv', 'r') readCity = csv.reader( f_csv, delimiter = ',' ) largest_city_pop = 0.0 city_w_largest_pop = None # The following line is how we skip a line in a csv. It is the equivalent of readline from before. next(readCity) # skip the header row for row in readCity: city_country = ', '.join(...
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
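Here is a self-contained round trip of the same pattern: it writes a small made-up CSV (a stand-in for `cities.csv`, with hypothetical numbers) and then reads it back, using `float()` conversion to find the largest value:

```python
import csv

# write a tiny stand-in for cities.csv
f_csv = open('demo_cities.csv', 'w', newline='')
writer = csv.writer(f_csv, delimiter=',')
writer.writerow(['City', 'Population (millions)'])   # header row
writer.writerow(['Lima', '9.7'])
writer.writerow(['Bogota', '7.4'])
writer.writerow(['Santiago', '6.3'])
f_csv.close()

# read it back; every value comes in as a string, so convert with float()
f_csv = open('demo_cities.csv', 'r', newline='')
reader = csv.reader(f_csv, delimiter=',')
next(reader)                       # skip the header row
largest_pop = 0.0
largest_city = None
for row in reader:
    pop = float(row[1])
    if pop > largest_pop:
        largest_pop = pop
        largest_city = row[0]
f_csv.close()

print(largest_city, largest_pop)   # Lima 9.7
```

Note the `newline=''` argument in `open()`: the csv module recommends it so that line endings are handled consistently across platforms.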
**Practice 6:** Use the syntax learned above to read in the x and y values from your `sine.csv` file. Plot your data to be sure you did everything correctly.
# Code for Practice 6
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
Exercises **Exercise 1:** This exercise is meant to put many of the skills you have practiced thus far together. In this exercise, we are going to use I/O methods to see if we can find some correlations in some fake housing data. You will need the following files which should be in your directory: house_locs_rent....
# Code for Exercise 1 goes here
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
OPTIONAL: Practice With HDF5 Files So far you have encountered a standard ASCII text file and a CSV file. The next file format is called an HDF5 file. HDF5 files are ideally suited for managing large amounts of complex data. Python can read them using the module `h5py.`
import h5py
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
Let's load our first hdf5 file into an abstract file object. We call ours `fh5` in the example below:
fh5 = h5py.File( 'solar.h5py', 'r' )
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
Here is how data is stored in an HDF5 file: HDF5 files are made up of data sets. Each data set has a name. The correct Python terminology for this is "key". Let's take a look at what data sets are in `solar.h5py`. You can access the names (keys) of these data sets using the `.keys()` function:
for k in fh5.keys(): # loop through the keys print(k)
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
To access one of the 6 data sets above, we need to use its name from above. Here we access the data set called `"names"`:
for nm in fh5["names"]: # make sure to include the quotation marks! print(nm)
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
So the dataset called `"names"` contains 8 elements (poor Pluto) which are strings. In this HDF5 file, the other data sets contain `float` values, and can be treated like numpy arrays:
print(fh5["solar_AU"][::2]) print(fh5["surfT_K"][fh5["names"]=='Earth'])
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
Let's make a plot of the solar system that shows each planet's: * distance from the sun (position on the x-axis)* orbital period (position on the y-axis)* mass (size of scatter plot marker)* surface temperature (color of marker)* density (transparency (or alpha, in matplotlib language))
distAU = fh5["solar_AU"][:] mass = fh5["mass_earthM"][:] torb = fh5["TOrbit_yr"][:] temp = fh5["surfT_K"][:] rho = fh5["density"][:] names = fh5["names"][:] def get_size( ms ): m = 400.0/(np.max(mass) - np.min(mass)) return 100.0 + (ms - np.min(mass))*m def get_alpha( p ): m = .9/(np.max(rho)-np.min(rho)) ...
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
Play around with the data and see what interesting relationships you can find! If you ever want to write your own HDF5 file, you can open an h5py file object by calling: fh5 = h5py.File('filename.h5py', 'w') Data sets are created with dset = fh5.create_dataset( "dset_name", (shape,)) The default data ty...
x = np.linspace(-1, 1.0, 100) y = np.sin(10*x)*np.exp(-x) - x xy = np.hstack((x,y)) # save the array np.save('y_of_x.npy', xy ) del x, y, xy # erase these variables from Python's memory
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
Now reload the data and check that you can use it just as before. Bonus challenge! Load the file `mysteryArray.npy`, and figure out the best way to plot it. **Hint:** look at the shape of the file, and the dimensions.
data = np.load('mysteryArray.npy')
_____no_output_____
BSD-3-Clause
Week05/WS04/Workshop04.ipynb
ds-connectors/Physics-88-Fa21
maxima misc
kill(all)$ f(x) := 1/(x^2+l^2)^(3/2); integrate(f(x), x); tex(%)$
_____no_output_____
MIT
tests/notebooks/ipynb_maxima/maxima_example.ipynb
sthagen/mwouts-jupytext
"Settling in France: The healthcare system"> The public healthcare system here is amazing- toc: true - badges: true- comments: true- categories:["french bureaucracy", Nancy, going to France]- image: images/chart-preview.png ---It is mandatory for all residents in France to register into the French healthcare system. -...
#collapse-hide """ Attestation sur l’honneur de non-imposition Je soussigné <M./Mme MY LAST NAME my name> demeurant <my address>, Atteste sur l’honneur ne pas être soumis à l’impôt au titre des revenus de l’année 2020/2021. Pour faire valoir ce que de droit A Nancy, le ________________ """;
_____no_output_____
Apache-2.0
_notebooks/2021-11-24-french_healtcare.ipynb
LuisAVasquez/quiescens-lct
Principal Component Analysis (PCA) Introduction The purpose of this tutorial is to provide the reader with an intuitive understanding for principal component analysis (PCA). PCA is a multivariate data analysis technique mainly used for dimensionality reduction of large data sets. Working with large data sets is usua...
import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn seaborn.set_style("white") import sklearn from sklearn.decomposition import PCA %matplotlib inline
_____no_output_____
MIT
2016/tutorial_final/79/Tutorial.ipynb
zeromtmu/practicaldatascience.github.io
In this tutorial, we will go step by step through how to implement principal component analysis (PCA), and then we will use the sklearn library to compare the results. This tutorial is composed of the following sections: Data Loading Data Processing Mean-Centering Data Scaling Data Mathematical Computation PCA...
dataset = pd.read_csv("tutorial_data1.csv") dataset.head()
_____no_output_____
MIT
2016/tutorial_final/79/Tutorial.ipynb
zeromtmu/practicaldatascience.github.io
Since the variables have different units, some data processing is required, as described in the following section. Data Processing Mean-Centering Data The first step before applying PCA to a data matrix $X$ is to mean-center the data by making the mean of each variable zero. This is accom...
dataset_center = (dataset - dataset.mean()) dataset_center.round(2).head()
_____no_output_____
MIT
2016/tutorial_final/79/Tutorial.ipynb
zeromtmu/practicaldatascience.github.io
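Mean-centering can be checked by hand on a single toy column (made-up numbers, independent of the dataset above):

```python
values = [2.0, 4.0, 6.0, 8.0]
mean = sum(values) / len(values)        # 5.0
centered = [v - mean for v in values]   # [-3.0, -1.0, 1.0, 3.0]
print(sum(centered) / len(centered))    # 0.0: the centered column has mean zero
```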
Now, we need to find the covariance matrix of our dataset to see how the variables are correlated to each other. This is described by the following equation: $$ Cov(x,y) = \dfrac{\sum_{i=1}^{N}(x_{i} - \bar{x})(y_{i} - \bar{y})}{N-1}$$ where $N$ is the total number of rows, $\bar{x}$ and $\bar{y}$ are the means for the...
data_cov = dataset_center.cov() data_cov.round(2).head()
_____no_output_____
MIT
2016/tutorial_final/79/Tutorial.ipynb
zeromtmu/practicaldatascience.github.io
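The covariance formula above can be verified on two short toy columns (hypothetical values, not taken from the dataset):

```python
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]   # y = 2x, so the two columns move together exactly
n = len(x)
mx = sum(x) / n            # mean of x = 2.5
my = sum(y) / n            # mean of y = 5.0
# sample covariance, dividing by (N - 1) as in the formula above
cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
print(cov)                 # 10/3, a positive covariance as expected
```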
The problem with the above matrix is that it depends on the units of the variables, so it is difficult to see the relationship between variables with different units. This requires you to scale the data and find the correlation matrix, which not only provides you with how the variables are related (positively/negative...
# Scale data; ddof=1 uses the sample standard deviation (N-1), which is the default for std in pandas dataset_scale = (dataset_center)/(dataset_center.std(ddof = 1)) # Correlation matrix data_corr = dataset_scale.corr() #Changing the names for the columns and indexes data_corr.columns = ['Exports of Crude Oil', 'Impor...
_____no_output_____
MIT
2016/tutorial_final/79/Tutorial.ipynb
zeromtmu/practicaldatascience.github.io
Mathematical Computation The next step is to find the eigenvalues and eigenvectors of our matrix. But before doing so, we will need to go over some linear algebra concepts that are essential for understanding PCA. As mentioned in the introduction, PCA is a projection method that maps the data into a different space ...
# Finding eigenvalues(w) and eigenvectors(v) w, v = np.linalg.eig(data_corr) print("Eigenvalues:") print(w.round(3)) print("Eigenvectors:") print(v.round(3))
Eigenvalues: [ 3.222 2.806 1.14 0.008 0.08 0.309 0.187 0.249] Eigenvectors: [[-0.104 0.492 -0.347 -0.042 0.192 -0.114 0.18 0.736] [-0.081 -0.5 -0.401 -0.018 -0.594 -0.412 0.162 0.184] [-0.498 0.044 0.234 0.013 -0.161 -0.159 -0.772 0.218] [-0.142 0.539 -0.133 0.03 -0.673 0.403 0.077 -0.222]...
MIT
2016/tutorial_final/79/Tutorial.ipynb
zeromtmu/practicaldatascience.github.io
The eigenvalues represent the amount of variation captured by each of the principal components. The purpose of finding these values is to obtain the number of components that provide a reasonable generalization of the entire data set, which would be less than the total number of variables in the data. In order to det...
plt.figure(figsize=(16,10)) # Scree Plot plt.subplot(221) plt.plot([1,2,3,4,5,6,7,8], w, 'ro-') plt.title('Scree Plot', fontsize=16, fontweight='bold') plt.xlabel('Principal Components', fontsize=14, fontweight='bold') plt.ylabel('Eigenvalues', fontsize=14, fontweight='bold') # Cumulative Explained Variation plt.subp...
_____no_output_____
MIT
2016/tutorial_final/79/Tutorial.ipynb
zeromtmu/practicaldatascience.github.io
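The percentages behind the cumulative plot follow directly from the eigenvalues: each component explains eigenvalue/sum(eigenvalues) of the total variance. Using the eigenvalues printed earlier:

```python
eigenvalues = [3.222, 2.806, 1.14, 0.008, 0.08, 0.309, 0.187, 0.249]
total = sum(eigenvalues)          # about 8, the number of (scaled) variables
ratios = [ev / total * 100 for ev in eigenvalues]
print(round(ratios[0], 1))                           # 40.3 (PC1)
print(round(ratios[1], 1))                           # 35.1 (PC2)
print(round(ratios[0] + ratios[1] + ratios[2], 1))   # 89.6 for the first three
```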
As you can see in the scree plot, the eigenvalues drop significantly between PC3 and PC4, and it is not very clear how many components represent the data set. As a result, the cumulative explained variation is plotted, and it clearly shows that the first 3 components account for approximately 90% of the var...
# Scores scores = (dataset_scale.values).dot(v[:,:2]) scores
_____no_output_____
MIT
2016/tutorial_final/79/Tutorial.ipynb
zeromtmu/practicaldatascience.github.io
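Each score is simply the dot product of one (scaled) data row with an eigenvector, i.e. the projection of that observation onto the principal direction. A toy 2-D example with a hypothetical unit-length direction:

```python
import math

row = [3.0, 4.0]                           # one scaled, mean-centered observation
pc = [1 / math.sqrt(2), 1 / math.sqrt(2)]  # a unit-length principal direction
score = sum(r * p for r, p in zip(row, pc))
print(score)                               # 7/sqrt(2), about 4.95
```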
Having the scores allows us to observe the direction of the principal components in the x-y axis as shown in the following graph:
plt.figure(figsize=(10,8)) # Principal Components pc1 = v[:, 0] pc2 = v[:, 1] pc1x = [] pc2x = [] pc1y = [] pc2y = [] # Plotting the scaled data and getting the values of the PCs for ii, jj in zip(scores, dataset_scale.values): pc1x.append(pc1[0]*ii[0]), pc1y.append(pc1[1]*ii[0]) pc2x.append(pc2[0]*ii[1...
_____no_output_____
MIT
2016/tutorial_final/79/Tutorial.ipynb
zeromtmu/practicaldatascience.github.io
As you can see above, the principal components are orthogonal (perpendicular) to each other, which is expected since the eigenvectors of a symmetric matrix (such as the correlation matrix) are orthogonal. Visually, PC1 and PC2 seem to capture a similar amount of variation and this correlates with the eigenvalues obtained whose values are 3.222 and 2.806 for PC1 and PC2, ...
plt.figure(figsize=(10,8)) for i in scores: plt.scatter(i[0], i[1], color = 'b') plt.xlim([-5, 3]) plt.ylim([-6, 4]) plt.title('Biplot', fontsize=16, fontweight='bold') plt.xlabel('PC1 (40.3%)', fontsize=14, fontweight='bold') plt.ylabel('PC2 (35.1%)', fontsize=14, fontweight='bold') # Labels for the loadings nam...
_____no_output_____
MIT
2016/tutorial_final/79/Tutorial.ipynb
zeromtmu/practicaldatascience.github.io
As you can see above, some of the variables are close to each other and this means that they are strongly related to one another. The actual correlation can be obtained by calculating the cosine of the angle between the variables. In addition, you can look back at the correlation matrix to see if those variables have a...
pca = PCA(n_components=2)
_____no_output_____
MIT
2016/tutorial_final/79/Tutorial.ipynb
zeromtmu/practicaldatascience.github.io
Second, fit the scaled data to PCA to find the eigenvalues and eigenvectors
pca.fit(dataset_scale.values) # Eigenvectors print("Eigenvectors:") print(pca.components_) print() # Eigenvalues print("Eigenvalues:") print(pca.explained_variance_) print() # Variance explained print("Amount of variance explained per component (%)") print(pca.explained_variance_ratio_*100)
Eigenvectors: [[-0.10390372 -0.08094858 -0.4979827 -0.14178406 -0.38709197 -0.25433889 -0.4946933 -0.50528402] [-0.49233521 0.49995417 -0.04365996 -0.53907923 -0.33467661 0.14923826 0.21430968 0.18689607]] Eigenvalues: [ 3.21945411 2.80348497] Amount of variance explained per component (%) [ 40.27451837 ...
MIT
2016/tutorial_final/79/Tutorial.ipynb
zeromtmu/practicaldatascience.github.io