text
stringlengths
2.5k
6.39M
kind
stringclasses
3 values
``` import neuroglancer import numpy as np ``` Create a new (initially empty) viewer. This starts a webserver in a background thread, which serves a copy of the Neuroglancer client, and which also can serve local volume data and handles sending and receiving Neuroglancer state updates. ``` viewer = neuroglancer.Viewer() ``` Print a link to the viewer (only valid while the notebook kernel is running). Note that while the Viewer is running, anyone with the link can obtain any authentication credentials that the neuroglancer Python module obtains. Therefore, be very careful about sharing the link, and keep in mind that sharing the notebook will likely also share viewer links. ``` viewer ``` Add some example layers using the precomputed data source (HHMI Janelia FlyEM FIB-25 dataset). ``` with viewer.txn() as s: s.layers['image'] = neuroglancer.ImageLayer(source='precomputed://gs://neuroglancer-public-data/flyem_fib-25/image') s.layers['segmentation'] = neuroglancer.SegmentationLayer(source='precomputed://gs://neuroglancer-public-data/flyem_fib-25/ground_truth', selected_alpha=0.3) ``` Display a numpy array as an additional layer. A reference to the numpy array is kept only as long as the layer remains in the viewer. Move the viewer position. ``` with viewer.txn() as s: s.voxel_coordinates = [3000, 3000, 3000] ``` Hide the segmentation layer. ``` with viewer.txn() as s: s.layers['segmentation'].visible = False import cloudvolume image_vol = cloudvolume.CloudVolume('https://storage.googleapis.com/neuroglancer-public-data/flyem_fib-25/image', mip=0, bounded=True, progress=False) a = np.zeros((200,200,200), np.uint8) def make_thresholded(threshold): a[...] = np.transpose(image_vol[3000:3200,3000:3200,3000:3200][...,0], (2,1,0)) > threshold make_thresholded(110) # This volume handle can be used to notify the viewer that the data has changed. volume = neuroglancer.LocalVolume(a, voxel_size=[8, 8, 8], voxel_offset=[3000, 3000, 3000]) with viewer.txn() as s: s.layers['overlay'] = neuroglancer.ImageLayer( source=volume, # Define a custom shader to display this mask array as red+alpha. shader=""" void main() { float v = toNormalized(getDataValue(0)) * 255.0; emitRGBA(vec4(v, 0.0, 0.0, v)); } """, ) ``` Modify the overlay volume, and call `invalidate()` to notify the Neuroglancer client. ``` make_thresholded(100) volume.invalidate() ``` Select a couple segments. ``` with viewer.txn() as s: s.layers['segmentation'].segments.update([1752, 88847]) s.layers['segmentation'].visible = True ``` Print the neuroglancer viewer state. The Neuroglancer Python library provides a set of Python objects that wrap the JSON-encoded viewer state. `viewer.state` returns a read-only snapshot of the state. To modify the state, use the `viewer.txn()` function, or `viewer.set_state`. ``` viewer.state ``` Print the set of selected segments.| ``` viewer.state.layers['segmentation'].segments ``` Update the state by calling `set_state` directly. ``` import copy new_state = copy.deepcopy(viewer.state) new_state.layers['segmentation'].segments.add(10625) viewer.set_state(new_state) ``` Bind the 't' key in neuroglancer to a Python action. ``` num_actions = 0 def my_action(s): global num_actions num_actions += 1 with viewer.config_state.txn() as st: st.status_messages['hello'] = ('Got action %d: mouse position = %r' % (num_actions, s.mouse_voxel_coordinates)) print('Got my-action') print(' Mouse position: %s' % (s.mouse_voxel_coordinates,)) print(' Layer selected values: %s' % (s.selected_values,)) viewer.actions.add('my-action', my_action) with viewer.config_state.txn() as s: s.input_event_bindings.viewer['keyt'] = 'my-action' s.status_messages['hello'] = 'Welcome to this example' ``` Change the view layout to 3-d. ``` with viewer.txn() as s: s.layout = '3d' s.perspective_zoom = 300 ``` Take a screenshot (useful for creating publication figures, or for generating videos). While capturing the screenshot, we hide the UI and specify the viewer size so that we get a result independent of the browser size. ``` with viewer.config_state.txn() as s: s.show_ui_controls = False s.show_panel_borders = False s.viewer_size = [1000, 1000] from ipywidgets import Image screenshot_image = Image(value=viewer.screenshot().screenshot.image) with viewer.config_state.txn() as s: s.show_ui_controls = True s.show_panel_borders = True s.viewer_size = None screenshot_image ``` Change the view layout to show the segmentation side by side with the image, rather than overlayed. This can also be done from the UI by dragging and dropping. The side by side views by default have synchronized position, orientation, and zoom level, but this can be changed. ``` with viewer.txn() as s: s.layout = neuroglancer.row_layout( [neuroglancer.LayerGroupViewer(layers=['image', 'overlay']), neuroglancer.LayerGroupViewer(layers=['segmentation'])]) ``` Remove the overlay layer. ``` with viewer.txn() as s: s.layout = neuroglancer.row_layout( [neuroglancer.LayerGroupViewer(layers=['image']), neuroglancer.LayerGroupViewer(layers=['segmentation'])]) ``` Create a publicly sharable URL to the viewer state (only works for external data sources, not layers served from Python). The Python objects for representing the viewer state (`neuroglancer.ViewerState` and friends) can also be used independently from the interactive Python-tied viewer to create Neuroglancer links. ``` print(neuroglancer.to_url(viewer.state)) ``` Stop the Neuroglancer web server, which invalidates any existing links to the Python-tied viewer. ``` neuroglancer.stop() ```
github_jupyter
# k-Nearest Neighbor (kNN) exercise *Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.* The kNN classifier consists of two stages: - During training, the classifier takes the training data and simply remembers it - During testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples - The value of k is cross-validated In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code. ``` # Run some setup code for this notebook. from __future__ import print_function import random import numpy as np from cs231n.data_utils import load_CIFAR10 import matplotlib.pyplot as plt # This is a bit of magic to make matplotlib figures appear inline in the notebook # rather than in a new window. %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # Some more magic so that the notebook will reload external python modules; # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 # Load the raw CIFAR-10 data. cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # As a sanity check, we print out the size of the training and test data. print('Training data shape: ', X_train.shape) print('Training labels shape: ', y_train.shape) print('Test data shape: ', X_test.shape) print('Test labels shape: ', y_test.shape) # Visualize some examples from the dataset. # We show a few examples of training images from each class. classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] num_classes = len(classes) samples_per_class = 7 for y, cls in enumerate(classes): idxs = np.flatnonzero(y_train == y) idxs = np.random.choice(idxs, samples_per_class, replace=False) for i, idx in enumerate(idxs): plt_idx = i * num_classes + y + 1 plt.subplot(samples_per_class, num_classes, plt_idx) plt.imshow(X_train[idx].astype('uint8')) plt.axis('off') if i == 0: plt.title(cls) plt.show() # Subsample the data for more efficient code execution in this exercise num_training = 5000 mask = list(range(num_training)) X_train = X_train[mask] y_train = y_train[mask] num_test = 500 mask = list(range(num_test)) X_test = X_test[mask] y_test = y_test[mask] # Reshape the image data into rows X_train = np.reshape(X_train, (X_train.shape[0], -1)) X_test = np.reshape(X_test, (X_test.shape[0], -1)) print(X_train.shape, X_test.shape) from cs231n.classifiers import KNearestNeighbor # Create a kNN classifier instance. # Remember that training a kNN classifier is a noop: # the Classifier simply remembers the data and does no further processing classifier = KNearestNeighbor() classifier.train(X_train, y_train) ``` We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps: 1. First we must compute the distances between all test examples and all train examples. 2. Given these distances, for each test example we find the k nearest examples and have them vote for the label Lets begin with computing the distance matrix between all training and test examples. For example, if there are **Ntr** training examples and **Nte** test examples, this stage should result in a **Nte x Ntr** matrix where each element (i,j) is the distance between the i-th test and j-th train example. First, open `cs231n/classifiers/k_nearest_neighbor.py` and implement the function `compute_distances_two_loops` that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time. ``` # Open cs231n/classifiers/k_nearest_neighbor.py and implement # compute_distances_two_loops. # Test your implementation: dists = classifier.compute_distances_two_loops(X_test) print(dists.shape) # We can visualize the distance matrix: each row is a single test example and # its distances to training examples plt.imshow(dists, interpolation='none') plt.show() ``` **Inline Question #1:** Notice the structured patterns in the distance matrix, where some rows or columns are visible brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.) - What in the data is the cause behind the distinctly bright rows? - What causes the columns? **Your Answer**: *fill this in.* ``` # Now implement the function predict_labels and run the code below: # We use k = 1 (which is Nearest Neighbor). y_test_pred = classifier.predict_labels(dists, k=1) # Compute and print the fraction of correctly predicted examples num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)) ``` You should expect to see approximately `27%` accuracy. Now lets try out a larger `k`, say `k = 5`: ``` y_test_pred = classifier.predict_labels(dists, k=5) num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)) ``` You should expect to see a slightly better performance than with `k = 1`. ``` # Now lets speed up distance matrix computation by using partial vectorization # with one loop. Implement the function compute_distances_one_loop and run the # code below: dists_one = classifier.compute_distances_one_loop(X_test) # To ensure that our vectorized implementation is correct, we make sure that it # agrees with the naive implementation. There are many ways to decide whether # two matrices are similar; one of the simplest is the Frobenius norm. In case # you haven't seen it before, the Frobenius norm of two matrices is the square # root of the squared sum of differences of all elements; in other words, reshape # the matrices into vectors and compute the Euclidean distance between them. difference = np.linalg.norm(dists - dists_one, ord='fro') print('Difference was: %f' % (difference, )) if difference < 0.001: print('Good! The distance matrices are the same') else: print('Uh-oh! The distance matrices are different') # Now implement the fully vectorized version inside compute_distances_no_loops # and run the code dists_two = classifier.compute_distances_no_loops(X_test) # check that the distance matrix agrees with the one we computed before: difference = np.linalg.norm(dists - dists_two, ord='fro') print('Difference was: %f' % (difference, )) if difference < 0.001: print('Good! The distance matrices are the same') else: print('Uh-oh! The distance matrices are different') # Let's compare how fast the implementations are def time_function(f, *args): """ Call a function f with args and return the time (in seconds) that it took to execute. """ import time tic = time.time() f(*args) toc = time.time() return toc - tic two_loop_time = time_function(classifier.compute_distances_two_loops, X_test) print('Two loop version took %f seconds' % two_loop_time) one_loop_time = time_function(classifier.compute_distances_one_loop, X_test) print('One loop version took %f seconds' % one_loop_time) no_loop_time = time_function(classifier.compute_distances_no_loops, X_test) print('No loop version took %f seconds' % no_loop_time) # you should see significantly faster performance with the fully vectorized implementation ``` ### Cross-validation We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation. ``` num_folds = 5 k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100] X_train_folds = [] y_train_folds = [] ################################################################################ # TODO: # # Split up the training data into folds. After splitting, X_train_folds and # # y_train_folds should each be lists of length num_folds, where # # y_train_folds[i] is the label vector for the points in X_train_folds[i]. # # Hint: Look up the numpy array_split function. # ################################################################################ pass ################################################################################ # END OF YOUR CODE # ################################################################################ # A dictionary holding the accuracies for different values of k that we find # when running cross-validation. After running cross-validation, # k_to_accuracies[k] should be a list of length num_folds giving the different # accuracy values that we found when using that value of k. k_to_accuracies = {} ################################################################################ # TODO: # # Perform k-fold cross validation to find the best value of k. For each # # possible value of k, run the k-nearest-neighbor algorithm num_folds times, # # where in each case you use all but one of the folds as training data and the # # last fold as a validation set. Store the accuracies for all fold and all # # values of k in the k_to_accuracies dictionary. # ################################################################################ pass ################################################################################ # END OF YOUR CODE # ################################################################################ # Print out the computed accuracies for k in sorted(k_to_accuracies): for accuracy in k_to_accuracies[k]: print('k = %d, accuracy = %f' % (k, accuracy)) # plot the raw observations for k in k_choices: accuracies = k_to_accuracies[k] plt.scatter([k] * len(accuracies), accuracies) # plot the trend line with error bars that correspond to standard deviation accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())]) accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())]) plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std) plt.title('Cross-validation on k') plt.xlabel('k') plt.ylabel('Cross-validation accuracy') plt.show() # Based on the cross-validation results above, choose the best value for k, # retrain the classifier using all the training data, and test it on the test # data. You should be able to get above 28% accuracy on the test data. best_k = 1 classifier = KNearestNeighbor() classifier.train(X_train, y_train) y_test_pred = classifier.predict(X_test, k=best_k) # Compute and display the accuracy num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)) ```
github_jupyter
# Time series prediction with neural networks. The problem we are going to look at in this post is the international airline passengers prediction problem. This is a problem where given a year and a month, the task is to predict the number of international airline passengers in units of 1,000. The data ranges from January 1949 to December 1960 or 12 years, with 144 observations. ``` import keras print('keras:', keras.__version__) # Multilayer Perceptron to Predict International Airline Passengers (t+1, given t) import numpy import matplotlib.pyplot as plt import pandas import math %matplotlib inline from keras.models import Sequential from keras.layers import Dense # fix random seed for reproducibility numpy.random.seed(7) # load the dataset dataframe = pandas.read_csv('files/international-airline-passengers.csv', usecols=[1], engine='python', skipfooter=3) dataset = dataframe.values dataset = dataset.astype('float32') # split into train and test sets train_size = int(len(dataset) * 0.67) test_size = len(dataset) - train_size train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:] print(len(train), len(test)) # convert an array of values into a dataset matrix def create_dataset(dataset, look_back=1): dataX, dataY = [], [] for i in range(len(dataset)-look_back-1): a = dataset[i:(i+look_back), 0] dataX.append(a) dataY.append(dataset[i + look_back, 0]) return numpy.array(dataX), numpy.array(dataY) # reshape into X=t and Y=t+1 look_back = 1 trainX, trainY = create_dataset(train, look_back) testX, testY = create_dataset(test, look_back) print(trainX[0], trainY[0]) # create and fit Multilayer Perceptron model model = Sequential() model.add(Dense(8, input_dim=look_back, activation='relu')) model.add(Dense(8, activation='relu')) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') model.fit(trainX, trainY, nb_epoch=200, batch_size=2, verbose=0) # Estimate model performance trainScore = model.evaluate(trainX, trainY, verbose=0) print('Train Score: %.2f MSE (%.2f RMSE)' % (trainScore, math.sqrt(trainScore))) testScore = model.evaluate(testX, testY, verbose=0) print('Test Score: %.2f MSE (%.2f RMSE)' % (testScore, math.sqrt(testScore))) # generate predictions for training trainPredict = model.predict(trainX) testPredict = model.predict(testX) # shift train predictions for plotting trainPredictPlot = numpy.empty_like(dataset) trainPredictPlot[:, :] = numpy.nan trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict # shift test predictions for plotting testPredictPlot = numpy.empty_like(dataset) testPredictPlot[:, :] = numpy.nan testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict # plot baseline and predictions plt.plot(dataset) plt.plot(trainPredictPlot) plt.plot(testPredictPlot) plt.plot(abs(dataset-testPredictPlot)) plt.show() import math print "The average error on the training dataset was {:.0f} passengers\ (in thousands per month) and the average error on the unseen\ test set was {:.0f} passengers (in thousands per month).".format(math.sqrt(trainScore) \ ,math.sqrt(testScore)) ``` ## The Window Method We can also phrase the problem so that multiple recent time steps can be used to make the prediction for the next time step. This is called the window method, and the size of the window is a parameter that can be tuned for each problem. For example, given the current time ($t$) we want to predict the value at the next time in the sequence ($t + 1$), we can use the current time ($t$) as well as the two prior times ($t-1$ and $t-2$). When phrased as a regression problem the input variables are $t-2$, $t-1$, $t$ and the output variable is $t+1$. ``` # Multilayer Perceptron to Predict International Airline Passengers (t+1, given t, t-1, t-2) import numpy import matplotlib.pyplot as plt import pandas import math from keras.models import Sequential from keras.layers import Dense # convert an array of values into a dataset matrix def create_dataset(dataset, look_back=1): dataX, dataY = [], [] for i in range(len(dataset)-look_back-1): a = dataset[i:(i+look_back), 0] dataX.append(a) dataY.append(dataset[i + look_back, 0]) return numpy.array(dataX), numpy.array(dataY) # fix random seed for reproducibility numpy.random.seed(7) # load the dataset dataframe = pandas.read_csv('files/international-airline-passengers.csv', usecols=[1], engine='python', skipfooter=3) dataset = dataframe.values dataset = dataset.astype('float32') # split into train and test sets train_size = int(len(dataset) * 0.67) test_size = len(dataset) - train_size train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:] print(len(train), len(test)) # reshape dataset look_back = 3 trainX, trainY = create_dataset(train, look_back) testX, testY = create_dataset(test, look_back) print(trainX[0], trainY[0]) # create and fit Multilayer Perceptron model model = Sequential() model.add(Dense(8, input_dim=look_back, activation='relu')) model.add(Dense(8, activation='relu')) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') model.fit(trainX, trainY, nb_epoch=200, batch_size=20, verbose=0) # Estimate model performance trainScore = model.evaluate(trainX, trainY, verbose=0) print('Train Score: %.2f MSE (%.2f RMSE)' % (trainScore, math.sqrt(trainScore))) testScore = model.evaluate(testX, testY, verbose=0) print('Test Score: %.2f MSE (%.2f RMSE)' % (testScore, math.sqrt(testScore))) # generate predictions for training trainPredict = model.predict(trainX) testPredict = model.predict(testX) # shift train predictions for plotting trainPredictPlot = numpy.empty_like(dataset) trainPredictPlot[:, :] = numpy.nan trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict # shift test predictions for plotting testPredictPlot = numpy.empty_like(dataset) testPredictPlot[:, :] = numpy.nan testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict # plot baseline and predictions plt.plot(dataset) plt.plot(trainPredictPlot) plt.plot(testPredictPlot) plt.plot(abs(dataset-testPredictPlot)) plt.show() import math print "The average error on the training dataset was {:.0f} passengers \ (in thousands per month) and the average error on the unseen \ test set was {:.0f} passengers (in thousands per month).".format(math.sqrt(trainScore) \ , math.sqrt(testScore)) ``` ## Exercise Get better performace by changing parameters: network architecture, look-back, etc. ``` import numpy import matplotlib.pyplot as plt import pandas import math from keras.models import Sequential from keras.layers import Dense # convert an array of values into a dataset matrix def create_dataset(dataset, look_back=1): dataX, dataY = [], [] for i in range(len(dataset)-look_back-1): a = dataset[i:(i+look_back), 0] dataX.append(a) dataY.append(dataset[i + look_back, 0]) return numpy.array(dataX), numpy.array(dataY) # fix random seed for reproducibility numpy.random.seed(7) # load the dataset dataframe = pandas.read_csv('files/international-airline-passengers.csv', usecols=[1], engine='python', skipfooter=3) dataset = dataframe.values dataset = dataset.astype('float32') # split into train and test sets train_size = int(len(dataset) * 0.67) test_size = len(dataset) - train_size train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:] print(len(train), len(test)) # your code here # Estimate model performance trainScore = model.evaluate(trainX, trainY, verbose=0) print('Train Score: %.2f MSE (%.2f RMSE)' % (trainScore, math.sqrt(trainScore))) testScore = model.evaluate(testX, testY, verbose=0) print('Test Score: %.2f MSE (%.2f RMSE)' % (testScore, math.sqrt(testScore))) # generate predictions for training trainPredict = model.predict(trainX) testPredict = model.predict(testX) # shift train predictions for plotting trainPredictPlot = numpy.empty_like(dataset) trainPredictPlot[:, :] = numpy.nan trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict # shift test predictions for plotting testPredictPlot = numpy.empty_like(dataset) testPredictPlot[:, :] = numpy.nan testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict # plot baseline and predictions plt.plot(dataset) plt.plot(trainPredictPlot) plt.plot(testPredictPlot) plt.plot(abs(dataset-testPredictPlot)) plt.show() import math print "The average error on the training dataset was {:.0f} passengers \ (in thousands per month) and the average error on the unseen \ test set was {:.0f} passengers (in thousands per month).".format(math.sqrt(trainScore) \ , math.sqrt(testScore)) ``` ## LSTM ``keras.layers.recurrent.LSTM(output_dim, init='glorot_uniform', inner_init='orthogonal', forget_bias_init='one', activation='tanh', inner_activation='hard_sigmoid', W_regularizer=None, U_regularizer=None, b_regularizer=None, dropout_W=0.0, dropout_U=0.0)`` Note: Making a RNN stateful means that the states for the samples of each batch will be reused as initial states for the samples in the next batch. ``` # Stacked LSTM for international airline passengers problem with stateful LSTM import numpy import matplotlib.pyplot as plt import pandas import math from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM from sklearn.preprocessing import MinMaxScaler from sklearn.metrics import mean_squared_error import tqdm # convert an array of values into a dataset matrix def create_dataset(dataset, look_back=1): dataX, dataY = [], [] for i in range(len(dataset)-look_back-1): a = dataset[i:(i+look_back), 0] dataX.append(a) dataY.append(dataset[i + look_back, 0]) return numpy.array(dataX), numpy.array(dataY) # fix random seed for reproducibility numpy.random.seed(7) # load the dataset dataframe = pandas.read_csv('files/international-airline-passengers.csv', usecols=[1], engine='python', skipfooter=3) dataset = dataframe.values dataset = dataset.astype('float32') # normalize the dataset scaler = MinMaxScaler(feature_range=(0, 1)) dataset = scaler.fit_transform(dataset) # split into train and test sets train_size = int(len(dataset) * 0.67) test_size = len(dataset) - train_size train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:] # reshape look_back = 10 trainX, trainY = create_dataset(train, look_back) testX, testY = create_dataset(test, look_back) # reshape input to be [samples, time steps, features] trainX = numpy.reshape(trainX, (trainX.shape[0], trainX.shape[1], 1)) testX = numpy.reshape(testX, (testX.shape[0], testX.shape[1], 1)) # create and fit the LSTM network batch_size = 1 model = Sequential() model.add(LSTM(64, batch_input_shape=(batch_size, look_back, 1), stateful=True, return_sequences=True)) model.add(LSTM(64, stateful=True)) model.add(Dense(16, activation='relu')) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') print trainX.shape, trainY.shape for i in tqdm.tqdm(range(20)): model.fit(trainX, trainY, nb_epoch=1, batch_size=batch_size, verbose=0, shuffle=False) model.reset_states() # make predictions trainPredict = model.predict(trainX, batch_size=batch_size) model.reset_states() testPredict = model.predict(testX, batch_size=batch_size) # invert predictions trainPredict = scaler.inverse_transform(trainPredict) trainY = scaler.inverse_transform([trainY]) testPredict = scaler.inverse_transform(testPredict) testY = scaler.inverse_transform([testY]) # calculate root mean squared error trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:,0])) print('Train Score: %.2f RMSE' % (trainScore)) testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:,0])) print('Test Score: %.2f RMSE' % (testScore)) # shift train predictions for plotting trainPredictPlot = numpy.empty_like(dataset) trainPredictPlot[:, :] = numpy.nan trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict # shift test predictions for plotting testPredictPlot = numpy.empty_like(dataset) testPredictPlot[:, :] = numpy.nan testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict # plot baseline and predictions plt.plot(scaler.inverse_transform(dataset)) plt.plot(trainPredictPlot) plt.plot(testPredictPlot) plt.show() # Stacked GRU for international airline passengers problem import numpy import matplotlib.pyplot as plt import pandas import math from keras.models import Sequential from keras.layers import Dense from keras.layers import GRU from sklearn.preprocessing import MinMaxScaler, StandardScaler from sklearn.metrics import mean_squared_error import tqdm # convert an array of values into a dataset matrix def create_dataset(dataset, look_back=1): dataX, dataY = [], [] for i in range(len(dataset)-look_back-1): a = dataset[i:(i+look_back), 0] dataX.append(a) dataY.append(dataset[i + look_back, 0]) return numpy.array(dataX), numpy.array(dataY) # fix random seed for reproducibility numpy.random.seed(7) # load the dataset dataframe = pandas.read_csv('files/international-airline-passengers.csv', usecols=[1], engine='python', skipfooter=3) dataset = dataframe.values dataset = dataset.astype('float32') # normalize the dataset # scaler = MinMaxScaler(feature_range=(0, 1)) scaler = StandardScaler() dataset = scaler.fit_transform(dataset) # split into train and test sets train_size = int(len(dataset) * 0.67) test_size = len(dataset) - train_size train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:] # reshape look_back = 10 trainX, trainY = create_dataset(train, look_back) testX, testY = create_dataset(test, look_back) # reshape input to be [samples, time steps, features] trainX = numpy.reshape(trainX, (trainX.shape[0], trainX.shape[1], 1)) testX = numpy.reshape(testX, (testX.shape[0], testX.shape[1], 1)) # create and fit the GRU network batch_size = 1 model = Sequential() model.add(GRU(32, batch_input_shape=(batch_size, look_back, 1), stateful=True, return_sequences=True)) model.add(GRU(32, stateful=True)) model.add(Dense(8, activation='relu')) model.add(Dense(8, activation='relu')) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') print trainX.shape, trainY.shape for i in tqdm.tqdm(range(50)): model.fit(trainX, trainY, nb_epoch=1, batch_size=batch_size, verbose=0, shuffle=False) model.reset_states() # make predictions trainPredict = model.predict(trainX, batch_size=batch_size) model.reset_states() testPredict = model.predict(testX, batch_size=batch_size) # invert predictions trainPredict = scaler.inverse_transform(trainPredict) trainY = scaler.inverse_transform([trainY]) testPredict = scaler.inverse_transform(testPredict) testY = scaler.inverse_transform([testY]) # calculate root mean squared error trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:,0])) print('Train Score: %.2f RMSE' % (trainScore)) testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:,0])) print('Test Score: %.2f RMSE' % (testScore)) # shift train predictions for plotting trainPredictPlot = numpy.empty_like(dataset) trainPredictPlot[:, :] = numpy.nan trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict # shift test predictions for plotting testPredictPlot = numpy.empty_like(dataset) testPredictPlot[:, :] = numpy.nan testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict # plot baseline and predictions plt.plot(scaler.inverse_transform(dataset)) plt.plot(trainPredictPlot) plt.plot(testPredictPlot) plt.show() ``` ## Power Consumption. The task here will be to be able to predict values for a timeseries : the history of 2 million minutes of a household's power consumption. We are going to use a multi-layered LSTM recurrent neural network to predict the last value of a sequence of values. Put another way, given 49 timesteps of consumption, what will be the 50th value? The initial file contains several different pieces of data. We will here focus on a single value : a house's Global_active_power history, minute by minute for almost 4 years. This means roughly 2 million points. Notes: + Neural networks usually learn way better when data is pre-processed. However regarding time-series we do not want the network to learn on data too far from the real world. So here we'll keep it simple and simply center the data to have a 0 mean. ``` import matplotlib.pyplot as plt import numpy as np import time import csv from keras.layers.core import Dense, Activation, Dropout from keras.layers.recurrent import LSTM from keras.models import Sequential np.random.seed(1234) def data_power_consumption(path_to_dataset, sequence_length=50, ratio=1.0): max_values = ratio * 2049280 with open(path_to_dataset) as f: data = csv.reader(f, delimiter=";") power = [] nb_of_values = 0 for line in data: try: power.append(float(line[2])) nb_of_values += 1 except ValueError: pass # 2049280.0 is the total number of valid values, i.e. ratio = 1.0 if nb_of_values >= max_values: break print "Data loaded from csv. Formatting..." result = [] for index in range(len(power) - sequence_length): result.append(power[index: index + sequence_length]) result = np.array(result) # shape (2049230, 50) result_mean = result.mean() result -= result_mean print "Shift : ", result_mean print "Data : ", result.shape row = int(round(0.9 * result.shape[0])) train = result[:row, :] np.random.shuffle(train) X_train = train[:, :-1] y_train = train[:, -1] X_test = result[row:, :-1] y_test = result[row:, -1] X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1)) X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1)) return [X_train, y_train, X_test, y_test] def build_model(): model = Sequential() layers = [1, 50, 100, 1] model.add(LSTM( input_dim=layers[0], output_dim=layers[1], return_sequences=True)) model.add(Dropout(0.2)) model.add(LSTM( layers[2], return_sequences=False)) model.add(Dropout(0.2)) model.add(Dense( output_dim=layers[3])) model.add(Activation("linear")) start = time.time() model.compile(loss="mse", optimizer="rmsprop") print "Compilation Time : ", time.time() - start return model def run_network(model=None, data=None): global_start_time = time.time() epochs = 1 ratio = 0.5 sequence_length = 50 path_to_dataset = 'files/household_power_consumption.txt' if data is None: print 'Loading data... ' X_train, y_train, X_test, y_test = data_power_consumption( path_to_dataset, sequence_length, ratio) else: X_train, y_train, X_test, y_test = data print '\nData Loaded. Compiling...\n' if model is None: model = build_model() try: model.fit( X_train, y_train, batch_size=512, nb_epoch=epochs, validation_split=0.05) predicted = model.predict(X_test) predicted = np.reshape(predicted, (predicted.size,)) except KeyboardInterrupt: print 'Training duration (s) : ', time.time() - global_start_time return model, y_test, 0 try: fig = plt.figure() ax = fig.add_subplot(111) ax.plot(y_test[:100]) plt.plot(predicted[:100]) plt.show() except Exception as e: print str(e) print 'Training duration (s) : ', time.time() - global_start_time return model, y_test, predicted run_network() ``` ### Exercise Train a stateful LSTM to learn an absolute cosine time series with the amplitude exponentially decreasing. ``` import numpy as np import matplotlib.pyplot as plt from keras.models import Sequential from keras.layers import Dense, LSTM % matplotlib inline def gen_cosine_amp(amp=100, period=1000, x0=0, xn=50000, step=1, k=0.0001): """Generates an absolute cosine time series with the amplitude exponentially decreasing Arguments: amp: amplitude of the cosine function period: period of the cosine function x0: initial x of the time series xn: final x of the time series step: step of the time series discretization k: exponential rate """ cos = np.zeros(((xn - x0) * step, 1, 1)) for i in range(len(cos)): idx = x0 + i * step cos[i, 0, 0] = amp * np.cos(2 * np.pi * idx / period) cos[i, 0, 0] = cos[i, 0, 0] * np.exp(-k * idx) return cos print 'Generating Data' cos = gen_cosine_amp() print 'Input shape:', cos.shape plt.plot(cos[:,0,0]) # since we are using stateful rnn tsteps can be set to 1 time_steps = 1 batch_size = 250 epochs = 100 # number of elements ahead that are used to make the prediction look_head = 100 expected_output = np.zeros((len(cos), 1)) for i in range(len(cos) - look_head): expected_output[i, 0] = np.mean(cos[i + 1:i + look_head + 1]) print 'Output shape' print(expected_output.shape) print 'Creating Model' # model = Sequential() # YOUR CODE HERE # The objective is a MSE < 0.05 # model.compile(loss='mse', optimizer='rmsprop') print 'Training' for i in range(epochs): if i%10 == 0: print('Epoch', i, '/', epochs) model.fit(cos, expected_output, batch_size=batch_size, verbose=0, nb_epoch=1, shuffle=False) model.reset_states() print 'Predicting' predicted_output = model.predict(cos, batch_size=batch_size) import math print 'MSE: ', math.sqrt(((expected_output-predicted_output)**2).sum())/len(expected_output) print 'Plotting Results' plt.subplot(2, 1, 1) plt.plot(expected_output) plt.title('Expected') plt.subplot(2, 1, 2) plt.plot(predicted_output) plt.title('Predicted') plt.show() ```
github_jupyter
[View in Colaboratory](https://colab.research.google.com/github/ckbjimmy/2018_mlw/blob/master/nb2_clustering.ipynb) # Machine Learning for Clinical Predictive Analytics We would like to introduce basic machine learning techniques and toolkits for clinical knowledge discovery in the workshop. The material will cover common useful algorithms for clinical prediction tasks, as well as the diagnostic workflow of applying machine learning to real-world problems. We will use [Google colab](https://colab.research.google.com/) / python jupyter notebook and two datasets: - Breast Cancer Wisconsin (Diagnostic) Database, and - pre-extracted ICU data from PhysioNet Database to build predictive models. The learning objectives of this workshop tutorial are: - Learn how to use Google colab / jupyter notebook - Learn how to build machine learning models for clinical classification and/or clustering tasks To accelerate the progress without obstacles, we hope that the readers fulfill the following prerequisites: - [Skillset] basic python syntax - [Requirements] Google account OR [anaconda](https://anaconda.org/anaconda/python) In part 1, we will go through the basic of machine learning for classification problems. In part 2, we will investigate more on unsupervised learning methods for clustering and visualization. In part 3, we will play with neural networks. # Part II – Unsupervised learning algorithms In part 2, we will investigate more on unsupervised learning algorithms of clustering and dimensionality reduction. In the first part of the workshop, we introduce many algorithms for classification tasks. Those tasks belong to the scenario of **supervised learning**, which means that the label/annotation of your training dataset are given. For example, you already know some tumor samples are malignant or benign. Now we will look at the other scenario called **unsupervised learning**, which is for finding the patterns (hidden representation) in the data. In such scenario, the data do not need to be labelled, and we just need the input variables/features without any outcome variables. Unsupervised learning algorithms will try to discover the pattern and inner structure of the data by itself, and group the **similar** data points together and form a cluster, or compress high dimension data to lower dimension data representation. The difference between supervised (classification and regression problems) and unsupervised learning can be rougly shown in the following picture. ![unsup](http://oliviaklose.azurewebsites.net/content/images/2015/02/2-supervised-vs-unsupervised-1.png) [Source] Andrew Ng's Machine Learning Coursera Course Lecture 1 After going through this tutorial, we hope that you will understand how to use scikit-learn to design and build models for clustering and dimensionality reduction, and how to evaluate them. Again, we start from the breast cancer dataset in UCI data repository to have a quick view on how to do the analysis and build models using well-structured data. We load the breast cancer dataset from `sklearn.datasets`, and preprocess the dataset as we did in Part I. We visualize the data in the vector space just using the data in the first two columns, and color them with the provided labels. We realize that simply using two features may separate two clusters at some degrees. ``` from sklearn import datasets import matplotlib.pyplot as plt df_bc = datasets.load_breast_cancer() print(df_bc.feature_names) print(df_bc.target_names) X = df_bc['data'] y = df_bc['target'] label = {0: 'malignant', 1: 'benign'} x_axis = X[:, 0] # mean radius y_axis = X[:, 1] # mean texture plt.scatter(x_axis, y_axis, c=y) plt.show() ``` ## Clustering We are now going to use clustering algorithms to cluster data points into several groups just using predictors/features. ### K-means clustering K-means clustering is an iterative algorithm that aims to find local maxima in each iteration. In k-means, we need to choose the number of clusters, $k$, beforehand. There are many methods to decide $k$ value if it is unknown. The simplest approach is that we can use elbow (bend) method in the sum of squared error screen plot for deciding $k$ value. The elbow point can be suggested as the number of culsters for k-means. ``` from sklearn.cluster import KMeans from sklearn.metrics import confusion_matrix from sklearn.decomposition import PCA # decide k value Nc = range(1, 5) kmeans = [KMeans(n_clusters=i) for i in Nc] kmeans score = [kmeans[i].fit(X).score(X) for i in range(len(kmeans))] score plt.plot(Nc, score) plt.xlabel('Number of Clusters') plt.ylabel('Score') plt.title('Elbow Curve') plt.show() ``` Since we already know that there are two classes in our dataset, we then set the $k$ value of 2 to the parameter `n_clusters` in our model. Based on the centroid distance between each points, the next given inputs are segregated into respected clusters. Each centroid of a cluster is a collection of feature values which define the resulting groups. Examining the centroid feature weights can be used to qualitatively interpret what kind of group each cluster represent. Now we use all features (`X`) for clustering (`km`). We use confusion matrix to demonstrate the performance of k-means clustering. The accuracy of the model can be computed by the summation of diagonal (or reverse diagonal) elements divided by the sample size. In our case, $\frac{(356+130)}{(82+356+130+1)} = 0.85$. For visualization, here we use principal component analysis (PCA) for higher dimension data since it is impossible to simply do it on 2D plot with raw data. We will introduce PCA later in the section of dimensionality reduction. We can see that two clusters can be well separated with given features. For the details of k-means algorithm, please check the [wikipedia page](https://en.wikipedia.org/wiki/K-means_clustering). ``` # k-means k = 2 km = KMeans(n_clusters=k) km.fit(X) print(km.labels_) # performance cm = confusion_matrix(y, km.labels_) print(cm) # visualization pca = PCA(n_components=2).fit(X) pca_2d = pca.transform(X) for i in range(0, pca_2d.shape[0]): if km.labels_[i] == 0: c1 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='r', marker='+') elif km.labels_[i] == 1: c2 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='g', marker='o') plt.legend([c1, c2], ['Cluster 1', 'Cluster 2']) plt.title('K-means finds 2 clusters') plt.show() ``` ### DBSCAN clustering DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is another clustering algorithm that you don't need to decide $k$ value beforehand. However, the tradeoff is that you need to decide values of two parameters: - `eps` (maximum distance between two data points to be considered in the same neighborhood) and - `min_samples` (minimum amount of data points in a neighborhood to be considered a cluster for DBSCAN). We can see that some samples are not clustered in the correct groups. You may use different values of two parameters for better clustering. ``` from sklearn.cluster import DBSCAN # DBSCAN dbscan = DBSCAN(eps=100, min_samples=10) dbscan.fit(X) print(dbscan.labels_) # performance cm = confusion_matrix(y, dbscan.labels_) print(cm) # visualization pca = PCA(n_components=2).fit(X) pca_2d = pca.transform(X) for i in range(0, pca_2d.shape[0]): if dbscan.labels_[i] == 0: c1 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='r', marker='+') elif dbscan.labels_[i] == 1: c2 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='g', marker='o') elif dbscan.labels_[i] == -1: c3 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='b', marker='*') plt.legend([c1, c2, c3], ['Cluster 1', 'Cluster 2', 'Noise']) plt.title('DBSCAN finds 2 clusters and Noise') plt.show() ``` There are also other clustering algorithms provided in the `scikit-learn`. You may check the [scikit-learn document of clustering](http://scikit-learn.org/stable/modules/clustering.html#clustering) and play with them! ## Dimensionality reduction Dimensionality reduction methods can reduce the number of features and represent the data with much smaller, compressed representation. The technique is helpful for analyzing sparse data that may cause an issue of ["curse of dimensionality"](https://en.wikipedia.org/wiki/Curse_of_dimensionality). Here we will introduce two commonly seen algorithms for dimensionality reduction, principal component analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE). ### Principal component analysis (PCA) PCA guarantees finding the best linear transformation that reduces the number of dimensions with a minimum loss of information. ![loss](https://raw.githubusercontent.com/ckbjimmy/2018_mlw/master/img/pca.png) [Source] Courtesy by Prof. HY Lee (NTU) Sometimes the information that was lost is regarded as noise---information that does not represent the phenomena we are trying to model, but is rather a side effect of some usually unknown processes. In the example, we preserve the first two principal components (PC1 and PC2) and visualize the data after PCA transformation. The figure shows that PCA compresses the data from 30-dimension to 2-dimension without lossing too much information for clustering data points. ``` from sklearn.decomposition import PCA # original feature number print(X.shape[1]) # PCA pca = PCA(n_components=2).fit(X) pca_2d = pca.transform(X) x_axis = pca_2d[:, 0] y_axis = pca_2d[:, 1] plt.scatter(x_axis, y_axis, c=y) plt.show() ``` We can even use the result of PCA transformation to perform classification task (just simply use logistic regression as an example) with much compact data. The results show that using PCA transformed data for classification does not decrease the performance of classification too much. ``` from sklearn.linear_model import LogisticRegression from sklearn import metrics # use all features clf = LogisticRegression(fit_intercept=True) clf.fit(X, y) yhat = clf.predict_proba(X)[:,1] auc = metrics.roc_auc_score(y, yhat) print('{:0.3f} - AUROC of model (training set).'.format(auc)) # use PCA transformed features clf = LogisticRegression(fit_intercept=True) clf.fit(pca_2d, y) yhat = clf.predict_proba(pca_2d)[:,1] auc = metrics.roc_auc_score(y, yhat) print('{:0.3f} - AUROC of model (training set).'.format(auc)) ``` ### t-SNE (t-distributed stochastic neighbor embedding) PCA utilizes the **linear transformation** of data. However, it may be better to consider **non-linearity** for data with higher dimension. t-SNE is one of the unsupervised learning method for higher dimension data visualization. It adopts the idea of manifold learning of modeling each high-dimensional data point by a lower dimensional data point in such a way that similar objects are modeled by nearby points with high probability. Again, we use the result of t-SNE modeling and realize that it still preserve most of the information inside the data for classification (but worse than simple PCA in this case). ``` from sklearn.manifold import TSNE # t-SNE ts = TSNE(learning_rate=100) tsne = ts.fit_transform(X) x_axis = tsne[:, 0] y_axis = tsne[:, 1] plt.scatter(x_axis, y_axis, c=y) plt.show() from sklearn.linear_model import LogisticRegression from sklearn import metrics # use all features clf = LogisticRegression(fit_intercept=True) clf.fit(X, y) yhat = clf.predict_proba(X)[:,1] auc = metrics.roc_auc_score(y, yhat) print('{:0.3f} - AUROC of model (training set).'.format(auc)) # use tSNE transformed features clf = LogisticRegression(fit_intercept=True) clf.fit(tsne, y) yhat = clf.predict_proba(tsne)[:,1] auc = metrics.roc_auc_score(y, yhat) print('{:0.3f} - AUROC of model (training set).'.format(auc)) ``` ## Exercise ### Iris dataset Try to use iris dataset! We show the result of using k-means for iris dataset. Please try to modify the above codes to see what will happen when you apply DBSCAN, PCA and t-SNE on this dataset. ``` df = datasets.load_iris() print(df.feature_names) print(df.target_names) X = df['data'] y = df['target'] label = {0: 'setosa', 1: 'versicolor', 2: 'virginica'} # simply visualize using two features x_axis = X[:, 0] y_axis = X[:, 1] plt.scatter(x_axis, y_axis, c=y) plt.show() # find optimal k value Nc = range(1, 5) kmeans = [KMeans(n_clusters=i) for i in Nc] kmeans score = [kmeans[i].fit(X).score(X) for i in range(len(kmeans))] score plt.plot(Nc, score) plt.xlabel('Number of Clusters') plt.ylabel('Score') plt.title('Elbow Curve') plt.show() # k-means k = 3 km = KMeans(n_clusters=k) km.fit(X) # performance cm = confusion_matrix(y, km.labels_) print(cm) # PCA pca = PCA(n_components=2).fit(X) pca_2d = pca.transform(X) for i in range(0, pca_2d.shape[0]): if km.labels_[i] == 0: c1 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='r', marker='+') elif km.labels_[i] == 1: c2 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='g', marker='o') elif km.labels_[i] == 2: c3 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='b', marker='*') plt.legend([c1, c2, c3], ['Cluster 1', 'Cluster 2', 'Cluster 3']) plt.title('K-means finds 3 clusters') plt.show() ``` ### PhysioNet dataset How about PhysioNet dataset? It seems like that the quality of unsupervised model is not good enough. This may because of significant reduction of dimension, which yield loss of information. ``` import numpy as np import pandas as pd from sklearn.preprocessing import Imputer from sklearn.preprocessing import StandardScaler # load data dataset = pd.read_csv('https://raw.githubusercontent.com/ckbjimmy/2018_mlw/master/data/PhysionetChallenge2012_data.csv') X = dataset.iloc[:, 1:].values y = dataset.iloc[:, 0].values # imputation and normalization X = Imputer(missing_values='NaN', strategy='mean', axis=0).fit(X).transform(X) X = StandardScaler().fit(X).transform(X) # find k value Nc = range(1, 5) kmeans = [KMeans(n_clusters=i) for i in Nc] kmeans score = [kmeans[i].fit(X).score(X) for i in range(len(kmeans))] score plt.plot(Nc, score) plt.xlabel('Number of Clusters') plt.ylabel('Score') plt.title('Elbow Curve') plt.show() # k-means k = 2 km = KMeans(n_clusters=k) km.fit(X) # performance cm = confusion_matrix(y, km.labels_) print(cm) # visualization pca = PCA(n_components=2).fit(X) pca_2d = pca.transform(X) for i in range(0, pca_2d.shape[0]): if km.labels_[i] == 0: c1 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='r', marker='+') elif km.labels_[i] == 1: c2 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='g', marker='o') plt.legend([c1, c2], ['Cluster 1', 'Cluster 2']) plt.title('K-means finds 2 clusters') plt.show() # use all features clf = LogisticRegression(fit_intercept=True) clf.fit(X, y) yhat = clf.predict_proba(X)[:,1] auc = metrics.roc_auc_score(y, yhat) print('{:0.3f} - AUROC of model (training set).'.format(auc)) # use PCA transformed features clf = LogisticRegression(fit_intercept=True) clf.fit(pca_2d, y) yhat = clf.predict_proba(pca_2d)[:,1] auc = metrics.roc_auc_score(y, yhat) print('{:0.3f} - AUROC of model (training set).'.format(auc)) ts = TSNE(learning_rate=200) tsne = ts.fit_transform(X) x_axis = tsne[:, 0] y_axis = tsne[:, 1] plt.scatter(x_axis, y_axis, c=y) plt.show() # use all clf = LogisticRegression(fit_intercept=True) clf.fit(X, y) yhat = clf.predict_proba(X)[:,1] auc = metrics.roc_auc_score(y, yhat) print('{:0.3f} - AUROC of model (training set).'.format(auc)) # use tSNE clf = LogisticRegression(fit_intercept=True) clf.fit(tsne, y) yhat = clf.predict_proba(tsne)[:,1] auc = metrics.roc_auc_score(y, yhat) print('{:0.3f} - AUROC of model (training set).'.format(auc)) pca_16 = PCA(n_components=16).fit(X).transform(X) tsne = TSNE(learning_rate=200).fit_transform(pca_16) x_axis = tsne[:, 0] y_axis = tsne[:, 1] plt.scatter(x_axis, y_axis, c=y) plt.show() # use all clf = LogisticRegression(fit_intercept=True) clf.fit(X, y) yhat = clf.predict_proba(X)[:,1] auc = metrics.roc_auc_score(y, yhat) print('{:0.3f} - AUROC of model (training set).'.format(auc)) # use tSNE clf = LogisticRegression(fit_intercept=True) clf.fit(tsne, y) yhat = clf.predict_proba(tsne)[:,1] auc = metrics.roc_auc_score(y, yhat) print('{:0.3f} - AUROC of model (training set).'.format(auc)) ``` From the above cases, we may guess that 2-dimension is enough for breast cancer and Iris data representation but not PhysioNet mortality prediction. Afterall, the feature number of PhysioNet data is more than 180. You can try to increase the number of dimensionality reduction from 2 to more (16, 32, 64) and see how the performance will improve. The following codes give an example of using 16 dimensions---although they can not be visualized in a 2D plot. ``` pca_16 = PCA(n_components=16).fit(X).transform(X) # use all clf = LogisticRegression(fit_intercept=True) clf.fit(X, y) yhat = clf.predict_proba(X)[:,1] auc = metrics.roc_auc_score(y, yhat) print('{:0.3f} - AUROC of model (training set).'.format(auc)) # use tSNE clf = LogisticRegression(fit_intercept=True) clf.fit(pca_16, y) yhat = clf.predict_proba(pca_16)[:,1] auc = metrics.roc_auc_score(y, yhat) print('{:0.3f} - AUROC of model (training set).'.format(auc)) ``` ## More unsupervised learning algorithms There are still a lot unsupervised ways to represent the data. We won't cover the remaining algorithms but you may check them in the future when you want to dive into this field. - Anomaly detection - Autoencoders - Generative Adversarial Networks (GAN) - ...more ``` ```
github_jupyter
<h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Goal" data-toc-modified-id="Goal-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Goal</a></span></li><li><span><a href="#Var" data-toc-modified-id="Var-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Var</a></span></li><li><span><a href="#Init" data-toc-modified-id="Init-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Init</a></span></li><li><span><a href="#LLMGA" data-toc-modified-id="LLMGA-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>LLMGA</a></span><ul class="toc-item"><li><span><a href="#Setup" data-toc-modified-id="Setup-4.1"><span class="toc-item-num">4.1&nbsp;&nbsp;</span>Setup</a></span></li><li><span><a href="#config" data-toc-modified-id="config-4.2"><span class="toc-item-num">4.2&nbsp;&nbsp;</span>config</a></span></li><li><span><a href="#Run" data-toc-modified-id="Run-4.3"><span class="toc-item-num">4.3&nbsp;&nbsp;</span>Run</a></span></li></ul></li><li><span><a href="#Summary" data-toc-modified-id="Summary-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>Summary</a></span><ul class="toc-item"><li><span><a href="#No.-of-genomes" data-toc-modified-id="No.-of-genomes-5.1"><span class="toc-item-num">5.1&nbsp;&nbsp;</span>No. of genomes</a></span></li><li><span><a href="#CheckM" data-toc-modified-id="CheckM-5.2"><span class="toc-item-num">5.2&nbsp;&nbsp;</span>CheckM</a></span></li><li><span><a href="#Taxonomy" data-toc-modified-id="Taxonomy-5.3"><span class="toc-item-num">5.3&nbsp;&nbsp;</span>Taxonomy</a></span><ul class="toc-item"><li><span><a href="#Taxonomic-novelty" data-toc-modified-id="Taxonomic-novelty-5.3.1"><span class="toc-item-num">5.3.1&nbsp;&nbsp;</span>Taxonomic novelty</a></span></li><li><span><a href="#Quality-~-taxonomy" data-toc-modified-id="Quality-~-taxonomy-5.3.2"><span class="toc-item-num">5.3.2&nbsp;&nbsp;</span>Quality ~ taxonomy</a></span></li></ul></li></ul></li><li><span><a href="#sessionInfo" data-toc-modified-id="sessionInfo-6"><span class="toc-item-num">6&nbsp;&nbsp;</span>sessionInfo</a></span></li></ul></div> # Goal * running `LLMGA` on metagenome datasets * studyID = PRJNA485217 * host = primate # Var ``` work_dir = '/ebio/abt3_projects/Georg_animal_feces/data/metagenome/multi-study/BioProjects/' tmp_out_dir = '/ebio/abt3_projects/databases_no-backup/animal_gut_metagenomes/multi-study_MG-asmbl/' pipeline_dir = '/ebio/abt3_projects/methanogen_host_evo/bin/llmga-find-refs/' studyID = 'PRJNA485217' threads = 24 ``` # Init ``` library(dplyr) library(tidyr) library(ggplot2) library(data.table) set.seed(8304) source('/ebio/abt3_projects/Georg_animal_feces/code/misc_r_functions/init.R') ``` # LLMGA ## Setup ``` out_dir = file.path(tmp_out_dir, studyID) make_dir(out_dir) out_dir = file.path(out_dir, 'LLMGA') make_dir(out_dir) ref_genomes = file.path(work_dir, studyID, 'LLMGA-find-refs/references/ref_genomes.fna') cat(ref_genomes) ``` ## config ``` cat_file(file.path(out_dir, 'config.yaml')) ``` ## Run ``` (snakemake_dev) @ rick:/ebio/abt3_projects/methanogen_host_evo/bin/llmga $ screen -L -S llmga-PRJNA485217 ./snakemake_sge.sh /ebio/abt3_projects/databases_no-backup/animal_gut_metagenomes/multi-study_MG-asmbl/PRJNA485217/LLMGA/config.yaml cluster.json /ebio/abt3_projects/databases_no-backup/animal_gut_metagenomes/multi-study_MG-asmbl/PRJNA485217/LLMGA/SGE_log 24 ``` ``` pipelineInfo('/ebio/abt3_projects/methanogen_host_evo/bin/llmga') ``` # Summary ``` asmbl_dir = out_dir = file.path(tmp_out_dir, studyID, 'LLMGA') checkm_markers_file = file.path(asmbl_dir, 'checkm', 'markers_qa_summary.tsv') gtdbtk_bac_sum_file = file.path(asmbl_dir, 'gtdbtk', 'gtdbtk_bac_summary.tsv') gtdbtk_arc_sum_file = file.path(asmbl_dir, 'gtdbtk', 'gtdbtk_ar_summary.tsv') bin_dir = file.path(asmbl_dir, 'bin') das_tool_dir = file.path(asmbl_dir, 'bin_refine', 'DAS_Tool') drep_dir = file.path(asmbl_dir, 'drep', 'drep') # bin genomes ## maxbin2 bin_files = list.files(bin_dir, '*.fasta$', full.names=TRUE, recursive=TRUE) bin = data.frame(binID = gsub('\\.fasta$', '', basename(bin_files)), fasta = bin_files, binner = bin_files %>% dirname %>% basename, sample = bin_files %>% dirname %>% dirname %>% basename) ## metabat2 bin_files = list.files(bin_dir, '*.fa$', full.names=TRUE, recursive=TRUE) X = data.frame(binID = gsub('\\.fa$', '', basename(bin_files)), fasta = bin_files, binner = bin_files %>% dirname %>% basename, sample = bin_files %>% dirname %>% dirname %>% basename) ## combine bin = rbind(bin, X) X = NULL bin %>% dfhead # DAS-tool genomes dastool_files = list.files(das_tool_dir, '*.fa$', full.names=TRUE, recursive=TRUE) dastool = data.frame(binID = gsub('\\.fa$', '', basename(dastool_files)), fasta = dastool_files) dastool %>% dfhead # drep genome files P = file.path(drep_dir, 'dereplicated_genomes') drep_files = list.files(P, '*.fa$', full.names=TRUE) drep = data.frame(binID = gsub('\\.fa$', '', basename(drep_files)), fasta = drep_files) drep %>% dfhead # checkm info markers_sum = read.delim(checkm_markers_file, sep='\t') markers_sum %>% nrow %>% print drep_j = drep %>% inner_join(markers_sum, c('binID'='Bin.Id')) drep_j %>% dfhead # gtdb ## bacteria X = read.delim(gtdbtk_bac_sum_file, sep='\t') %>% dplyr::select(-other_related_references.genome_id.species_name.radius.ANI.AF.) %>% separate(classification, c('Domain', 'Phylum', 'Class', 'Order', 'Family', 'Genus', 'Species'), sep=';') X %>% nrow %>% print ## archaea # Y = read.delim(gtdbtk_arc_sum_file, sep='\t') %>% # dplyr::select(-other_related_references.genome_id.species_name.radius.ANI.AF.) %>% # separate(classification, c('Domain', 'Phylum', 'Class', 'Order', 'Family', 'Genus', 'Species'), sep=';') # Y %>% nrow %>% print ## combined drep_j = drep_j %>% left_join(X, c('binID'='user_genome')) ## status X = Y = NULL drep_j %>% dfhead ``` ## No. of genomes ``` cat('Number of binned genomes:', bin$fasta %>% unique %>% length) cat('Number of DAS-Tool passed genomes:', dastool$binID %>% unique %>% length) cat('Number of 99% ANI de-rep genomes:', drep_j$binID %>% unique %>% length) ``` ## CheckM ``` # checkm stats p = drep_j %>% dplyr::select(binID, Completeness, Contamination) %>% gather(Metric, Value, -binID) %>% ggplot(aes(Value)) + geom_histogram(bins=30) + labs(y='No. of MAGs\n(>=99% ANI derep.)') + facet_grid(Metric ~ ., scales='free_y') + theme_bw() dims(4,3) plot(p) ``` ## Taxonomy ``` # summarizing by taxonomy p = drep_j %>% unite(Taxonomy, Phylum, Class, sep=';', remove=FALSE) %>% group_by(Taxonomy, Phylum) %>% summarize(n = n()) %>% ungroup() %>% ggplot(aes(Taxonomy, n, fill=Phylum)) + geom_bar(stat='identity') + coord_flip() + labs(y='No. of MAGs\n(>=99% ANI derep.)') + theme_bw() dims(7,4) plot(p) # summarizing by taxonomy p = drep_j %>% unite(Taxonomy, Phylum, Class, Family, sep=';', remove=FALSE) %>% group_by(Taxonomy, Phylum) %>% summarize(n = n()) %>% ungroup() %>% ggplot(aes(Taxonomy, n, fill=Phylum)) + geom_bar(stat='identity') + coord_flip() + labs(y='No. of MAGs\n(>=99% ANI derep.)') + theme_bw() dims(7,4) plot(p) ``` ### Taxonomic novelty ``` # no close ANI matches p = drep_j %>% unite(Taxonomy, Phylum, Class, sep=';', remove=FALSE) %>% mutate(closest_placement_ani = closest_placement_ani %>% as.character, closest_placement_ani = ifelse(closest_placement_ani == 'N/A', 0, closest_placement_ani), closest_placement_ani = ifelse(is.na(closest_placement_ani), 0, closest_placement_ani), closest_placement_ani = closest_placement_ani %>% as.Num) %>% mutate(has_species_placement = ifelse(closest_placement_ani >= 95, 'ANI >= 95%', 'No match')) %>% ggplot(aes(Taxonomy, fill=Phylum)) + geom_bar() + facet_grid(. ~ has_species_placement) + coord_flip() + labs(y='Closest placement ANI') + theme_bw() dims(7,4) plot(p) p = drep_j %>% filter(Genus == 'g__') %>% unite(Taxonomy, Phylum, Class, Order, Family, sep='; ', remove=FALSE) %>% mutate(Taxonomy = stringr::str_wrap(Taxonomy, 45), Taxonomy = gsub(' ', '', Taxonomy)) %>% group_by(Taxonomy, Phylum) %>% summarize(n = n()) %>% ungroup() %>% ggplot(aes(Taxonomy, n, fill=Phylum)) + geom_bar(stat='identity') + coord_flip() + labs(y='No. of MAGs lacking a\ngenus-level classification') + theme_bw() dims(6.5,3) plot(p) ``` ### Quality ~ taxonomy ``` p = drep_j %>% unite(Taxonomy, Phylum, Class, sep='; ', remove=FALSE) %>% dplyr::select(Taxonomy, Phylum, Completeness, Contamination) %>% gather(Metric, Value, -Taxonomy, -Phylum) %>% ggplot(aes(Taxonomy, Value, color=Phylum)) + geom_boxplot() + facet_grid(. ~ Metric, scales='free_x') + coord_flip() + labs(y='CheckM quality') + theme_bw() dims(7,3.5) plot(p) # just unclassified at genus/species p = drep_j %>% filter(Genus == 'g__' | Species == 's__') %>% unite(Taxonomy, Phylum, Class, sep='; ', remove=FALSE) %>% dplyr::select(Taxonomy, Phylum, Completeness, Contamination) %>% gather(Metric, Value, -Taxonomy, -Phylum) %>% ggplot(aes(Taxonomy, Value, color=Phylum)) + geom_boxplot() + facet_grid(. ~ Metric, scales='free_x') + coord_flip() + labs(y='CheckM quality') + theme_bw() dims(7,3.5) plot(p) ``` # sessionInfo ``` sessionInfo() ```
github_jupyter
``` import numpy as np import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') %matplotlib inline ``` ### The single most important equation in linear systems $$\mathbf{y} = \mathbf{A}\mathbf{x}$$ ### Or $$\mathbf{Y} = \mathbf{A}\mathbf{X}$$ $$\mathbf{y} = \mathbf{A}\mathbf{x}$$ ### Where $\mathbf{x}$ is the input, $\mathbf{y}$ is the output, or observations, and $\mathbf{A}$ is a matrix of coefficients. -------------- # Linear System of Equations ### Question: Why does it take two points to define a line? ``` # pick any two points, at random, between 0 and 10 # First point - P px, py = np.random.randint(0, high=11, size=(2,)) # Second point - Q qx, qy = np.random.randint(0, high=11, size=(2,)) fig, ax = plt.subplots() ax.scatter([px, qx], [py, qy], s=100) ax.annotate('P', [px, py], fontsize='xx-large') ax.annotate('Q', [qx, qy], fontsize='xx-large') ax.axis([-1, 12, -1, 12]) ax.set_aspect('auto') ``` ### Assume that the two points are joined by a line $$y = mx + c$$ ### i.e. $$p_{y} = mp_{x} + c$$ ### and $$q_{y} = mq_{x} + c$$ ### Exercise: Arrange the equations above in the form $$\mathbf{d} = \mathbf{A}\mathbf{b}$$ ### What are $\mathbf{A}$, $\mathbf{b}$ and $\mathbf{d}$? ``` np.linalg.solve? ``` ### Exercise: Construct the matrices $\mathbf{A}$, $\mathbf{b}$ and $\mathbf{c}$ with NumPy and solve for the slope and the intercept of the line ``` ### Put the slope in the variable `m` and the intercept in a variable `c`. ### Then run the next cell to check your solution # enter code here xx = np.linspace(0, 10, 100) yy = m * xx + c fig, ax = plt.subplots() ax.scatter([px, qx], [py, qy], s=100) ax.annotate('P', [px, py], fontsize='xx-large') ax.annotate('Q', [qx, qy], fontsize='xx-large') ax.plot(xx, yy) ``` # What you just solved was a trivial form of linear regression! ----------------- # Types of Linear Systems * ## Ideal System - ### number of equations = number of unknowns - ### Unique solutions * ## Underdetermined System: - ### number of equations < number of unknowns - ### Infinitely many solutions! (Or no solution) * ## Overdetermined systems: - ### number of equations > number of unknowns - ### No unique solutions # Application: Linear Regression ## We want to fit a straight line through the following dataset: ``` import pandas as pd df = pd.read_csv('data/hwg.csv') fig, ax = plt.subplots(figsize=(10, 8)) ax.scatter(df['Height'], df['Weight'], alpha=0.2) ``` ### Question: What type of a system of equations is this? Ideal, underdetermined or overdetermined? ### Each y-coordinate, $y_{i}$ can be defined as: ### $$y_{i} = x_{i}\beta + \epsilon$$ ## Ordinary Least Squares solution ### Optimal solution: Find the $\beta$ which minimizes: ### $$S(\beta) = \|\mathbf{y} -\mathbf{x}\beta\|^2$$ ### The optimal $\beta$ is: ### $$\hat{\beta} = (\mathbf{x}^{T}\mathbf{x})^{-1}\mathbf{x}^{T}\mathbf{y}$$ ``` np.transpose? np.linalg.inv? np.dot? X = np.c_[np.ones((df.shape[0],)), df['Height'].values] Y = df['Weight'].values.reshape(-1, 1) ``` ### Exercise: use the formula above to find the optimal beta, given the X and Y as defined. ### Place your solution in a variable named `beta`, ### then run the cell below to check your solution ``` # enter code here fig, ax = plt.subplots(figsize=(10, 8)) ax.scatter(df['Height'], df['Weight'], alpha=0.2) ax.plot(X[:, 1], np.dot(X, beta).ravel(), 'g') ```
github_jupyter
``` import bs4 import taxon import gui_widgets from wikidataintegrator import wdi_core import bibtexparser import requests import pandas as pd import json import ipywidgets as widgets from IPython.display import IFrame, clear_output, HTML, Image from ipywidgets import interact, interactive, fixed, interact_manual import math def fetch_missing_wikipedia_articles(url): photos = json.loads(requests.get(url).text) temp_results = [] for obs in photos["results"]: if obs["taxon"]["wikipedia_url"] is None: result = dict() result["inat_obs_id"] = obs["id"] result["inat_taxon_id"] = obs["taxon"]["id"] result["taxon_name"] = obs["taxon"]["name"] temp_results.append(result) to_verify = [] for temp in temp_results: if temp["taxon_name"] not in to_verify: to_verify.append(temp["taxon_name"]) verified = verify_wikidata(to_verify) results = [] for temp in temp_results: if temp["taxon_name"] in verified: results.append(temp) return results def verify_wikidata(taxon_names): progress = widgets.IntProgress( value=1, min=0, max=len(taxon_names)/50, description='Wikidata:', bar_style='', # 'success', 'info', 'warning', 'danger' or '' style={'bar_color': 'blue'}, orientation='horizontal') display(progress) verified = [] i = 1 for chunks in [taxon_names[i:i + 50] for i in range(0, len(taxon_names), 50)]: query = """ SELECT DISTINCT ?taxon_name (COUNT(?item) AS ?item_count) (COUNT(?article) AS ?article_count) WHERE {{ VALUES ?taxon_name {{{names}}} {{?item wdt:P225 ?taxon_name .}} UNION {{?item wdt:P225 ?taxon_name . ?article schema:about ?item ; schema:isPartOf <https://en.wikipedia.org/> .}} UNION {{?basionym wdt:P566 ?item ; wdt:P225 ?taxon_name . ?article schema:about ?item ; schema:isPartOf <https://en.wikipedia.org/> .}} UNION {{?basionym wdt:P566 ?item . ?item wdt:P225 ?taxon_name . ?article schema:about ?basionym ; schema:isPartOf <https://en.wikipedia.org/> .}} }} GROUP BY ?taxon_name """.format(names=" ".join('"{0}"'.format(w) for w in chunks)) url = "https://query.wikidata.org/sparql?format=json&query="+query #print(url) progress.value = i i+=1 try: results = json.loads(requests.get(url).text) except: continue for result in results["results"]["bindings"]: if result["article_count"]["value"]=='0': verified.append(result["taxon_name"]["value"]) return verified def render_results(photos, url): progress = widgets.IntProgress( value=1, min=0, max=math.ceil(photos["total_results"]/200)+1, description='iNaturalist:', bar_style='', # 'success', 'info', 'warning', 'danger' or '' style={'bar_color': 'green'}, orientation='horizontal') display(progress) for page in range(1, math.ceil(photos["total_results"]/200)+1): nextpageresult = json.loads(requests.get(url+"&page="+str(page)).text) progress.value = page+1 for obs in nextpageresult["results"]: photos["results"].append(obs) table = dict() for result in photos["results"]: if result["taxon"]["id"] not in table.keys(): table[result["taxon"]["id"]] = dict() table[result["taxon"]["id"]]["taxon_name"] = result["taxon"]["name"] for photo in result["observation_photos"]: if "photos" not in table[result["taxon"]["id"]].keys(): table[result["taxon"]["id"]]["photos"] = [] table[result["taxon"]["id"]]["photos"].append(photo["photo"]["url"]) to_verify = [] for taxon in table.keys(): to_verify.append(table[taxon]['taxon_name']) verified = verify_wikidata(to_verify) result_rows = [] for taxon in table.keys(): if table[taxon]["taxon_name"] in verified: result_row = [] #result_row.append(interactive(get_data, taxon_id=str(taxon))) stub_button = widgets.Button( description='WP stub', disabled=False, button_style='', # 'success', 'info', 'warning', 'danger' or '' tooltip='Click me', icon='check' # (FontAwesome names without the `fa-` prefix) ) stub_button.taxon_id = str(taxon) stub_button.on_click(get_data) result_row.append(stub_button) result_row.append(widgets.Label(value="id: {taxon_id}".format(taxon_id=str(taxon)))) result_row.append(widgets.Label(value="name: {taxon_name}".format(taxon_name=str(table[taxon]["taxon_name"])))) photos = [] for photo in table[taxon]["photos"]: photos.append(photo) result_row.append(widgets.HTML(gallery(photos))) result_rows.append(widgets.HBox(result_row)) return widgets.VBox(result_rows) def fetch_by_user(username, license): url = "https://api.inaturalist.org/v1/observations?photo_license="+license+"&quality_grade=research&per_page=200&user_id="+username result = json.loads(requests.get(url).text) return display(render_results(json.loads(requests.get(url).text), url)) def fetch_by_taxon(taxon_id, license): url = "https://api.inaturalist.org/v1/observations?photo_license="+license+"&taxon_id="+str(taxon_id)+"&quality_grade=research&per_page=200&subview=grid" return display(render_results(json.loads(requests.get(url).text)), url) def fetch_by_country(country_code, license): # results = fetch_by_place_code(country_code) url = "https://api.inaturalist.org/v1/observations?photo_license="+license+"&place_id="+str(country_code)+"&quality_grade=research&per_page=200&subview=grid" return display(render_results(json.loads(requests.get(url).text)), url) def search_by_taxon(taxon_str, rank, license): url = "https://api.inaturalist.org/v1/taxa/autocomplete?q="+taxon_str+"&rank="+rank results = json.loads(requests.get(url).text) display(fetch_by_taxon(results["results"][0]["id"], license)) def search_species_place(place, license): url = "https://api.inaturalist.org/v1/places/autocomplete?q="+str(place) results = json.loads(requests.get(url).text) display(fetch_by_country(results["results"][0]["id"], license)) def _src_from_data(data): """Base64 encodes image bytes for inclusion in an HTML img element""" img_obj = Image(data=data) for bundle in img_obj._repr_mimebundle_(): for mimetype, b64value in bundle.items(): if mimetype.startswith('image/'): return f'data:{mimetype};base64,{b64value}' def gallery(images, row_height='auto'): """Shows a set of images in a gallery that flexes with the width of the notebook. Parameters ---------- images: list of str or bytes URLs or bytes of images to display row_height: str CSS height value to assign to all images. Set to 'auto' by default to show images with their native dimensions. Set to a value like '250px' to make all rows in the gallery equal height. """ figures = [] for image in images: if isinstance(image, bytes): src = _src_from_data(image) caption = '' else: src = image figures.append(f''' <figure style="margin: 5px !important;"> <img src="{src}" style="height: {row_height}"> </figure> ''') return f''' <div style="display: flex; flex-flow: row wrap; text-align: center;"> {''.join(figures)} </div> ''' tab1 = widgets.Output() tab2 = widgets.Output() tab3 = widgets.Output() tab4 = widgets.Output() tab5 = widgets.Output() tab6 = widgets.Output() tab = widgets.Tab(children=[tab1,tab2, tab3, tab4, tab5, tab6]) tab # iNaturalistTab = IFrame(src='https://www.inaturalist.org/home', width=1000, height=600) tab.set_title(0, 'iNaturalist') tab.set_title(1, 'GBIF') tab.set_title(2, '(cc0, cc-by, cc-by-sa) iNaturalist images') tab.set_title(3, 'BHL') tab.set_title(4, 'Commons') tab.set_title(5, 'Wikipedia') with tab1: clear_output() def paste_commons(commons_file_name): with tab6: print("https://en.wikipedia.org/wiki/"+data.inaturalist_data[0]["name"].replace(" ", "_")) print("=========================") print(data.create_wikipedia_stub(infobox_image=commons_file_name)) return commons_file_name def get_data(b): global data data = taxon.external_data(inaturalist_id=b.taxon_id) html = "<table><tr><td><img src='"+data.inaturalist_data[0]['default_photo']['medium_url']+"'><br>"+data.inaturalist_data[0]['default_photo']['attribution']+"</td>" html += "<td>" html += "stub-type: "+data.inaturalist_data[0]["iconic_taxon_name"] html += "<br>iNaturalist taxon id: "+ str(data.inaturalist_data[0]["id"]) html += "<br>name: "+data.inaturalist_data[0]["name"] if "preferrd_common_name" in data.inaturalist_data[0].keys(): html += "<br>common name: "+data.inaturalist_data[0]["preferred_common_name"] html += "<br>rank: "+data.inaturalist_data[0]["rank"] html += "<br>parent id: "+str(data.inaturalist_parent_data[0]["id"]) html += "<br>parent name: "+data.inaturalist_parent_data[0]["name"] html += "<br>parent rank: "+data.inaturalist_parent_data[0]["rank"] html += "</td></tr></table>" output_widget = widgets.HTML(value=html) with tab2: clear_output() html2 = "<table>" for key in data.gbif_data.keys(): html2 += "<tr><td>{}</td><td>{}</td></tr>".format(key, data.gbif_data[key]) html2 += "</table>" gbif_output = widgets.HTML(value=html2) display(gbif_output) with tab3: clear_output() url = "https://api.inaturalist.org/v1/observations?photo_license=cc0,cc-by,cc-by-sa&quality_grade=research&taxon_id="+b.taxon_id photos = json.loads(requests.get(url).text) i = 0 html = "<h1>images in iNaturalist with a license allowing reuse in Wikipedia (cc0, cc-by, cc-by-sa)<table><tr>" for result in photos["results"]: for photo in result["observation_photos"]: i += 1 html += "<td><img src='"+photo['photo']['url'].replace("square", "medium")+"'></td>" if i % 5 == 0: html += "</tr><tr>" html += "</tr></table>" display(HTML(html)) with tab4: clear_output() bhlurl = "https://www.biodiversitylibrary.org/name/"+data.inaturalist_data[0]["name"].replace(" ", "_") print("source: ", bhlurl) fields = [] for entry in data.bhl_references: for key in entry.keys(): if key not in fields: fields.append(key) fields df = pd.DataFrame(columns= fields) for i in range(len(data.bhl_references)): row = dict() for key in fields: if key not in data.bhl_references[i].keys(): row[key]=None else: row[key]=data.bhl_references[i][key] df.loc[i] = row display(df) with tab5: clear_output() commons_query = """ SELECT * WHERE {{?commons schema:about <{taxon}> ; schema:isPartOf <https://commons.wikimedia.org/> . }}""".format(taxon = data.wikidata["main_rank"].loc[0]["taxon"]) commons_query_result = wdi_core.WDItemEngine.execute_sparql_query(commons_query, as_dataframe=True) if len(commons_query_result) == 0: html5 = "<a href = 'https://commons.wikimedia.org/w/index.php?title=Category:"+data.inaturalist_data[0]["name"].replace(" ", "_")+"&action=edit'>create commons category</a><br>" html5 += "[[Category:"+data.inaturalist_data[0]["name"].replace(" ", "|")+"]]" else: html5 = "<a href = 'https://commons.wikimedia.org/wiki/Category:"+data.inaturalist_data[0]["name"].replace(" ", "_")+"' target='_new'>"+data.inaturalist_data[0]["name"].replace(" ", "_")+"</a><br>" commons_output = widgets.HTML(value=html5) data.selected_commons=gui_widgets.interact_manual(paste_commons, commons_file_name="") display(commons_output) return output_widget tab1tab1 = widgets.Output() tab1tab2 = widgets.Output() tab1tab3 = widgets.Output() tab1tab = widgets.Tab(children=[tab1tab1,tab1tab2,tab1tab3]) tab1tab.set_title(0, 'search by taxon') tab1tab.set_title(1, 'search by user') tab1tab.set_title(2, 'search by country') with tab1tab1: interact_manual(search_by_taxon, taxon_str='', rank=["genus", "family", "order"], license=["cc0,cc-by,cc-by-sa", "cc0", "cc-by", "cc-by-sa"]) with tab1tab2: interact_manual(fetch_by_user, username='', license=["cc0,cc-by,cc-by-sa", "cc0", "cc-by", "cc-by-sa"]) with tab1tab3: interact_manual(search_species_place, place='', license=["cc0,cc-by,cc-by-sa", "cc0", "cc-by", "cc-by-sa"]) display(tab1tab) data = None #taxon_window = gui_widgets.interact_manual(get_data, taxon_id="") display(tab) ```
github_jupyter
# Test agglomeration of segments ``` %matplotlib inline import matplotlib.pyplot as plt import h5py import numpy as np import skimage import skimage.morphology f = h5py.File('neuron-volume-with-axons/x0y0z0.hdf5', 'r') A = np.array(f['data']) neuron_ids = f['neuron_ids'] f = h5py.File('segmentation-volume/x0y0z0.hdf5', 'r') B = np.array(f['data']) #f = h5py.File('dendrites.hdf5', 'r') #B = np.array(f['data']) plt.figure(figsize=(20, 20)) plt.imshow(B[300, :512, :512]) import scipy plt.figure(figsize=(20, 20)) plt.imshow(scipy.ndimage.morphology.grey_erosion(scipy.ndimage.morphology.grey_dilation(B[300, :512, :512], size=3), size=3)) #mod_data = scipy.ndimage.morphology.grey_erosion(scipy.ndimage.morphology.grey_dilation(A[300, :512, :512], size=3), size=2) plt.figure(figsize=(20, 20)) plt.subplot(131) mod_data = A[300, :512, :512] plt.imshow(mod_data) plt.subplot(132) mod_data = scipy.ndimage.morphology.grey_dilation(A[300, :512, :512], size=10) plt.imshow(mod_data) plt.subplot(133) mod_data = scipy.ndimage.morphology.grey_erosion(scipy.ndimage.morphology.grey_dilation(A[300, :512, :512], size=10), size=10) #cube = h5py.File('x0y0z0_closed.hdf5', 'w') #cube.create_dataset('data', mod_data.shape, compression="gzip", data=mod_data) #cube.create_dataset('neuron_ids', neuron_ids.shape, data=neuron_ids) #cube.close() plt.imshow(mod_data) #mod_data.max() %%time from scipy.ndimage import morphology neuron_id = 19 mask = A == neuron_id #joined = morphology.binary_erosion(morphology.binary_fill_holes(morphology.binary_dilation(mask, iterations=3)), iterations=3) mod_data = scipy.ndimage.morphology.grey_erosion(scipy.ndimage.morphology.grey_dilation(A, size=5), size=5) # Compute the delta between the mask and the joined mask deltas = joined.sum(0).sum(1) - mask.sum(0).sum(0) # Check that we're only adding pixels and not removing (important at the boundaries) for neuron_id in neuron_ids: mask = A == neuron_id big_mask = (mod_data == neuron_id) num_bad = (mask & ~big_mask).sum() num_added = big_mask.sum() - mask.sum() print("Neuron %d, num_bad %d, num added %d" % (neuron_id, num_bad, num_added)) #plt.plot(deltas) #neuron_ids # Check that we're only adding pixels and not removing (important at the boundaries) for neuron_id in neuron_ids: mask = A == neuron_id big_mask = (mod_data == neuron_id) num_bad = (mask & ~big_mask).sum() num_added = big_mask.sum() - mask.sum() print("Neuron %d, num_bad %d, num added %d" % (neuron_id, num_bad, num_added)) #plt.plot(deltas) #neuron_ids neuron_id = 79 mask = A == neuron_id deltas = (mod_data == neuron_id).sum(1).sum(0) - mask.sum(1).sum(0) (deltas.argmax(), deltas.max()) slc = 692 plt.figure(figsize=(16, 16)) plt.imshow(A[:, :, slc] == neuron_id) plt.figure(figsize=(16, 16)) plt.imshow(mod_data[:, :, slc] == neuron_id) arr2 = np.array([[3072. , 1970.5 , 3087.6 ], [3072. , 1970.5 , 3090.09 ], [3072. , 1970.5 , 3092.58 ], [3072. , 1971. , 3093.825 ], [3072. , 1971.5 , 3095.07 ], [3072. , 1972. , 3096.315 ], [3072. , 1973. , 3096.315 ], [3072. , 1974. , 3096.315 ], [3072. , 1974.5 , 3097.56 ], [3072. , 1975. , 3098.805 ], [3072. , 1976. , 3098.805 ], [3072. , 1976.5 , 3100.05 ], [3072. , 1977. , 3101.295 ], [3072. , 1978. , 3101.295 ], [3072. , 1978.5 , 3102.54 ], [3072. , 1979. , 3103.785 ], [3072. , 1980. , 3103.785 ], [3072. , 1980.5 , 3105.03 ], [3072. , 1980.5 , 3107.52 ], [3072. , 1980.5 , 3110.01 ], [3072. , 1981. , 3111.255 ], [3072. , 1982. , 3111.255 ], [3072. , 1983. , 3111.255 ], [3072. , 1984. , 3111.255 ], [3072. , 1985. , 3111.255 ], [3072. , 1986. , 3111.255 ], [3072. , 1987. , 3111.255 ], [3072. , 1988. , 3111.255 ], [3072. , 1989. , 3111.255 ], [3072. , 1990. , 3111.255 ], [3072. , 1990.5 , 3112.5 ], [3072. , 1991. , 3113.7449], [3072. , 1992. , 3113.7449], [3072. , 1993. , 3113.7449], [3072. , 1993.5 , 3114.99 ], [3072. , 1994. , 3116.2349], [3072. , 1995. , 3116.2349], [3072. , 1996. , 3116.2349], [3072. , 1997. , 3116.2349], [3072. , 1997.5 , 3117.48 ], [3072. , 1998. , 3118.725 ], [3072. , 1999. , 3118.725 ], [3072. , 2000. , 3118.725 ], [3072. , 2001. , 3118.725 ], [3072. , 2002. , 3118.725 ], [3072. , 2003. , 3118.725 ], [3072. , 2004. , 3118.725 ], [3072. , 2004.5 , 3119.97 ], [3072. , 2004.5 , 3122.46 ], [3072. , 2005. , 3123.705 ], [3072. , 2006. , 3123.705 ], [3072. , 2007. , 3123.705 ], [3072. , 2008. , 3123.705 ], [3072. , 2009. , 3123.705 ], [3072. , 2009.5 , 3122.46 ], [3072. , 2009. , 3121.215 ], [3072. , 2008.5 , 3119.97 ], [3072. , 2008. , 3118.725 ], [3072. , 2007.5 , 3117.48 ], [3072. , 2007. , 3116.2349], [3072. , 2006.5 , 3114.99 ], [3072. , 2006.5 , 3112.5 ], [3072. , 2006.5 , 3110.01 ], [3072. , 2006. , 3108.765 ], [3072. , 2005.5 , 3107.52 ], [3072. , 2005.5 , 3105.03 ], [3072. , 2005. , 3103.785 ], [3072. , 2004.5 , 3102.54 ], [3072. , 2004. , 3101.295 ], [3072. , 2003. , 3101.295 ], [3072. , 2002.5 , 3100.05 ], [3072. , 2002. , 3098.805 ], [3072. , 2001. , 3098.805 ], [3072. , 2000. , 3098.805 ], [3072. , 1999. , 3098.805 ], [3072. , 1998. , 3098.805 ], [3072. , 1997. , 3098.805 ], [3072. , 1996.5 , 3097.56 ], [3072. , 1996. , 3096.315 ], [3072. , 1995. , 3096.315 ], [3072. , 1994. , 3096.315 ], [3072. , 1993.5 , 3095.07 ], [3072. , 1993. , 3093.825 ], [3072. , 1992. , 3093.825 ], [3072. , 1991. , 3093.825 ], [3072. , 1990. , 3093.825 ], [3072. , 1989.5 , 3092.58 ], [3072. , 1989. , 3091.335 ], [3072. , 1988. , 3091.335 ], [3072. , 1987. , 3091.335 ], [3072. , 1986.5 , 3090.09 ], [3072. , 1986. , 3088.845 ], [3072. , 1985. , 3088.845 ], [3072. , 1984. , 3088.845 ], [3072. , 1983.5 , 3087.6 ], [3072. , 1983. , 3086.355 ], [3072. , 1982. , 3086.355 ], [3072. , 1981. , 3086.355 ], [3072. , 1980. , 3086.355 ], [3072. , 1979. , 3086.355 ], [3072. , 1978. , 3086.355 ], [3072. , 1977. , 3086.355 ], [3072. , 1976. , 3086.355 ], [3072. , 1975. , 3086.355 ], [3072. , 1974. , 3086.355 ], [3072. , 1973. , 3086.355 ], [3072. , 1972. , 3086.355 ], [3072. , 1971. , 3086.355 ], [3072. , 1970.5 , 3087.6 ]], dtype=np.float32) arr1 = np.array([[3071. , 1974. , 3088.845 ], [3071. , 1973.5 , 3090.09 ], [3071. , 1973.5 , 3092.58 ], [3071. , 1974. , 3093.825 ], [3071. , 1974.5 , 3095.07 ], [3071. , 1975. , 3096.315 ], [3071. , 1976. , 3096.315 ], [3071. , 1977. , 3096.315 ], [3071. , 1977.5 , 3097.56 ], [3071. , 1978. , 3098.805 ], [3071. , 1979. , 3098.805 ], [3071. , 1980. , 3098.805 ], [3071. , 1980.5 , 3100.05 ], [3071. , 1981. , 3101.295 ], [3071. , 1981.5 , 3102.54 ], [3071. , 1982. , 3103.785 ], [3071. , 1982.5 , 3105.03 ], [3071. , 1982.5 , 3107.52 ], [3071. , 1983. , 3108.765 ], [3071. , 1984. , 3108.765 ], [3071. , 1985. , 3108.765 ], [3071. , 1986. , 3108.765 ], [3071. , 1987. , 3108.765 ], [3071. , 1988. , 3108.765 ], [3071. , 1989. , 3108.765 ], [3071. , 1990. , 3108.765 ], [3071. , 1991. , 3108.765 ], [3071. , 1992. , 3108.765 ], [3071. , 1992.5 , 3110.01 ], [3071. , 1993. , 3111.255 ], [3071. , 1994. , 3111.255 ], [3071. , 1994.5 , 3112.5 ], [3071. , 1995. , 3113.7449], [3071. , 1996. , 3113.7449], [3071. , 1996.5 , 3114.99 ], [3071. , 1997. , 3116.2349], [3071. , 1998. , 3116.2349], [3071. , 1999. , 3116.2349], [3071. , 2000. , 3116.2349], [3071. , 2001. , 3116.2349], [3071. , 2002. , 3116.2349], [3071. , 2003. , 3116.2349], [3071. , 2004. , 3116.2349], [3071. , 2005. , 3116.2349], [3071. , 2005.5 , 3114.99 ], [3071. , 2005.5 , 3112.5 ], [3071. , 2005.5 , 3110.01 ], [3071. , 2005.5 , 3107.52 ], [3071. , 2005.5 , 3105.03 ], [3071. , 2005.5 , 3102.54 ], [3071. , 2005. , 3101.295 ], [3071. , 2004. , 3101.295 ], [3071. , 2003. , 3101.295 ], [3071. , 2002. , 3101.295 ], [3071. , 2001. , 3101.295 ], [3071. , 2000. , 3101.295 ], [3071. , 1999. , 3101.295 ], [3071. , 1998. , 3101.295 ], [3071. , 1997.5 , 3100.05 ], [3071. , 1997. , 3098.805 ], [3071. , 1996. , 3098.805 ], [3071. , 1995. , 3098.805 ], [3071. , 1994. , 3098.805 ], [3071. , 1993. , 3098.805 ], [3071. , 1992. , 3098.805 ], [3071. , 1991. , 3098.805 ], [3071. , 1990. , 3098.805 ], [3071. , 1989.5 , 3097.56 ], [3071. , 1989. , 3096.315 ], [3071. , 1988. , 3096.315 ], [3071. , 1987. , 3096.315 ], [3071. , 1986.5 , 3095.07 ], [3071. , 1986.5 , 3092.58 ], [3071. , 1986. , 3091.335 ], [3071. , 1985. , 3091.335 ], [3071. , 1984.5 , 3090.09 ], [3071. , 1984. , 3088.845 ], [3071. , 1983. , 3088.845 ], [3071. , 1982. , 3088.845 ], [3071. , 1981. , 3088.845 ], [3071. , 1980. , 3088.845 ], [3071. , 1979. , 3088.845 ], [3071. , 1978. , 3088.845 ], [3071. , 1977. , 3088.845 ], [3071. , 1976. , 3088.845 ], [3071. , 1975. , 3088.845 ], [3071. , 1974. , 3088.845 ]], dtype=np.float32) plt.plot(arr1[:, 1], arr1[:, 2]) plt.plot(arr2[:, 1], arr2[:, 2]) def grow_cluster(start_point, adjacency_matrix): cluster = {start_point: 1} len_before = 0 len_after = 1 while len_before < len_after: cluster_after = cluster.copy() for point, _ in cluster.items(): for el1, el2 in adjacency_matrix: if el1 == point: cluster_after[el2] = 1 cluster = cluster_after len_before = len_after len_after = len(cluster) return cluster def get_connected_components(adjacency_matrix): unique_el = {} for el1, el2 in adjacency_matrix: unique_el[el1] = 1 unique_el[el2] = 1 keys = list(unique_el.keys()) cluster_memberships = {k: 0 for k in keys} # Start with key 0 cluster_num = 1 for k in keys: if cluster_memberships[k] == 0: cluster = grow_cluster(k, adjacency_matrix) for j in cluster.keys(): cluster_memberships[j] = cluster_num cluster_num += 1 return cluster_memberships get_connected_components([(0, 1), (1, 0), (2, 3), (3, 4), (4, 7), (5, 6)]) ```
github_jupyter
``` import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import seaborn as sns import matplotlib.pyplot as plt from xgboost import plot_importance, plot_tree from sklearn.ensemble import GradientBoostingRegressor from lightgbm import LGBMRegressor from sklearn.model_selection import train_test_split from sklearn.pipeline import make_pipeline, make_union from tpot.builtins import StackingEstimator from xgboost import XGBRegressor from sklearn.linear_model import LinearRegression from statsmodels.tsa.ar_model import AutoReg import copy from catboost import CatBoostRegressor from sklearn.metrics import mean_squared_error, mean_absolute_error import random np.random.seed(42) random.seed(42) RANDOM_SEED = 42 plt.style.use('fivethirtyeight') data = pd.read_csv('data.csv') #data = data[data.Datetime < '2020-12-08'] data.tail() drop = ['etat_barre_ce', 'etat_barre_lc', 'etat_barre_pv', 'Année', 'Mois', 'Jour', 'Heure', 'Jour semaine'] target_ce = ['q_ce', 'k_ce'] target_lc = ['q_lc', 'k_lc'] target_pv = ['q_pv', 'k_pv'] all_ = drop + target_ce + target_lc + target_pv features = [x for x in data.columns.tolist() if x not in all_] df = copy.deepcopy(data) df = df.drop(drop, axis=1) df_ce = copy.deepcopy(df[features + target_ce]) df_lc = copy.deepcopy(df[features + target_lc]) df_pv = copy.deepcopy(df[features + target_pv]) def rolling_custom(d, df, label): try: return df.loc[d - 168, label] except KeyError: return float('nan') df_pv['back_q'] = pd.Series([rolling_custom(d, df_pv, 'q_pv') for d in df_pv.index]) #df_pv['back_k'] = pd.Series([rolling_custom(d, df_pv, 'k_pv') for d in df_pv.index]) df_pv df[df.Datetime < '2017-01-01'] ``` # Utils ``` def create_train_test(df, date_min, date_max, label, prefix, start, dropna=True): if dropna: drop = df[label + '_' + prefix].notnull() train, test = df[(df.Datetime <= date_min) & drop & (df.Datetime >= start)].reset_index(drop=True), df[(df.Datetime > date_min) & (df.Datetime <= date_max) & drop].reset_index(drop=True) x_train, y_train = train.drop(['Datetime', 'q_' + prefix, 'k_' + prefix], axis=1), train[['Datetime', label + '_' + prefix]] x_test, y_test = test.drop(['Datetime', 'q_' + prefix, 'k_' + prefix], axis=1), test[['Datetime', label + '_' + prefix]] return x_train, y_train, x_test, y_test def train(models, X_train, y_train, X_test, y_test): model_trained = [] for model in models: model.fit(X_train, y_train, eval_set=[(X_train, y_train), (X_test, y_test)], early_stopping_rounds=42, verbose=False) model_trained.append(model) return model_trained def evaluate(models, X_test, y_test, scale=None): rmse = [] for model in models: if scale is not None: pred_descaled = scale.inverse_transform(model.predict(X_test).reshape(-1,1)).flatten() rmse.append(np.sqrt(mean_squared_error(y_test, pred_descaled))) else: rmse.append(np.sqrt(mean_squared_error(y_test, model.predict(X_test)))) return rmse ``` q CE : 120 (sur les deux dernieres semaines) à partir du mois de octobre 2020 (sans k & q past week) (remove meteo) LC : 43 à partir du mois de fevrier 2020 ( only with q past week) ( SP : 50 a partir du mois de janvier 2020 (only with q past week) k CE : 3.85 aout 2020 (mais pas bon sur l'avant derniere semaine sans k & q past week) (remove meteo) LC : 2 fevrier 2020 (avec k pas week) SP : 1.26 janvier 2020 (++) (avec k & q past week) ``` 24*6 ``` # Champs elysées ``` len(y_test_date) df_lc = df_lc.drop(['desc_Broken clouds.', 'desc_Chilly.', 'desc_Clear.', 'desc_Cloudy.', 'desc_Cool.', 'desc_Dense fog.', 'desc_Drizzle. Broken clouds.', 'desc_Drizzle. Fog.', 'desc_Drizzle. Low clouds.', 'desc_Drizzle. Mostly cloudy.', 'desc_Fog.', 'desc_Haze.', 'desc_Ice fog.', 'desc_Light fog.', 'desc_Light rain. Broken clouds.', 'desc_Light rain. Clear.', 'desc_Light rain. Cloudy.', 'desc_Light rain. Fog.', 'desc_Light rain. Low clouds.', 'desc_Light rain. More clouds than sun.', 'desc_Light rain. Mostly cloudy.', 'desc_Light rain. Overcast.', 'desc_Light rain. Partly cloudy.', 'desc_Light rain. Partly sunny.', 'desc_Light rain. Passing clouds.', 'desc_Light snow. Ice fog.', 'desc_Low clouds.', 'desc_Mild.', 'desc_More clouds than sun.', 'desc_Mostly cloudy.', 'desc_No weather data available', 'desc_Overcast.', 'desc_Partly cloudy.', 'desc_Partly sunny.', 'desc_Passing clouds.', 'desc_Rain. Fog.', 'desc_Scattered clouds.', 'desc_Sprinkles. Mostly cloudy.', 'desc_Sunny.', 'desc_Thunderstorms. Fog.'], axis=1) #dates = X_train, y_train_date, X_test, y_test_date = create_train_test(df_pv, '2020-11-29', '2020-12-05', 'k', 'pv', '2020-01-01') models = [ LGBMRegressor(n_estimators=300, random_state=27), XGBRegressor(n_estimators=300, random_state=27)] y_train, y_test = y_train_date.drop(['Datetime'], axis=1), y_test_date.drop(['Datetime'], axis=1) #from sklearn.preprocessing import MinMaxScaler #MMS = MinMaxScaler() #MMS.fit(y_train.values.reshape(-1,1)) #y_train_scaled = MMS.transform(y_train.values.reshape(-1,1)) #y_test_scaled = MMS.transform(y_test.values.reshape(-1,1)) trained = train(models, X_train, y_train, X_test, y_test) rmses = evaluate(trained, X_test, y_test.values) rmses def test_mean_average(y_pred1, y_pred2, y_test_date): y_test = y_test_date.drop(['Datetime'], axis=1).values return np.sqrt(mean_squared_error((y_pred1+y_pred2)/2, y_test)) def test_between_time(y_pred1, y_pred2, y_test_date): y_test_date['Datetime'] = pd.to_datetime(y_test_date['Datetime']) index_labels = y_test_date.reset_index().set_index('Datetime') index_day = index_labels.between_time('07:00', '22:00')['index'].tolist() #index_night = [idx for idx in index_labels['index'].tolist() if x not in index_day] final_prediction = [] for i, preds in enumerate(zip(y_pred1, y_pred2)): if i in index_day: final_prediction.append(max(preds)) else: final_prediction.append(min(preds)) return np.array(final_prediction) y_pred = trained[1].predict(X_test) y_pred2 = trained[0].predict(X_test) preds = test_between_time(y_pred, y_pred2, y_test_date) np.sqrt(mean_squared_error(preds, y_test.values)) y_pred = trained[1].predict(X_test) y_pred2 = trained[0].predict(X_test) plt.figure(figsize=(12,12)) plt.plot(np.arange(len(y_pred)), y_pred, label="Predicted model 1", color='b') plt.plot(np.arange(len(y_pred)), y_test.values, label="True", color='r') plt.plot(np.arange(len(y_pred)), y_pred2, label="Predicted model 0", color='y') plt.legend() plt.show() from pycaret.regression import * (31+31)*24/144 drop = df_ce['q_ce'].notnull() exp_reg = setup(df_ce[(df_ce.Datetime>='2020-10-01') & (df_ce.Datetime < '2020-12-02') & drop].drop(['Datetime', 'k_ce'], axis=1), target='q_ce', fold_strategy='timeseries', session_id = 123, silent=True) compare_models(['et', 'catboost', 'xgboost', 'lightgbm', 'rf']) # Take the 3 bests models catb = create_model('catboost', verbose=False) et = create_model('et', verbose=False) #lgbm = create_model('lightgbm', verbose=False) # Blend all the 4 bests models blend_all = blend_models(estimator_list = [catb, et]) # Finalise models and make predictions final_blender = finalize_model(blend_all) predictions = predict_model(final_blender, data = df_ce[(df_ce.Datetime > '2020-12-01') & (df_ce.Datetime <= '2020-12-07')].drop(['Datetime', 'k_ce'], axis=1)) predictions ```
github_jupyter
##### Copyright 2018 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #@title MIT License # # Copyright (c) 2017 François Chollet # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER # DEALINGS IN THE SOFTWARE. ``` # Классификация текста обзоров фильмов <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ru/r1/tutorials/keras/basic_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Запусти в Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ru/r1/tutorials/keras/basic_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />Изучай код на GitHub</a> </td> </table> Note: Вся информация в этом разделе переведена с помощью русскоговорящего Tensorflow сообщества на общественных началах. Поскольку этот перевод не является официальным, мы не гарантируем что он на 100% аккуратен и соответствует [официальной документации на английском языке](https://www.tensorflow.org/?hl=en). Если у вас есть предложение как исправить этот перевод, мы будем очень рады увидеть pull request в [tensorflow/docs](https://github.com/tensorflow/docs) репозиторий GitHub. Если вы хотите помочь сделать документацию по Tensorflow лучше (сделать сам перевод или проверить перевод подготовленный кем-то другим), напишите нам на [docs-ru@tensorflow.org list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ru). В этом интерактивном уроке мы построим модель, которая будет классифицировать обзор фильма как *позитивный* или *негативный* на основе текста. Это пример *бинарной* классификации (по двум классам), важной, и широко применяющейся задачи машинного обучения. Мы воспользуемся [датасетом IMDB](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb), который содержит тексты 50,000 обзоров фильмов из [Internet Movie Database](https://www.imdb.com/). Они разделены на 25,000 обзоров для обучения, и 25,000 для проверки модели. Тренировочные и проверочные датасеты *сбалансированы*, т.е. содержат одинаковое количество позитивных и негативных обзоров. Данное руководство использует [tf.keras](https://www.tensorflow.org/r1/guide/keras), высокоуровневый API для создания и обучения моделей в TensorFlow. Чтобы сделать более сложную по структуре классификацую текста при помощи `tf.keras`, читай [Руководство по классификации текстов](https://developers.google.com/machine-learning/guides/text-classification/). ``` # keras.datasets.imdb is broken in 1.13 and 1.14, by np 1.16.3 !pip install tf_nightly import tensorflow.compat.v1 as tf from tensorflow import keras import numpy as np print(tf.__version__) ``` ## Загружаем датасет IMDB Датасет IMDB доступен сразу в TensorFlow при помощи метода `load_data`. Он уже подготовлен таким образом, что обзоры (последовательности слов) были конвертированы в последовательность целых чисел, где каждое целое представляет конкретное слово в массиве. Давай напишем пару строчек кода чтобы загрузить датасет (или автоматически используем копию из кэша, если ты уже скачал этот набор данных): ``` imdb = keras.datasets.imdb (train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000) ``` Аргумент `num_words=10000` позволяет нам ограничиться только 10,000 наиболее часто встречающимися словами из тренировочного сета. Все редкие слова исключаются. Это поможет нам держать объем данных в разумных пределах. ## Знакомимся с данными Давай посмотрим какая информация нам доступна. Данные уже подготовлены: каждый пример - это массив целых чисел, которые представляют слова из обзоров. Каждая метка *label* является целым числом 0 или 1: 0 - негативный обзор, 1 - позитивный. ``` print("Тренировочных записей: {}, меток: {}".format(len(train_data), len(train_labels))) ``` Текст обзоров уже был конвертирован в целые числа, где каждое целок представляет слово из словаря. Вот пример того, как выглядит первый обзор: ``` print(train_data[0]) ``` Разные обзоры фильмов также имеют разное количество слов. Код ниже поможет нам узнать количество слов в первом и втором обзоре. Поскольку нейросеть может принимать только данные одинаковой длины, то нам предстоит как-то решить эту задачу. ``` len(train_data[0]), len(train_data[1]) ``` ### Конвертируем целые обратно в слова Не будет лишним также знать, как конвертировать целые числа из массива обратно в текст. Напишем вспомогательную функцию, с помощью который мы сможем запрашивать из этого словаря объект, который содержит указанные числа и отображать их в виде слов: ``` # Назначим словарь, который будет отображать слова из массива данных word_index = imdb.get_word_index() # Зарезервируем первые несколько значений word_index = {k:(v+3) for k,v in word_index.items()} word_index["<PAD>"] = 0 word_index["<START>"] = 1 word_index["<UNK>"] = 2 # Вместо редких слов, не вошедших в набор из 10,000, будет указано UNK word_index["<UNUSED>"] = 3 reverse_word_index = dict([(value, key) for (key, value) in word_index.items()]) def decode_review(text): return ' '.join([reverse_word_index.get(i, '?') for i in text]) ``` Теперь мы можем легко воспользоваться функцией `decode_review` для отображения текста первого обзора фильма: ``` decode_review(train_data[0]) ``` ## Подготавливаем данные Обзоры фильмов из массива целых чисел должны быть конвертированы в тензоры прежде, чем они будут пропущены через нейросеть. Эта конвертация может быть сделана несколькими способами: * *One-hot encoding* конвертирует массивы в векторы 0 и 1. Например, последовательность [3, 5] станет 10,000-мерным вектором, полностью состоящим из нулей кроме показателей 3 и 5, которые будут представлены единицами. Затем, нам нужно будет создать первый `Dense` слой в нашей сети, который сможет принимать векторые данные с плавающей запятой. Такой подход очень требователен к объему памяти, несмотря на то, что требует указать размеры матрицы `num_words * num_reviews` * Другой способ - сделать все массивы одинаковыми по длине, а затем создать тензор целых чисел с указанием `max_length * num_reviews`. Мы можем использовать *Embedding* (пер. "Встроенный") слой, который может использовать эти параметры в качестве первого слоя нашей сети В этом руководстве мы попробуем второй способ. Поскольку все обзоры фильмов должны быть одинаковой длины, то мы используем функцию [pad_sequences](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences), чтобы привести все длины к одному значению: ``` train_data = keras.preprocessing.sequence.pad_sequences(train_data, value=word_index["<PAD>"], padding='post', maxlen=256) test_data = keras.preprocessing.sequence.pad_sequences(test_data, value=word_index["<PAD>"], padding='post', maxlen=256) ``` Давай теперь посмотрим на длину наших примеров: ``` len(train_data[0]), len(train_data[1]) ``` А также проверим как выглядит первый стандартизированный по длине обзор фильма: ``` print(train_data[0]) ``` ## Строим модель Нейронная сеть создается посредством стека (наложения) слоев - это требует ответов на два вопроса об архитектуре самой модели: * Сколько слоев будет использовано в модели? * Сколько *скрытых блоков* будет использовано для каждого слоя? В этом примере, входные данные состоят из массива слов (целых чисел). Получаемые предсказания будут в виде меток 0 или 1. Давай построим модель, которая будет решать нашу задачу: ``` # Размер входных данных - количество слов, использованных в обзорах фильмов (10,000 слов) vocab_size = 10000 model = keras.Sequential() model.add(keras.layers.Embedding(vocab_size, 16, input_shape=(None,))) model.add(keras.layers.GlobalAveragePooling1D()) model.add(keras.layers.Dense(16, activation=tf.nn.relu)) model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid)) model.summary() ``` Для создания классификатора все слои проходят процесс стека, или наложения: 1. Первый `Embedding` слой принимает переведенные в целые числа слова и ищет соответствующий вектор для каждой пары слово/число. Модель обучается на этих векторах. Векторы увеличивают размер получаемого массива на 1, в результате чего мы получаем измерения: `(batch, sequence, embedding)` 2. Следующий слой `GlobalAveragePooling1D` возвращает получаемый вектор заданной длины для каждого примера, усредняя размер ряда. Это позволит модели легко принимать данные разной длины 3. Этот вектор пропускается через полносвязный `Dense` слой с 16 скрытыми блоками 4. Последний слой также является полносвязным, но с всего одним выходящим нодом. При помощи функции активации `sigmoid` (Сигмоида) мы будем получать число с плавающей запятой между 0 и 1, которое будет показывать вероятность или уверенность модели ### Скрытые блоки Вышеописанная модель имеет 2 промежуточных или *скрытых* слоя, между входом и выходом данных. Количество выходов (блоков, нодов или нейронов) является размером репрезентативного пространства слоя. Другими словами, количество свободы, которая разрешена сети во время обучения. Если модель имеет больше скрытых блоков, и/или больше слоев, то тогда нейросеть может обучиться более сложным представлениям. Однако в этом случае это будет дороже с точки зрения вычислительных ресурсов и может привести к обучению нежелательных паттернов - паттернов, которые улучшают показатели на тренировочных данных, но не на проверочных. Это называется *переобучением*, и мы обязательно познакомимся с этим явлением далее. ### Функция потерь и оптимизатор Для модели нам необходимо указать функцию потерь и оптимизатор для обучения. Поскольку наша задача является примером бинарной классификации и модель будет показывать вероятность (слой из единственного блока с сигмоидой в качестве функции активации), то мы воспользуемся функцией потерь `binary_crossentropy` (пер. "Перекрестная энтропия"). Это не единственный выбор для нашей функции потерь: ты можешь, например, выбрать `mean_squared_error`. Но обычно `binary_crossentropy` лучше справляется с вероятностями - она измеряет "дистанцию" между распределениями вероятностей, или, как в нашем случае, между эталоном и предсказаниями. Далее, по мере знакомства с задачами регрессии (например, предсказание цен на недвижимость), мы посмотрим как использовать другую функцию потерь, которая называется среднеквадратичская ошибка (MSE). А сейчас, настроим нашу модель: мы будем использовать *оптимизатор Адама* и *перекрестную энтропию* для потерь: ``` model.compile(optimizer=tf.train.AdamOptimizer(), loss='binary_crossentropy', metrics=['accuracy']) ``` ## Создадим проверочный набор данных Во время обучения мы хотим проверить точность нашей модели на данных, которых она еще не видела. Давай создадим *проверочный сет* данных, выделив 10,000 примеров из оригинального тренировочного сета в отдельный набор. Почему мы не используем проверочный набор прямо сейчас? Наша цель - разработать и настроить нашу модель, используя только данные для обучения, и только потом использовать проверочный сет всего один раз чтобы оценить точность. ``` x_val = train_data[:10000] partial_x_train = train_data[10000:] y_val = train_labels[:10000] partial_y_train = train_labels[10000:] ``` ## Обучаем модель Начнем тренировку нашей модели с 40 эпох при помощи мини-батчей по 512 образцов (*батч* - набор, пакет данных). Это означает, что мы сделаем 40 итераций (или проходов) по всем образцам данных в тензорах `x_train` и `y_train`. После обучения мы узнаем потери и точность нашей модели, показав ей 10,000 образцов из проверочного набора данных: ``` history = model.fit(partial_x_train, partial_y_train, epochs=40, batch_size=512, validation_data=(x_val, y_val), verbose=1) ``` ## Оценим точность модели Теперь когда обучение прошло успешно, давай посмотрим какие результаты показывает модель. Она будет возвращать 2 значения: потери *loss* (число, которое показывает ошибку, чем оно ниже, тем лучше), и точность *accuracy*. ``` results = model.evaluate(test_data, test_labels, verbose=2) print(results) ``` Как мы видим, этот достаточно наивный подход достиг точности около 87%. Если бы мы использовали более сложные методы, то модель приблизилась бы к отметке в 95%. ## Построим временной график точности и потерь Метод `model.fit()` возвращает объект `History`, который содержит все показатели, которые были записаны в лог во время обучения: ``` history_dict = history.history history_dict.keys() ``` Здесь всего четыре показателя, по одному для каждой отслеживаемой метрики во время обучения и проверки. Мы можем использовать их, чтобы построить графики потерь и точности обоих стадий для сравнения: ``` import matplotlib.pyplot as plt acc = history.history['acc'] val_acc = history.history['val_acc'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, len(acc) + 1) # "bo" означает "blue dot", синяя точка plt.plot(epochs, loss, 'bo', label='Потери обучения') # "b" означает "solid blue line", непрерывная синяя линия plt.plot(epochs, val_loss, 'b', label='Потери проверки') plt.title('Потери во время обучения и проверки') plt.xlabel('Эпохи') plt.ylabel('Потери') plt.legend() plt.show() plt.clf() # Очистим график acc_values = history_dict['acc'] val_acc_values = history_dict['val_acc'] plt.plot(epochs, acc, 'bo', label='Точность обучения') plt.plot(epochs, val_acc, 'b', label='Точность проверки') plt.title('Точность во время обучения и проверки') plt.xlabel('Эпохи') plt.ylabel('Точность') plt.legend() plt.show() ``` На графиках точками отмечены потери и точность модели во время обучения, а линией - во время проверки. Обрати внимание что потери во время обучения *уменьшаются*, а точность - *увеличивается* с каждой следующей эпохой. Это вполне ожидаемо, поскольку мы используем градиентный спуск *gradient descent* - он минимизирует показатели потерь с каждой итерацией настолько быстро, насколько это возможно. Но это совсем не тот случай если мы посмотрим на потери и точность во время проверки модели: после приблизительно 20 эпох они находятся на пике. Это явный пример переобучения: модель показывает более лучшие показатели на данных для обучения, нежели на новых, которых она еще не видела. После этого момента модель начинает переоптимизироваться и обучается представлениям, которые *являются свойственны* только данным для обучения. Таким образом, модель не учится *обобщать* новые, проверочные данные. Именно здесь мы можем предотвратить переобучение просто прекратив тренировку сразу после 20 эпох обучения. Далее мы посмотрим, как это можно сделать автоматически при помощи *callback*, функции обратного вызова.
github_jupyter
``` import tweepy import os import pandas as pd # ================ TWITTER ====================== def get_user_tweets(api, username, count=200): tweets = api.user_timeline(username, count=count) texts = [tweet.text for tweet in tweets] return texts#twitter authentication def get_tweets(): #twitter authentication CONSUMER_KEY = os.getenv('api-key') CONSUMER_SECRET = os.getenv('api-secret-key') ACCESS_TOKEN = os.getenv('access-token') ACCESS_TOKEN_SECRET = os.getenv('access-secret-token') AUTH = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET) AUTH.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET) api = tweepy.API(AUTH) return(get_user_tweets(api, username),api.get_user(username).name) username="LaRagazzaTurca_" all_tweets = get_tweets()[0] name = get_tweets()[1] mn=0 rt=0 tw=0 mentions=[] retweets=[] tweets=[] for m in all_tweets: if m[0] == "@": mn = mn + 1 mentions.append(m) elif m[0:2] == "RT": rt = rt + 1 retweets.append(m) else: tw = tw + 1 tweets.append(m) print("Total Mention:",mn) print("Total Retweet:",rt) print("Total Tweet:",tw) df_retweets = pd.DataFrame({'retweets': retweets}) df_tweets = pd.DataFrame({'tweets': tweets}) df_mentions = pd.DataFrame({'mentions': mentions}) df_all=pd.concat([df_retweets,df_tweets,df_mentions],ignore_index=True, axis=1) df_all.columns = [ 'Retweets','Tweets', 'Mentions'] print(df_all.head()) df_all= df_all.applymap(lambda s:s.lower() if type(s) == str else s) for m in df_all['Tweets']: print(m) disa_donuk=['!',"konser","arkadaş","oley",'hadi',"hey",'tatlım','canım','kuzum','bebek','bebeğim','mükemmel','şaka', 'selam','kutlarım','sosyal'] ice_donuk=['yalnız','keşke','pişman','ağla','gözyaşı','utanç','hayır','peki','belki','bilgilendirici','ciddi'] gercekci=['mümkün','net','olamaz','olur','oldu','olacak','tamam'] sezgisel=['belki','muhtemelen','acaba','ihtimal','his','düş','rüya','sevgi','sevmek','sezgi','seviyorum','hayranım', 'gerçeklik'] dusunen=['düşünce','düşünüyorum','aslında','mantıklı','doğru','yanlış','tespit','olmalı','tahmin','anlamlı','manalı','şüpheli', 'şüpheci','çünkü'] hassas=['kırık','buruk','hüzün','kırgın','ağla','yeterince','teşekkür','hassas','kırılgan'] sorgulayan=['neden','ne','nerede','niçin''ara','zaman','saat','ilk','son','net'] algılari_acik=['öğrendim','öğretici','bence',] #Dışa dönük / Gerçekçi / Düşünen / Sorgulayan Kisilik_1=[] #İçe dönük / Gerçekçi / Düşünen / Sorgulayan Kisilik_2=[] #Dışa dönük / Gerçekçi / Hassas / Sorgulayan Kisilik_3=[] #İçe dönük / Gerçekçi / Hassas / Sorgulayan Kisilik_4=[] total_disa_donuk = df_all['Tweets'].str.contains('|'.join(disa_donuk)) total_ice_donuk = df_all['Tweets'].str.contains('|'.join(ice_donuk)) total_gercekci = df_all['Tweets'].str.contains('|'.join(gercekci)) total_sezgisel = df_all['Tweets'].str.contains('|'.join(sezgisel)) total_dusunen = df_all['Tweets'].str.contains('|'.join(dusunen)) total_hassas = df_all['Tweets'].str.contains('|'.join(hassas)) total_sorgulayan = df_all['Tweets'].str.contains('|'.join(sorgulayan)) total_algılari_acik = df_all['Tweets'].str.contains('|'.join(algılari_acik)) df_total=pd.concat([total_disa_donuk,total_ice_donuk,total_gercekci,total_sezgisel,total_dusunen,total_hassas,total_sorgulayan,total_algılari_acik],ignore_index=True, axis=1) df_total.columns = [ 'disa_donuk','ice_donuk','gercekci','sezgisel','dusunen','hassas','sorgulayan','algılari_acik'] print(df_total.head(10)) Dıs=df_total['disa_donuk'][df_total['disa_donuk']==True].count().sum() Ic=df_total['ice_donuk'][df_total['ice_donuk']==True].count().sum() if(Dıs>Ic): print("Dışa Dönük ! ") elif(Dıs==Ic): print("Dengeli.") else: print("İçe Dönük...") G=df_total['gercekci'][df_total['gercekci']==True].count().sum() S=df_total['sezgisel'][df_total['sezgisel']==True].count().sum() if(G>S): print("Gerçekçi ! ") elif(G==S): print("Dengeli.") else: print("Sezgisel...") D=df_total['dusunen'][df_total['dusunen']==True].count().sum() H=df_total['hassas'][df_total['hassas']==True].count().sum() if(D>H): print("Düşünen..") elif(D==H): print("Dengeli.") else: print("Hassas...") Sor=df_total['sorgulayan'][df_total['sorgulayan']==True].count().sum() Alg=df_total['algılari_acik'][df_total['algılari_acik']==True].count().sum() if(Sor>Alg): print("Sorgulayan..") elif(Sor==Alg): print("Dengeli.") else: print("Algıları Açık...") ```
github_jupyter
``` import pickle import matplotlib.pyplot as plt from scipy.stats.mstats import gmean import seaborn as sns %matplotlib inline price = pickle.load(open("price_record.p", "rb")) len(price[2]) ##Compare price of agent versus group price_100e = pickle.load(open("total_price.p","rb")) price_100 = pickle.load(open("C:\\Users\\ymamo\Google Drive\\1. PhD\\Dissertation\\SugarScape\Initial\\NetScape_Elegant\\total_price1.p", "rb")) from collections import defaultdict def make_distro(price_100): all_stds =[] total_log = defaultdict(list) for run, output in price_100.items(): for step, prices in output.items(): log_pr = [log(p) for p in prices] if len(log_pr) <2: pass else: out = stdev(log_pr) total_log[run].append(out) all_stds.append(out) return all_stds price_cluster = make_distro(price_100e) price_norm = make_distro(price_100) fig7, ax7 = plt.subplots(figsize = (7,7)) ax7.hist(price_cluster, 500, label = "Agent Groups") ax7.hist(price_norm, 500, label = "No Groups") plt.title("Explicit Approach:\nPrice Distribution of SDLM of 100 Runs", fontsize = 20, fontweight = "bold") plt.xlabel("SDLM of Step", fontsize = 15, fontweight = "bold") plt.ylabel("Frequency of SDLM", fontsize = 15, fontweight = "bold") #plt.xlim(.75,2) #plt.ylim(0,5) plt.legend() ## Calculate price x = [] y =[] for st, pr in price.items(): #if step <=400: x.append(st) y.append(gmean(pr)) y[0] fig, ax = plt.subplots(figsize = (7,7)) ax.scatter(x,y) plt.title("Explicit By Group: Mean Trade Price\n \ 10 Trades - With Policy", fontsize = 20, fontweight = "bold") plt.xlabel("Time", fontsize = 15, fontweight = "bold") plt.ylabel("Price", fontsize = 15, fontweight = "bold") x_vol = [] y_vol = [] total = 0 for s, p in price.items(): #if step <=400: x_vol.append(s) y_vol.append(len(p)) total += len(p) total fig2, ax2 = plt.subplots(figsize = (7,7)) ax2.hist(y_vol, 100) plt.title("Trade Volume Histogram", fontsize = 20, fontweight = "bold") plt.xlabel("Trade Volume of Step", fontsize = 15, fontweight = "bold") plt.ylabel("Frequency Trade Volume", fontsize = 15, fontweight = "bold") #plt.ylim(0,400) fig2, ax2 = plt.subplots(figsize = (7,7)) ax2.plot(x_vol, y_vol) plt.title("Explicit By Group: Trade Volume\n\ 10 Trades - With Policy", fontsize = 20, fontweight = "bold") plt.xlabel("Time", fontsize = 15, fontweight = "bold") plt.ylabel("Volume", fontsize = 15, fontweight = "bold") ax2.text(600,300, "Total Trade Volume: \n "+str(total), fontsize = 15, fontweight = 'bold') #plt.ylim(0,400) from statistics import stdev from math import log x_dev =[] y_dev = [] x_all = [] y_all = [] log_prices = {} for step, prices in price.items(): log_prices[step] = [log(p) for p in prices] for step, log_p in log_prices.items(): #if step <= 400: if len(log_p) <2: pass else: for each in log_p: x_all.append(step) y_all.append(each) x_dev.append(step) y_dev.append(stdev(log_p)) from numpy.polynomial.polynomial import polyfit fig3, ax3 = plt.subplots(figsize=(7,7)) ax3.scatter(x_all,y_all, label = "Logarithmic Price") plt.plot(x_dev,y_dev,'-', color ='red', label = "Standard Deviation Logarithmic Mean") plt.title("Explicit By Group:SDLM \n10 Trades - With Policy", fontsize = 20, fontweight = "bold") plt.xlabel("Time", fontsize = 15, fontweight = "bold") plt.ylabel("Logarithmic Price", fontsize = 15, fontweight = "bold") plt.legend() b_time =pickle.load(open("Time_stats.p", "rb")) x = [x for x in range(1000)] y = list(b_time["Time Per Step"]) fig3, ax3 = plt.subplots(figsize=(7,7)) ax3.scatter(x,y) plt.scatter(x,y, color="blue") ```
github_jupyter
``` from torch_geometric.data import DataLoader import torch.distributions as D import matplotlib.pyplot as plt from rdkit import Chem, DataStructs from rdkit.Chem import AllChem, Draw, Descriptors, rdMolTransforms from rdkit import rdBase import glob import os from deepdock.utils.distributions import * from deepdock.utils.data import * from deepdock.models import * from deepdock.DockingFunction import optimze_conformation from scipy.optimize import basinhopping, brute, differential_evolution import copy # set the random seeds for reproducibility np.random.seed(123) torch.cuda.manual_seed_all(123) torch.manual_seed(123) %matplotlib inline %%time db_complex = PDBbind_complex_dataset(data_path='../data/dataset_CASF-2016_285.tar', min_target_nodes=None, max_ligand_nodes=None) print('Complexes in CASF2016 Core Set:', len(db_complex)) #device = 'cuda' if torch.cuda.is_available() else 'cpu' device = 'cpu' ligand_model = LigandNet(28, residual_layers=10, dropout_rate=0.10) target_model = TargetNet(4, residual_layers=10, dropout_rate=0.10) model = DeepDock(ligand_model, target_model, hidden_dim=64, n_gaussians=10, dropout_rate=0.10, dist_threhold=7.).to(device) checkpoint = torch.load('../Trained_models/DeepDock_pdbbindv2019_13K_minTestLoss.chk') model.load_state_dict(checkpoint['model_state_dict']) ``` ``` def dock_compound(data, dist_threshold=3., popsize=150): np.random.seed(123) torch.cuda.manual_seed_all(123) torch.manual_seed(123) model.eval() ligand, target, activity, pdbid = data ligand, target = ligand.to(device), target.to(device) pi, sigma, mu, dist, atom_types, bond_types, batch = model(ligand, target) pdb_id = pdbid[0] real_mol = Chem.MolFromMol2File('../../DeepDock/data/CASF-2016/coreset/' + pdb_id + '/' + pdb_id +'_ligand.mol2', sanitize=False, cleanupSubstructures=False, removeHs=False) opt = optimze_conformation(mol=real_mol, target_coords=target.pos.cpu(), n_particles=1, pi=pi.cpu(), mu=mu.cpu(), sigma=sigma.cpu(), dist_threshold=dist_threshold) #Define bounds max_bound = np.concatenate([[np.pi]*3, target.pos.cpu().max(0)[0].numpy(), [np.pi]*len(opt.rotable_bonds)], axis=0) min_bound = np.concatenate([[-np.pi]*3, target.pos.cpu().min(0)[0].numpy(), [-np.pi]*len(opt.rotable_bonds)], axis=0) bounds = (min_bound, max_bound) # Optimize conformations result = differential_evolution(opt.score_conformation, list(zip(bounds[0],bounds[1])), maxiter=500, popsize=int(np.ceil(popsize/(len(opt.rotable_bonds)+6))), mutation=(0.5, 1), recombination=0.8, disp=False, seed=123) # Get optimized molecule and RMSD opt_mol = opt.apply_changes(opt.mol, result['x']) ligCoords = torch.stack([torch.tensor(m.GetConformer().GetPositions()[opt.noHidx]) for m in [opt_mol]]) dist = opt.compute_euclidean_distances_matrix(ligCoords, opt.targetCoords).flatten().unsqueeze(1) result['num_MixOfGauss'] = torch.where(dist <= dist_threshold)[0].size(0) result['rmsd'] = Chem.rdMolAlign.AlignMol(opt_mol, real_mol, atomMap=list(zip(opt.noHidx,opt.noHidx))) result['pdb_id'] = pdb_id # Get score of real conformation ligCoords = torch.stack([torch.tensor(m.GetConformer().GetPositions()[opt.noHidx]) for m in [real_mol]]) dist = opt.compute_euclidean_distances_matrix(ligCoords, opt.targetCoords).flatten().unsqueeze(1) score_real_mol = opt.calculate_probablity(opt.pi, opt.sigma, opt.mu, dist) score_real_mol[torch.where(dist > dist_threshold)[0]] = 0. result['score_real_mol'] = pdb_id = score_real_mol.reshape(opt.n_particles, -1).sum(1).item() del ligCoords, dist, score_real_mol result['pkx'] = data[2][0].item() result['num_atoms'] = real_mol.GetNumHeavyAtoms() result['num_rotbonds'] = len(opt.rotable_bonds) result['rotbonds'] = opt.rotable_bonds #result['num_MixOfGauss'] = mu.size(0) return result %%time loader = DataLoader(db_complex, batch_size=1, shuffle=False) results = [] i = 0 for data in loader: try: results.append(dock_compound(data)) d = {} for k in results[0].keys(): if k != 'jac': d[k] = tuple(d[k] for d in results) torch.save(d, 'DockingResults_CASF2016_CoreSet.chk') results_df = pd.DataFrame.from_dict(d) results_df.to_csv('DockingResults_CASF2016_CoreSet.csv', index=False) i += 1 except: print(i, data[3]) #break i += 1 [if isinstance(r, list): -r[0] else -r for r in results_df.fun] results_df.head() plt.hist(results_df['nit'][results_df.success == True]) #plt.hist(results_df['nit'][results_df.success == False]) plt.hist(results_df['rmsd'][results_df.success == False]) plt.hist(results_df['rmsd'][results_df.success == True]) #plt.hist(results_df['rmsd'][results_df.success == False]) print('Mean RMSD of all compounds:', results_df.rmsd.mean()) print('Mean RMSD of compounds with succesful optimization:', results_df[results_df.success == True].rmsd.mean()) norm_scores = [-r[0] if isinstance(r, list) else -r for r in results_df.fun[results_df.success == True]] plt.scatter(norm_scores, results_df[results_df.success == True].pkx) norm_scores = [-r[0] if isinstance(r, list) else -r for r in results_df.fun[results_df.success == True]] plt.scatter(norm_scores, results_df[results_df.success == True].pkx) norm_scores = [-r[0] if isinstance(r, list) else -r for r in results_df.fun[results_df.success == True]] plt.scatter(results_df[results_df.success == True].num_rotbonds, norm_scores) norm_scores = [-r[0] if isinstance(r, list) else -r for r in results_df.fun[results_df.success == True]] plt.scatter(results_df[results_df.success == True].num_atoms, norm_scores) norm_scores = [-r[0] if isinstance(r, list) else -r for r in results_df.fun[results_df.success == True]] plt.scatter(results_df[results_df.success == True].num_atoms, results_df[results_df.success == True].rmsd) plt.scatter(results_df[results_df.success == False].num_atoms, results_df[results_df.success == False].rmsd) plt.scatter(results_df[results_df.success == True].num_rotbonds, results_df[results_df.success == True].rmsd) plt.scatter(results_df[results_df.success == False].num_rotbonds, results_df[results_df.success == False].rmsd) norm_scores = [-r[0] if isinstance(r, list) else -r for r in results_df.fun[results_df.success == True]] plt.scatter(norm_scores, results_df[results_df.success == True].score_real_mol) plt.plot([0,300], [0,300], '-r') d = torch.load('DockingResults_TestSet.chk') reults_df = pd.DataFrame.from_dict(d) reults_df.head() #%%time #loader = DataLoader(db_complex_train[5:1000], batch_size=1, shuffle=False) #data = next(iter(loader)) pdb_id = data[3][0] real_mol = Chem.MolFromMol2File('data/pdbbind_v2019_other_refined/' + pdb_id + '/' + pdb_id +'_ligand.mol2') mol = Chem.MolFromSmiles(Chem.MolToSmiles(Chem.MolFromMol2File('data/pdbbind_v2019_other_refined/' + pdb_id + '/' + pdb_id +'_ligand.mol2'))) Chem.rdchem.Mol.Compute2DCoords(mol) Chem.rdMolTransforms.CanonicalizeConformer(mol.GetConformer()) mol = Chem.AddHs(mol) AllChem.EmbedMolecule(mol, randomSeed=123) AllChem.MMFFOptimizeMolecule(mol) mol = Chem.RemoveHs(mol) opt = optimze_conformation(mol=mol, target_coords=torch.tensor([0]), n_particles=1, pi=torch.tensor([0]), mu=torch.tensor([0]), sigma=torch.tensor([0])) opt_mol = copy.copy(mol) values = t['x'] # aplply rotations [opt.SetDihedral(opt_mol.GetConformer(), opt.rotable_bonds[r], values[6+r]) for r in range(len(opt.rotable_bonds))] # aplply transformation matrix rdMolTransforms.TransformConformer(opt_mol.GetConformer(), opt.GetTransformationMatrix(values[:6])) opt_mol import py3Dmol p = py3Dmol.view(width=400,height=400) p.addModel(Chem.MolToMolBlock(opt_mol),'sdf') p.addModel(Chem.MolToMolBlock(real_mol),'sdf') p.setStyle({'stick':{}}) p.zoomTo() p.show() ```
github_jupyter
``` import fitz import time import re import os import sys from PIL import Image from tqdm import tqdm_notebook as tqdm print(fitz.__doc__) def recoverpix(doc, item): x = item[0] # xref of PDF image s = item[1] # xref of its /SMask pix1 = fitz.Pixmap(doc, x) if s == 0: # has no /SMask return pix1 # no special handling pix2 = fitz.Pixmap(doc, s) # create pixmap of /SMask entry # check that we are safe if not (pix1.irect == pix2.irect and \ pix1.alpha == pix2.alpha == 0 and \ pix2.n == 1): print("pix1", pix1, "pix2", pix2) #raise ValueError("unexpected situation") pix = fitz.Pixmap(pix1) # copy of pix1, alpha channel added pix.setAlpha(pix2.samples) # treat pix2.samples as alpha value pix1 = pix2 = None # free temp pixmaps return pix def pdf2pic(prefix, pdf_path, outputfolder_path): xreflist = [] imgcount = 0 pdf = fitz.open(pdf_path) for i in range(len(pdf)): imglist = pdf.getPageImageList(i) for img in imglist: if img[0] in xreflist: # this image has been processed continue xreflist.append(img[0]) # take note of the xref if img[2]!=0 and img[3]!= 0: #make sure the width or height of image is not zero. pix = recoverpix(pdf, img[:2]) # make pixmap from image if pix.n - pix.alpha < 4: # can be saved as PNG pass else: # must convert CMYK first pix0 = fitz.Pixmap(fitz.csRGB, pix) pix = pix0 pic_name = prefix+"_p"+str(i)+img[7]+".png" pix.writePNG(pic_name) current_dir = os.path.join(os.getcwd(), pic_name) destination_dir = os.path.join(outputfolder_path, pic_name) if os.path.exists(destination_dir): destination_dir += 'x' os.rename(current_dir, destination_dir) imgcount += 1 pix = None pdf.close() def pdfs2pics(pdffolder_path, outputfolder_path): pdfs= os.listdir(pdffolder_path) for pdf in tqdm(pdfs): if not os.path.isdir(pdf): #only open when it's not a folder prefix = pdf[0:40] pdf_path = os.path.join(pdffolder_path, pdf) pdf2pic(prefix, pdf_path, outputfolder_path) pdf_folder = "PDFS" current_path = os.getcwd() pdffolder_path = os.path.join(current_path, pdf_folder) outputfolder_path = os.path.join(current_path, "Pdf2Pic_output") if os.path.exists(outputfolder_path): print("foler already existed, please create a new folder!") raise SystemExit else: os.makedirs(outputfolder_path) pdfs2pics(pdffolder_path, outputfolder_path) ```
github_jupyter
# Develops the Calculation Model for the Online Calculator ``` import sys from math import nan, isnan, inf import numbers from importlib import reload import inspect from pprint import pprint import pandas as pd import numpy as np # import matplotlib pyplot commands from matplotlib.pyplot import * from IPython.display import Image, Markdown from qgrid import show_grid as sh # Show Plots in the Notebook %matplotlib inline # 'style' the plot like fivethirtyeight.com website style.use('bmh') rcParams['figure.figsize']= (10, 6) # set Chart Size rcParams['font.size'] = 14 # set Font size in Chart # Access the directory where some utility modules are located in the # actual heat pump calculator. #sys.path.insert(0, '../../heat-pump-calc/heatpump/') sys.path.insert(0, '../../heat-pump-calc/') import heatpump.library as lib reload(lib) import heatpump.hp_model reload(heatpump.hp_model) import heatpump.home_heat_model reload(heatpump.home_heat_model) sh(lib.df_city[['Name', 'ElecUtilities']]) lib.fuels() lib.city_from_id(45) lib.heat_pump_from_id(601) util_id = 1 utility = lib.util_from_id(util_id) utility utility = lib.util_from_id(1) inputs1 = dict( city_id=1, utility=utility, pce_limit=500, co2_lbs_per_kwh=1.1, exist_heat_fuel_id=2, exist_unit_fuel_cost=0.97852, exist_fuel_use=1600, exist_heat_effic=.8, exist_kwh_per_mmbtu=8, includes_dhw=True, occupant_count=3, includes_dryer=True, includes_cooking=False, elec_use_jan=550, elec_use_may=400, hp_model_id=575, low_temp_cutoff=5, garage_stall_count=2, garage_heated_by_hp=False, bldg_floor_area=3600, indoor_heat_setpoint=70, insul_level=3, pct_exposed_to_hp=0.46, doors_open_to_adjacent=False, bedroom_temp_tolerance='med', capital_cost=4500, rebate_dol=500, pct_financed=0.5, loan_term=10, loan_interest=0.05, hp_life=14, op_cost_chg=10, sales_tax=0.02, discount_rate=0.05, inflation_rate=0.02, fuel_esc_rate=0.03, elec_esc_rate=0.02, ) utility = lib.util_from_id(202) inputs2 = dict( city_id=45, utility=utility, pce_limit=500, co2_lbs_per_kwh=1.6, exist_heat_fuel_id=4, exist_unit_fuel_cost=8.0, exist_fuel_use=450, exist_heat_effic=.86, exist_kwh_per_mmbtu=8, includes_dhw=False, occupant_count=3, includes_dryer=False, includes_cooking=False, elec_use_jan=550, elec_use_may=400, hp_model_id=601, low_temp_cutoff=5, garage_stall_count=0, garage_heated_by_hp=False, bldg_floor_area=800, indoor_heat_setpoint=70, insul_level=2, pct_exposed_to_hp=1.0, doors_open_to_adjacent=False, bedroom_temp_tolerance='med', capital_cost=6500, rebate_dol=0, pct_financed=0.0, loan_term=10, loan_interest=0.05, hp_life=14, op_cost_chg=0, sales_tax=0.00, discount_rate=0.05, inflation_rate=0.02, fuel_esc_rate=0.03, elec_esc_rate=0.02, ) mc = heatpump.hp_model.HP_model(**inputs2) mc.run() pprint(mc.summary) mc.df_mo_dol_base mc.df_mo_dol_hp mc.df_cash_flow mc.df_mo_en_base mc.df_mo_en_hp mc.city # Model an average Enstar Home, probably somewhere between an # insulation level 1 and 2. Space Heating for Average Enstar # House is about 1326 CCF, from "accessible_UA.ipynb" m = heatpump.hp_model.HP_model(**inputs1) m.insul_level = 2 m.exist_heat_effic = 0.76 m.bldg_floor_area = 2100 m.garage_stall_count = 1 m.exist_fuel_use = None m.run() f_level2 = m.summary['fuel_use_base'] m.insul_level = 1 m.run() f_level1 = m.summary['fuel_use_base'] # Assuming 2/3 Level 2 and 1/3 Level 1 print(0.67 * f_level2 + 0.33 * f_level1) # Check whether COP is getting modeled correctly m.df_hourly.plot(x='db_temp', y='cop', marker='.', linewidth=0) off_days = set(m.df_hourly.query('running == False')['day_of_year']) off_days, len(off_days) dft = m.df_hourly[['db_temp', 'day_of_year']].copy() dft.head() sh(m.df_hourly[['db_temp', 'running']]) m ```
github_jupyter
``` import globals import re from rank_bm25 import BM25Okapi import math import sqlite3 from sqlite3 import Error import spacy from spacy.tokens import DocBin # Initialize spacy 'en' model, keeping only tagger component needed for lemmatization nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner']) from pathlib import Path from shutil import rmtree import os from os import listdir from os.path import isfile, join import nltk from nltk import sent_tokenize, tokenize, word_tokenize nltk.download("punkt") import sys import tqdm import json from wasabi import msg # Reset bm25 dirs globals.resetDir(globals.bm25_dir) globals.resetDir(globals.bm25_tmp_dir) # Merge all files into a single file merged_text_path = Path(globals.merged_text_dir) if merged_text_path.exists(): rmtree(merged_text_path) merged_text_path.mkdir(parents=True) inFiles = globals.getFilesInDir(os.path.join(globals.processed_text_dir, globals.blob_container_path)) counter = 0 with open(os.path.join(globals.merged_text_dir, globals.merged_text_file_name), "wb") as mergedFile: for fileToProcess in inFiles: counter += 1 if counter % 1000 == 0: print ("Merged file count:", counter) with open(os.path.join(globals.processed_text_dir, globals.blob_container_path, fileToProcess), "rb") as infile: mergedFile.write(infile.read()) print ("Merged file count:", counter) # Get the word counts over the merged file # Get the avg word count print ("Get word counts over merged files...") in_doc_counts = dict() total_word_count = 0.0 avg_word_count = 0.0 # Open the file in read mode text = open(os.path.join(globals.merged_text_dir, globals.merged_text_file_name), "r", encoding='utf-8') # This is the total bag of words counts across all docs word_counts = dict() # This is the total word counts / doc in_doc_total_word_counts = dict() # Loop through each line of the file doc_counter = 0 for line in text: doc_counter += 1 # Remove the leading spaces and newline character line = line.strip() # Split the line into words words = line.split(" ") total_word_count += len(words) in_doc_total_word_counts[doc_counter] = len(words) # Iterate over each word in line sc = {} for word in words: if word in word_counts: word_counts[word] = word_counts[word] + 1 else: word_counts[word] = 1 if word in sc: sc[word] = sc[word] + 1 else: sc[word] = 1 in_doc_counts[doc_counter] = sc print ("Get avg word counts...") avg_word_count = total_word_count / doc_counter totalDocCount = doc_counter bm25Values = dict() count1 = 0 count2 = 0 print ("Calculating avg BM25 values...") word_count_keys_as_list = list(word_counts.keys()) word_count_keys_len = len(word_count_keys_as_list) word_count_key_counter = 0 print ('Total terms to process:', word_count_keys_len) in_doc_counts_keys_as_list = list(in_doc_counts.keys()) # Get the number of sentences in_doc_counts_len = len(in_doc_counts_keys_as_list) in_doc_counts_counter = 0 # convert the in_doc_counts to a dataframe so it can be processed faster print ("Writing doc term counts to text...") counter = 0 with open(os.path.join(globals.bm25_tmp_dir, "doc_terms.txt"), "w", encoding="utf-8") as outfile: for doc_key in list(in_doc_counts.keys()): counter += 1 if counter % 100000 == 0: print ("Completed", counter, "of", in_doc_counts_len, "...") for doc_term in in_doc_counts[doc_key]: outfile.write(str(doc_key) + '\t' + str(doc_term) + '\t' + str(in_doc_counts[doc_key][doc_term]) + '\r\n') print ("Completed", counter, "of", in_doc_counts_len, "...") # load the doc terms into a indexed sqlite db print ("Loading doc term counts into indexed db...") conn = sqlite3.connect(os.path.join(globals.bm25_tmp_dir,"bm25.sqlite")) try: sql = "drop table if exists doc_terms" conn.execute(sql) sql = "create table doc_terms (doc_key int, term text, count int)" conn.execute(sql) except Error as e: print(e) # load the doc terms count = 0 rows = [] c = conn.cursor() with open(os.path.join(globals.bm25_tmp_dir, "doc_terms.txt"), encoding='utf-8') as fp: while True: line = fp.readline() if not line: break fields = line.split('\t') rows.append((fields[0], fields[1], fields[2])) count += 1 if count % 100000 == 0: sql = 'insert into doc_terms (doc_key, term, count) values (?,?,?)' c.executemany(sql, rows) conn.commit() print ("Inserted:", str(count)) rows = [] if len(rows) > 0: sql = 'insert into doc_terms (doc_key, term, count) values (?,?,?)' c.executemany(sql, rows) conn.commit() print ("Inserted:", str(count)) print ("Creating indexes...") sql = "create index idx_doc_terms_doc_key on doc_terms (doc_key)" conn.execute(sql) sql = "create index idx_doc_terms_terms on doc_terms (term)" conn.execute(sql) counter = 0 for key in word_count_keys_as_list: counter += 1 if counter % 100 == 0: print ("Completed", counter, "of", word_count_keys_len, "...") uniqueWord = key # print ('uniqueWord: ', uniqueWord) bm25Total = 0.0 wordCounter = 0.0 avgBM25 = 0.0 try: # get all the docs that contain this uniqueword cur = conn.cursor() cur.execute("SELECT doc_key, count FROM doc_terms WHERE term=?", (uniqueWord,)) rows = cur.fetchall() for row in rows: wordCountOfThisDoc = row[0] termFreqInDocument = row[1] termFreqInIndex = word_counts[key] this_bm25 = math.log((totalDocCount - termFreqInIndex + 0.5) / (termFreqInIndex + 0.5)) * (termFreqInDocument * (globals.k1 + 1)) / (termFreqInDocument + globals.k1 * (1 - globals.b + (globals.b * wordCountOfThisDoc / avg_word_count))) bm25Total += this_bm25 wordCounter += 1 avgBM25 = bm25Total / wordCounter if avgBM25 < 0: avgBM25 = 0 except Exception as e: ## as long as it is not a divid b 0, print error if str(e) != 'math domain error': print("error:", e) bm25Values[uniqueWord] = 0 bm25Values[uniqueWord] = avgBM25 # write avg bm25 values to file print ("Writing avg BM25 values...") with open(os.path.join(globals.bm25_dir, globals.bm25_file), 'w', encoding='utf-8') as f: for bm25_key in list(bm25Values.keys()): f.write(bm25_key + '\t' + str(bm25Values[bm25_key]) + '\r\n') ```
github_jupyter
# 诸闭壳层量子化学方法的密度矩阵 > 创建时间:2021-01-04;最后修改:2021-06-10 在这份简短笔记中,我们会回顾一些量子化学方法的密度矩阵,及其性质。大体的结论在下述表格中。 我们在这里只讨论闭壳层与实函数的情况。 | 方法 | RHF 轨道基函数 | 能量关系 | $P_p^q$ 对称性 | $\Gamma_{pr}^{qs}$ 对称性 | 1-RDM 与电子数 | $\Gamma_{pr}^{qs}$ 与 $P_p^q$ 的关系 | $\mathbf{P}$ 幂等性 | $P_i^a$ 为零 | $\Gamma_{ij}^{ab}$ 为零 | $F_p^q$ 对称性 | 1-RDM 偶极矩 | | ------- |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | RHF | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | | Full-CI | √ | √ | √ | √ | √ | √ | × | × | × | √ | √ | | MP2 | √ | √ | √ | √ | √ | × | × | √ | × | × | × | | CCSD | √ | √ | √ | √ | √ | √ | × | × | × | × | × | | CISD | √ | √ | √ | √ | √ | √ | × | × | × | × | × | | CASCI | × | √ | √ | √ | √ | √ | × | N/A | N/A | × | × | | CASSCF | × | √ | √ | √ | √ | √ | × | N/A | N/A | √ | √ | 之所以上面表格中 CASCI、CASSCF 方法不能说 $P_i^a$ (密度矩阵的占据-非占) 与 $\Gamma_{ij}^{ab}$ (2-RDM 的占据-非占),是因为它们并是非基于 RHF 参考态的方法,不存在确切的占据与未占轨道。 ## 预定义 ``` from pyscf import gto, scf, mp, cc, ci, mcscf, fci import numpy as np from functools import partial np.einsum = partial(np.einsum, optimize=True) np.set_printoptions(precision=5, linewidth=150, suppress=True) ``` 这里讨论的密度矩阵并非是 Full-CI 的情形矩阵,而是会因方法各异而不同的。 这里采用相对比较严格的 Einstein Summation Convention,即被求和角标必须是一个在上,一个在下。 这份文档中使用下述上下标: - $p, q, r, s, m, n$ 分子轨道 - $i, j$ 分子占据轨道,$a, b$ 分子未占轨道 - $\mu, \nu, \kappa, \lambda$ 原子轨道 分子轨道函数 $\phi_p (\boldsymbol{r})$ 与原子轨道函数 $\phi_\mu (\boldsymbol{r})$ 之间满足关系 ($C_p^\mu$ 称为原子轨道系数) $$ \phi_p (\boldsymbol{r}) = C_p^\mu \phi_\mu (\boldsymbol{r}) $$ 我们假定了研究体系必然是实函数,但我们暂且定义函数的共轭记号如下:(不使用 Einstein Summation) $$ \phi^p (\boldsymbol{r}) = \phi_p^* (\boldsymbol{r}) $$ 分子轨道之间是正交归一的,但原子轨道需要用重叠积分: $$ \int \phi_p (\boldsymbol{r}) \phi^q (\boldsymbol{r}) \, \mathrm{d} \boldsymbol{r} = \delta_p^q, \; \int \phi_\mu (\boldsymbol{r}) \phi^\nu (\boldsymbol{r}) \, \mathrm{d} \boldsymbol{r} = S_\mu^\nu $$ ## Full-CI 密度矩阵的定义与性质 这里只作理论上的讨论。Full-CI 密度矩阵程序上的实现会在后面呈现。 密度矩阵与约化密度的定义有关。由于我们只讨论闭壳层情形,因此波函数可以安全地写成空间坐标的函数。 ### 1-RDM 与基转换关系 我们回顾一阶约化密度 $\rho(\boldsymbol{r}; \boldsymbol{r}')$: $$ \rho(\boldsymbol{r}; \boldsymbol{r}') = \idotsint \Psi^* (\boldsymbol{r}, \boldsymbol{r}_2, \boldsymbol{r}_3, \cdots, \boldsymbol{r}_{n_\mathrm{elec}}) \Psi (\boldsymbol{r}', \boldsymbol{r}_2, \boldsymbol{r}_3, \cdots, \boldsymbol{r}_{n_\mathrm{elec}}) \, \mathrm{d} \boldsymbol{r}_2 \, \mathrm{d} \boldsymbol{r}_3 \cdots \, \mathrm{d} \boldsymbol{r}_{n_\mathrm{elec}} $$ 但是现在只有有限的基函数展开一阶约化密度;如果这组基函数是 RHF 分子轨道 $\{ \phi_{p} (\boldsymbol{r}) \}$,那么定义下述分子轨道基一阶约化密度矩阵 $P_p^q$ (One-Order Reduced Density Matrix, 1-RDM) $$ \rho(\boldsymbol{r}; \boldsymbol{r}') = P_p^q \phi^p (\boldsymbol{r}) \phi_q (\boldsymbol{r}') $$ 如果是原子轨道 $\{ \phi_\mu (\boldsymbol{r}) \}$,那么它称为原子轨道基 1-RDM $P_\mu^\nu$ $$ \rho(\boldsymbol{r}; \boldsymbol{r}') = P_\mu^\nu \phi^\mu (\boldsymbol{r}) \phi_\nu (\boldsymbol{r}') $$ 依据分子轨道与原子轨道间的关系,有 $$ \rho(\boldsymbol{r}; \boldsymbol{r}') = C_\mu^p P_p^q C_q^\nu \phi^\mu (\boldsymbol{r}) \phi_\nu (\boldsymbol{r}') $$ 因此,原子轨道基与分子轨道基的 1-RDM 间存在关系 $$ P_\mu^\nu = C_\mu^p P_p^q C_q^\nu $$ ### 1-RDM 迹 当 $\boldsymbol{r}, \boldsymbol{r}'$ 相同时,我们会将一阶约化密度简记为电子密度 $\rho(\boldsymbol{r}) = \rho(\boldsymbol{r}; \boldsymbol{r})$。 一阶约化密度具有积分为电子数的性质: $$ \int \rho(\boldsymbol{r}) \, \mathrm{d} \boldsymbol{r} = n_\mathrm{elec} $$ 在分子轨道基的表示下,上式可以写为 $$ P_p^q \int \phi^p (\boldsymbol{r}) \phi_q (\boldsymbol{r}) \, \mathrm{d} \boldsymbol{r} = P_p^q \delta^p_q = \mathrm{tr} (\mathbf{P}) = n_\mathrm{nelec} $$ ### 1-RDM 对称性 首先我们可以证明下述等式: $$ \begin{align} &\quad\ \iint \phi_p (\boldsymbol{r}) \rho(\boldsymbol{r}; \boldsymbol{r}') \phi^q (\boldsymbol{r}') \, \mathrm{d} \boldsymbol{r} \, \mathrm{d} \boldsymbol{r}' \\ &= \iint \phi_p (\boldsymbol{r}) P_r^s \phi^r (\boldsymbol{r}) \phi_s (\boldsymbol{r}') \phi^q (\boldsymbol{r}') \, \mathrm{d} \boldsymbol{r} \, \mathrm{d} \boldsymbol{r}' \\ &= P_r^s \delta_p^r \delta_s^q = P_p^q \end{align} $$ 对于上式,如果我们交换被积元变量 $\boldsymbol{r}, \boldsymbol{r'}$、并对表达式取共轭,得到 (不使用 Einstein Summation) $$ \iint \phi_p (\boldsymbol{r}) \rho(\boldsymbol{r}; \boldsymbol{r}') \phi^q (\boldsymbol{r}') \, \mathrm{d} \boldsymbol{r} \, \mathrm{d} \boldsymbol{r}' = \iint \phi_q (\boldsymbol{r}) \rho^*(\boldsymbol{r}'; \boldsymbol{r}) \phi^p (\boldsymbol{r}') \, \mathrm{d} \boldsymbol{r} \, \mathrm{d} \boldsymbol{r}' $$ 如果我们再利用实数情形下,根据一阶约化密度的定义,有 $\rho(\boldsymbol{r}; \boldsymbol{r}') = \rho(\boldsymbol{r}'; \boldsymbol{r}) = \rho^*(\boldsymbol{r}'; \boldsymbol{r})$,那么可以立即得到 (不使用 Einstein Summation) $$ P_p^q = \iint \phi_p (\boldsymbol{r}) \rho(\boldsymbol{r}; \boldsymbol{r}') \phi^q (\boldsymbol{r}') \, \mathrm{d} \boldsymbol{r} \, \mathrm{d} \boldsymbol{r}' = \iint \phi_q (\boldsymbol{r}) \rho(\boldsymbol{r}; \boldsymbol{r}') \phi^p (\boldsymbol{r}') \, \mathrm{d} \boldsymbol{r} \, \mathrm{d} \boldsymbol{r}' = P_q^p $$ 即 1-RDM 矩阵 $P_p^q$ 是对称矩阵。 ### 2-RDM 与 1-RDM 相同地,依据二阶约化密度 $\gamma (\boldsymbol{r}_1, \boldsymbol{r}_2; \boldsymbol{r}'_1, \boldsymbol{r}'_2)$ 的定义: $$ \gamma (\boldsymbol{r}_1, \boldsymbol{r}_2; \boldsymbol{r}'_1, \boldsymbol{r}'_2) = \idotsint \Psi^* (\boldsymbol{r}_1, \boldsymbol{r}_2, \boldsymbol{r}_3, \cdots, \boldsymbol{r}_{n_\mathrm{elec}}) \Psi (\boldsymbol{r}'_1, \boldsymbol{r}'_2, \boldsymbol{r}_3, \cdots, \boldsymbol{r}_{n_\mathrm{elec}}) \, \mathrm{d} \boldsymbol{r}_3 \cdots \, \mathrm{d} \boldsymbol{r}_{n_\mathrm{elec}} $$ 用分子轨道基作展开,可以定义分子轨道基的二阶约化密度矩阵 $\Gamma_{pr}^{qs}$ (Two-Order Reduced Density Matrix, 2-RDM) $$ \gamma (\boldsymbol{r}_1, \boldsymbol{r}_2; \boldsymbol{r}'_1, \boldsymbol{r}'_2) = \Gamma_{pr}^{qs} \phi^p (\boldsymbol{r}_1) \phi_q (\boldsymbol{r}'_1) \phi^r (\boldsymbol{r}_2) \phi_s (\boldsymbol{r}'_2) $$ 原子与分子轨道基转换也与 1-RDM 类似: $$ \Gamma_{\mu \kappa}^{\nu \lambda} = C_\mu^p C^\nu_q \Gamma_{pr}^{qs} C_\kappa^r C^\lambda_s $$ ### 2-RDM 与 1-RDM 的关系 出于全同粒子的性质,2-RDM 与 1-RDM 之间存在关系: $$ \rho(\boldsymbol{r}_1; \boldsymbol{r}'_1) = \frac{1}{n_\mathrm{elec} - 1} \iint \gamma (\boldsymbol{r}_1, \boldsymbol{r}_2; \boldsymbol{r}'_1, \boldsymbol{r}_2) \, \mathrm{d} \boldsymbol{r}_2 $$ 对上式展开并作一部分积分后,可以得到 $$ P_p^q \phi^p (\boldsymbol{r}_1) \phi_q (\boldsymbol{r}'_1) = \frac{1}{n_\mathrm{elec} - 1} \Gamma_{pr}^{qm} \phi^p (\boldsymbol{r}_1) \phi_q (\boldsymbol{r}'_1) \delta_m^r $$ 由于上式要在任意的 $\boldsymbol{r}_1, \boldsymbol{r}'_1$ 的取值下成立,因此可以认为 $$ P_p^q = \frac{1}{n_\mathrm{elec} - 1} \Gamma_{pr}^{qm} \delta_m^r $$ 注意上式要对等式右边作关于 $r, m$ 角标的求和。 ### 2-RDM 对称性 分析 2-RDM 对称性相对比较麻烦。这里就略过讨论了。我们仅指出,实数闭壳层下的 2-RDM 应当具有二重对称性,而不具有更高的对称性: $$ \Gamma_{pr}^{qs} = \Gamma_{rp}^{sq} $$ ### 密度矩阵与能量 这是最关键的一个性质。密度矩阵可以用来表示电子态的能量。 现在记原子轨道基组下的单电子算符积分为 $h_\mu^\nu$、双电子算符积分为 $g_{\mu \kappa}^{\nu \lambda}$,其中单电子算符包含动能、原子核-电子库伦势能、电场势能等贡献,双电子算符包含电子-电子库伦势能贡献。那么,体系单点能为 $$ E_\mathrm{tot} = E_\mathrm{elec} + E_\mathrm{nuc} = h_\mu^\nu P_\nu^\mu + \frac{1}{2} g_{\mu \kappa}^{\nu \lambda} \Gamma_{\nu \lambda}^{\mu \kappa} + E_\mathrm{nuc} $$ ### 1-RDM 与偶极矩 Full-CI 的 1-RDM 可以直接用以计算偶极矩;以 $z$ 轴方向施加的电场为例: $$ d_z = - z_\mu^\nu P_\nu^\mu $$ 其中 $z_\mu^\nu = \langle \mu | z | \nu \rangle$。 ### 广义 Fock 矩阵 广义 Fock 矩阵定义为 $$ F_p^q = h_p^r P_r^q + g_{pr}^{ms} \Gamma_{ms}^{qr} $$ 特别地,在 RHF 下,它是对角矩阵。而在 Full-CI 下,它会是对称矩阵。 ## RHF 密度矩阵特有性质 ### 幂等性 在 Conanical-HF 下,幂等性是几乎显然的:$P_p^q$ 一定是对角矩阵;如果 $p$ 所代表的轨道是占据轨道,那么一定填了因为填了两个电子而值为 2,否则为零。因此,Conanical-HF 下一定满足 $$ P_p^m P_m^q = 2 P_p^q $$ 一般程序都只会给出 Conanical-HF 的结果。但若讨论 Nonconanical-HF 时 $P_p^q$ 未必是对角矩阵,但上述结论应仍然成立。 ### 非占-占据部分为零 Hartree-Fock 方法严格地将轨道分为占据与非占据。因此,Canonical 或 Nonconanical HF 方法都会保证 1-RDM 是块状对角化的;即在占据-非占 $P_i^a$,非占-占据 $P_a^i$,非占-非占 $P_a^b$ 均严格为零。对于 2-RDM 是类似的。 由于 Hartree-Fock 方法没有考虑非占轨道的贡献,因此任何 Post-HF 方法均一定程度上有激发态的贡献。一般来说,非占-非占的 $P_a^b$ 贡献总是存在的;但占据-非占或非占-占据的 $P_i^a$ 与 $P_a^i$ 则未必存在。 ## 通用计算函数 下面的文档仅仅是验证开头的表格的代码。 ### 分子定义:水分子 - `mol` 水分子实例; - `nelec` $n_\mathrm{elec}$ 电子数; - `nocc` $n_\mathrm{occ}$ 占据轨道数; - `h` 原子轨道基 $h_\mu^\nu$,维度 $(\mu, \nu)$; - `g` 原子轨道基 $g_{\mu \kappa}^{\nu \lambda}$,维度 $(\mu, \nu, \kappa, \lambda)$; - `S` 原子轨道基 $S_\mu^\nu$,维度 $(\mu, \nu)$; - `mf_rhf` RHF 实例。 ``` mol = gto.Mole() mol.atom = """ O 0. 0. 0. H 0. 0. 1. H 0. 1. 0. """ mol.basis = "6-31G" mol.verbose = 0 mol.build() nelec = mol.nelectron nocc = mol.nelec[0] nelec, nocc h = mol.intor("int1e_kin") + mol.intor("int1e_nuc") g = mol.intor("int2e") S = mol.intor("int1e_ovlp") mf_rhf = scf.RHF(mol).run() ``` ### 验证能量表达式 验证 $$ E_\mathrm{tot} = h_\mu^\nu P_\nu^\mu + \frac{1}{2} g_{\mu \kappa}^{\nu \lambda} \Gamma_{\nu \lambda}^{\mu \kappa} + E_\mathrm{nuc} = h_p^q P_q^p + \frac{1}{2} g_{pr}^{qs} \Gamma_{qs}^{pr} + E_\mathrm{nuc} $$ ``` eng_nuc = mol.energy_nuc() def verify_energy_relation(eng, eng_nuc, rdm1, rdm2, h_mo, g_mo): return np.allclose(np.einsum("pq, qp ->", h_mo, rdm1) + 0.5 * np.einsum("pqrs, qpsr ->", g_mo, rdm2) + eng_nuc, eng) ``` ### 验证 1-RDM 对称性 验证 $P_p^q = P_q^p$。 ``` def verify_rdm1_symm(rdm1): # Output: 1-RDM symmetric property return np.allclose(rdm1, rdm1.T) ``` ### 验证 2-RDM 对称性 验证 $\Gamma_{pr}^{qs} = \Gamma_{rp}^{sq}$。 ``` def verify_rdm2_symm(rdm2): return np.allclose(rdm2, np.einsum("pqrs -> rspq", rdm2)) ``` ### 验证 1-RDM 的迹 验证 $P_p^r \delta_r^p = n_\mathrm{elec}$。 ``` def verify_rdm1_tr(rdm1): return np.allclose(rdm1.trace(), nelec) ``` ### 验证 1-RDM 与 2-RDM 的关系 验证 $P_p^q = (n_\mathrm{elec} - 1)^{-1} \Gamma_{pr}^{qm} \delta_m^r$。 ``` def verify_rdm12_relation(rdm1, rdm2): return np.allclose(rdm1, (nelec - 1)**-1 * rdm2.diagonal(axis1=-1, axis2=-2).sum(axis=-1)) ``` ### 验证 1-RDM 幂等性 验证 $P_p^m P_m^q = 2 P_p^q$。 ``` def verify_rdm1_idomp(rdm1): return np.allclose(rdm1 @ rdm1, 2 * rdm1) ``` ### 验证 $P_i^a$ 为零 这里实际上同时验证 $P_a^i$ 是否为零。 ``` def verify_rdm1_ov(rdm1): mat1 = rdm1[nocc:, :nocc] mat2 = rdm1[:nocc, nocc:] return np.allclose(mat1, np.zeros_like(mat1)) and np.allclose(mat2, np.zeros_like(mat2)) ``` ### 验证 $\Gamma_{ij}^{ab}$ 为零 ``` def verify_rdm2_ovov(rdm2): mat = rdm2[:nocc, nocc:, :nocc, nocc:] return np.allclose(mat, np.zeros_like(mat)) ``` ### 验证广义 Fock 矩阵对称性 $$ F_p^q = h_p^r P_r^q + g_{pr}^{ms} \Gamma_{ms}^{qr} $$ ``` def verify_gF_symm(rdm1, rdm2, h_mo, g_mo): gF = np.einsum("pr, rq -> pq", h_mo, rdm1) + np.einsum("pmrs, mqsr -> pq", g_mo, rdm2) return np.allclose(gF, gF.T, atol=1e-4) ``` ### 偶极矩的验证 通过 1-RDM 计算的偶极矩为 (不考虑原子核影响) $$ d_z = - z_\mu^\nu P_\nu^\mu $$ 但另一种偶极矩的计算方式是对 $h_\mu^\nu$ 作更改,求得该情形下的能量作数值差分得到。数值差分的间隙设定为 1e-4 单位电场强度。 ``` h_field = 1e-4 def get_hcore_p(mol_=mol): return mol.intor("int1e_kin") + mol.intor("int1e_nuc") - h_field * mol.intor("int1e_r")[2] def get_hcore_m(mol_=mol): return mol.intor("int1e_kin") + mol.intor("int1e_nuc") + h_field * mol.intor("int1e_r")[2] mf_rhf_p, mf_rhf_m = scf.RHF(mol), scf.RHF(mol) mf_rhf_p.get_hcore = get_hcore_p mf_rhf_m.get_hcore = get_hcore_m mf_rhf_p.run(), mf_rhf_m.run() charges = mol.atom_charges() coords = mol.atom_coords() nucl_dip = np.einsum('i,ix->x', charges, coords) def verify_dip(method, rdm1, z_intg): mf_met_m, _, _, _ = method(mf_rhf_m) mf_met_p, _, _, _ = method(mf_rhf_p) dip_num = (mf_met_p.e_tot - mf_met_m.e_tot) / (2 * h_field) + nucl_dip[2] dip_rdm1 = - (rdm1 * z_intg).sum() + nucl_dip[2] return np.allclose(dip_num, dip_rdm1, atol=1e-4) ``` ## 各种方法的验证 ### 总验证程序 ``` def verify_all(method): # rdm1, rdm2 here are both in mo_basis mf_met, C, rdm1, rdm2 = method(mf_rhf) h_mo = C.T @ h @ C g_mo = np.einsum("up, vq, uvkl, kr, ls -> pqrs", C, C, g, C, C) z_intg = C.T @ mol.intor("int1e_r")[2] @ C print("=== Energy Relat === ", verify_energy_relation(mf_met.e_tot, eng_nuc, rdm1, rdm2, h_mo, g_mo)) print("=== 1-RDM Symm === ", verify_rdm1_symm(rdm1)) print("=== 2-RDM Symm === ", verify_rdm2_symm(rdm2)) print("=== 1-RDM Trace === ", verify_rdm1_tr(rdm1)) print("=== 12-RDM Relat === ", verify_rdm12_relation(rdm1, rdm2)) print("=== 1-RDM Idomp === ", verify_rdm1_idomp(rdm1)) print("=== 1-RDM ov === ", verify_rdm1_ov(rdm1)) print("=== 2-RDM ovov === ", verify_rdm2_ovov(rdm2)) print("=== GenFock Symm === ", verify_gF_symm(rdm1, rdm2, h_mo, g_mo)) print("=== 1-RDM Dipole === ", verify_dip(method, rdm1, z_intg)) ``` ### RHF ``` def method_rhf(mf_rhf): mf_met = mf_rhf C = mf_rhf.mo_coeff Cinv = np.linalg.inv(C) # In AO basis rdm1 = mf_rhf.make_rdm1() rdm2 = np.einsum("uv, kl -> uvkl", rdm1, rdm1) - 0.5 * np.einsum("uv, kl -> ukvl", rdm1, rdm1) # Transform to MO basis rdm1 = np.einsum("pu, uv, qv -> pq", Cinv, rdm1, Cinv) rdm2 = np.einsum("pu, qv, uvkl, rk, sl -> pqrs", Cinv, Cinv, rdm2, Cinv, Cinv) return mf_met, C, rdm1, rdm2 verify_all(method_rhf) ``` ### Full-CI ``` def method_fci(mf_rhf): mf_met = fci.FCI(mf_rhf).run() C = mf_rhf.mo_coeff # In MO basis rdm1, rdm2 = mf_met.make_rdm12(mf_met.ci, mol.nao, mol.nelec) return mf_met, C, rdm1, rdm2 verify_all(method_fci) ``` ### MP2 ``` def method_mp2(mf_rhf): mf_met = mp.MP2(mf_rhf).run() C = mf_rhf.mo_coeff # In MO basis rdm1, rdm2 = mf_met.make_rdm1(), mf_met.make_rdm2() return mf_met, C, rdm1, rdm2 verify_all(method_mp2) ``` ### CCSD ``` def method_ccsd(mf_rhf): mf_met = cc.CCSD(mf_rhf).run() C = mf_rhf.mo_coeff # In MO basis rdm1, rdm2 = mf_met.make_rdm1(), mf_met.make_rdm2() return mf_met, C, rdm1, rdm2 verify_all(method_ccsd) ``` ### CISD ``` def method_cisd(mf_rhf): mf_met = ci.CISD(mf_rhf).run() C = mf_rhf.mo_coeff # In MO basis rdm1, rdm2 = mf_met.make_rdm1(), mf_met.make_rdm2() return mf_met, C, rdm1, rdm2 verify_all(method_cisd) ``` ### CASCI ``` def method_casci(mf_rhf): mf_met = mcscf.CASCI(mf_rhf, ncas=4, nelecas=4).run() C = mf_met.mo_coeff Cinv = np.linalg.inv(C) # In AO basis rdm1, rdm2 = mcscf.addons.make_rdm12(mf_met) # Transform to MO basis rdm1 = np.einsum("pu, uv, qv -> pq", Cinv, rdm1, Cinv) rdm2 = np.einsum("pu, qv, uvkl, rk, sl -> pqrs", Cinv, Cinv, rdm2, Cinv, Cinv) return mf_met, C, rdm1, rdm2 verify_all(method_casci) ``` ### CASSCF ``` def method_casscf(mf_rhf): mf_met = mcscf.CASSCF(mf_rhf, ncas=4, nelecas=4).run() C = mf_met.mo_coeff Cinv = np.linalg.inv(C) # In AO basis rdm1, rdm2 = mcscf.addons.make_rdm12(mf_met) # Transform to MO basis rdm1 = np.einsum("pu, uv, qv -> pq", Cinv, rdm1, Cinv) rdm2 = np.einsum("pu, qv, uvkl, rk, sl -> pqrs", Cinv, Cinv, rdm2, Cinv, Cinv) return mf_met, C, rdm1, rdm2 verify_all(method_casscf) ``` ## 补充 感谢 [hebrewsnabla](https://github.com/hebrewsnabla) 对 CASCI、CASSCF 密度矩阵的讨论。
github_jupyter
## 论文引用网络中的节点分类任务 在这一教程中,我们将展示 GraphScope 如何结合图分析、图查询和图学习的能力,处理论文引用网络中的节点分类任务。 在这个例子中我们使用的是 [ogbn-mag](https://ogb.stanford.edu/docs/nodeprop/#ogbn-mag) 数据集。"ogbn" 是由微软学术关系图(Microsoft Academic Graph)的子集组成的异构图网络。该图中包含4种类型的实体(即论文、作者、机构和研究领域),以及连接两个实体的四种类型的有向关系边。 我们需要处理的任务是,给出异构的 ogbn-mag 数据,在该图上预测每篇论文的类别。这是一个节点分类任务,该任务可以归类在各个领域、各个方向或研究小组的论文,通过对论文属性和引用图上的结构信息对论文进行分类。在该数据中,每个论文节点包含了一个从论文标题、摘要抽取的 128 维 word2vec 向量作为表征,该表征是经过预训练提前获取的;而结构信息是在计算过程中即时计算的。 这一教程将会分为以下几个步骤: - 通过gremlin交互式查询图; - 执行图算法做图分析; - 执行基于图数据的机器学习任务; ``` # Install graphscope package if you are NOT in the Playground !pip3 install graphscope !pip3 uninstall -y importlib_metadata # Address an module conflict issue on colab.google. Remove this line if you are not on colab. # Import the graphscope module import graphscope graphscope.set_option(show_log=False) # enable logging # Load the obgn_mag dataset as a graph from graphscope.dataset import load_ogbn_mag graph = load_ogbn_mag() ``` ## Interactive query with gremlin 在此示例中,我们启动了一个交互查询引擎,然后使用图遍历来查看两位给定作者共同撰写的论文数量。为了简化查询,我们假设作者可以分别由ID 2 和 4307 唯一标识。 ``` # Get the entrypoint for submitting Gremlin queries on graph g. interactive = graphscope.gremlin(graph) # Count the number of papers two authors (with id 2 and 4307) have co-authored. papers = interactive.execute( "g.V().has('author', 'id', 2).out('writes').where(__.in('writes').has('id', 4307)).count()" ).one() print("result", papers) ``` ## Graph analytics with analytical engine 继续我们的示例,下面我们在图数据中进行图分析来生成节点结构特征。我们首先通过在特定周期内从全图中提取论文(使用Gremlin!)来导出一个子图,然后运行 k-core 分解和三角形计数以生成每个论文节点的结构特征。 ``` # Exact a subgraph of publication within a time range. sub_graph = interactive.subgraph("g.V().has('year', inside(2014, 2020)).outE('cites')") # Project the subgraph to simple graph by selecting papers and their citations. simple_g = sub_graph.project(vertices={"paper": []}, edges={"cites": []}) # compute the kcore and triangle-counting. kc_result = graphscope.k_core(simple_g, k=5) tc_result = graphscope.triangles(simple_g) # Add the results as new columns to the citation graph. sub_graph = sub_graph.add_column(kc_result, {"kcore": "r"}) sub_graph = sub_graph.add_column(tc_result, {"tc": "r"}) ``` ## Graph neural networks (GNNs) 接着我们利用生成的结构特征和原有特征通过 GraphScope 的学习引擎来训练一个学习模型。 在本例中,我们训练了 GCN 模型,将节点(论文)分类为349个类别,每个类别代表一个出处(例如预印本和会议)。 ``` # Define the features for learning, # we chose original 128-dimension feature and k-core, triangle count result as new features. paper_features = [] for i in range(128): paper_features.append("feat_" + str(i)) paper_features.append("kcore") paper_features.append("tc") # Launch a learning engine. here we split the dataset, 75% as train, 10% as validation and 15% as test. lg = graphscope.graphlearn( sub_graph, nodes=[("paper", paper_features)], edges=[("paper", "cites", "paper")], gen_labels=[ ("train", "paper", 100, (0, 75)), ("val", "paper", 100, (75, 85)), ("test", "paper", 100, (85, 100)), ], ) # Then we define the training process, use internal GCN model. import graphscope.learning from graphscope.learning.examples import GCN from graphscope.learning.graphlearn.python.model.tf.optimizer import get_tf_optimizer from graphscope.learning.graphlearn.python.model.tf.trainer import LocalTFTrainer def train(config, graph): def model_fn(): return GCN( graph, config["class_num"], config["features_num"], config["batch_size"], val_batch_size=config["val_batch_size"], test_batch_size=config["test_batch_size"], categorical_attrs_desc=config["categorical_attrs_desc"], hidden_dim=config["hidden_dim"], in_drop_rate=config["in_drop_rate"], neighs_num=config["neighs_num"], hops_num=config["hops_num"], node_type=config["node_type"], edge_type=config["edge_type"], full_graph_mode=config["full_graph_mode"], ) graphscope.learning.reset_default_tf_graph() trainer = LocalTFTrainer( model_fn, epoch=config["epoch"], optimizer=get_tf_optimizer( config["learning_algo"], config["learning_rate"], config["weight_decay"] ), ) trainer.train_and_evaluate() # hyperparameters config. config = { "class_num": 349, # output dimension "features_num": 130, # 128 dimension + kcore + triangle count "batch_size": 500, "val_batch_size": 100, "test_batch_size": 100, "categorical_attrs_desc": "", "hidden_dim": 256, "in_drop_rate": 0.5, "hops_num": 2, "neighs_num": [5, 10], "full_graph_mode": False, "agg_type": "gcn", # mean, sum "learning_algo": "adam", "learning_rate": 0.01, "weight_decay": 0.0005, "epoch": 5, "node_type": "paper", "edge_type": "cites", } # Start traning and evaluating train(config, lg) ```
github_jupyter
# Introduction Derivatives are used to solve a large variety of modern-day problems. There are three general methods used to calculate derivatives: 1. Symbolic differentiation 2. Numerical differentiation 3. Automatic differentiation Symbolic differentiation can be very useful, but there are some functions that do not have a symbolic derivative. Additionally, symbolic differentiation can be very costly, as it may recalculate the same expressions many times, or the expression for the derivative may grow exponentially. Sometimes we can avoid these issues by numerically differentiating our function. Often this means using finite differences. The method of finite differences calculates derivative at point $x$ by using the following definition: $$f'(x) = \lim_{h\to 0} f(x) \frac{f(x+h)-f(x)}{h}$$ Finite differences can also be very effective in certain situations. However, as with symbolic differentiation, finite differences has its problems. The biggest issue is that to obtain the most accurate estimate of $f'(x)$, we would like to make $h$ as small as possible; in fact, we would like $h$ to be infinitely small. However, we cannot *actually* make $h$ zero, and thus we must compromise and choose some small-but-not-zero value for $h$, which brings us to our second problem: we cannot precisely represent all numbers. Our machines only have a certain level of precision. When we compute our derivatives numerically we introduce error by approximating values to their closest machine equivalent. To avoid these issues, we turn to our third approach: automatic differentiation. # Background Automatic differentiation (AD) allows us to calculate the derivative to machine precision while avoiding symbolic differentiation's shortcomings. Our package implements on version of AD, the forward mode, by using an extension of the real numbers called the "dual numbers." The forward mode of AD finds the derivative of all intermediate variables with respect to our independent variable and combines them into a final derivative using the chain rule. AD can also be used in "reverse mode," which we will not discuss in detail her, but this method shares many of the same characteristics as forward mode. However, the reverse mode computes derivatives of the dependent variable with respect to the intermediate variables. #### Dual Numbers To carry out the forward mode AD we utilize dual numbers. Dual numbers are defined as numbers of the form $x + x'\epsilon$, where $\epsilon^2=0$ and $x \in \mathbb{R}^n$. We use operator overloading to redefine elementary operations to suit our problem. To see why this is useful, let's examine how dual numbers behave under different mathematical operations: Addition: $(x+x'\epsilon) + (y + y'\epsilon) = x+y + (x'+y')\epsilon$ Subtraction: $(x+x'\epsilon) - (y + y'\epsilon) = x-y + (x'-y')\epsilon$ So far, this is as we might expect. Multiplication: $(x+x'\epsilon) \times (y + y'\epsilon) = xy + y(x')\epsilon+ x(y')\epsilon$ Our definition of $\epsilon$ allows the multiplication of dual numbers to behave like the product rule. Division: $\frac{(x+x'\epsilon)}{(y + y'\epsilon)} = \frac{(x+x'\epsilon)(y - y'\epsilon)}{(y + y'\epsilon)(y - y'\epsilon)} = \frac{xy+xy'\epsilon-yx'\epsilon}{y^2} = \frac{x}{y}+\epsilon \frac{xy'-yx'}{y^2}$ Division also follows rules for derivatives. Finally, observe how functions of dual numbers behave: $f(x+x'\epsilon) = f(x)+\epsilon f'(x)x'$ Which implies the following: $g(f(x+x'\epsilon)) = g(f(x)+\epsilon f'(x)x') = g(f(x))+\epsilon g'(f(x))f'(x)x'$ The above example illustrates how dual numbers can be used to simultaneously calculate the value of a function at a point, $g(f(x))$, and the derivative, $g'(f(x))f'(x)x'$. #### Tracing the computational graph By keeping track of the intermediate values of the derivative we can calculate the derivative of composition of many elementary functions. We can picture this decomposition as a graph or table. For example, consider the following function$^{1}$: $$f\left(x, y, z\right) = \dfrac{1}{xyz} + \sin\left(\dfrac{1}{x} + \dfrac{1}{y} + \dfrac{1}{z}\right).$$ If we want to evaluate $f$ at $\left(1, 2, 3\right)$, we can construct the following table which keeps track for the elementary function, current value, and the elementary function derivative (evaluated with respect to all our variables). | Trace | Elementary Function | Current Value | Elementary Function Derivative | $\nabla_{x}$ Value | $\nabla_{y}$ Value | $\nabla_{z}$ Value | | :---: | :-----------------: | :-----------: | :----------------------------: | :-----------------: | :-----------------: | :-----------------: | | $x_{1}$ | $x_{1}$ | $1$ | $\dot{x}_1$ | $1$ | $0$ | $0$ | | $x_{2}$ | $x_{2}$ | $2$ | $\dot{x}_2$ | $0$ | $1$ | $0$ | | $x_{3}$ | $x_{3}$ | $3$ | $\dot{x}_3$ | $0$ | $0$ | $1$ | | $x_{4}$ | $1/x_{1}$ | $1$ | $-\dot{x}_{1}/x_{1}^{2}$ | $-1$ | $0$ | $0$ | | $x_{5}$ | $1/x_{2}$ | $\frac{1}{2}$ | $-\dot{x}_{2}/x_{2}^{2}$ | $0$ | $-\frac{1}{4}$ | $0$ | | $x_{6}$ | $1/x_{3}$ | $\frac{1}{3}$ | $-\dot{x}_{3}/x_{3}^{2}$ | $0$ | $0$ | $-\frac{1}{9}$ | | $x_{7}$ | $x_4 x_5 x_6$ | $\frac{1}{6}$ | $x_4(x_5\dot{x}_6 + x_6\dot{x}_5) + x_5x_6\dot{x}_4$ | $-\frac{1}{6}$ | $-\frac{1}{12}$ | $-\frac{1}{18}$ | | $x_{8}$ | $x_4 + x_5 + x_6$ | $\frac{11}{6}$ | $\dot{x}_4 + \dot{x}_5 + \dot{x}_6$ | $-1$ | $-\frac{1}{4}$ | $-\frac{1}{9}$ | | $x_{9}$ | $sin(x_8)$ | $sin(\frac{11}{6})$ | $cos(x_8)\dot{x}_8$ | $-cos(\frac{11}{6})$ | $-\frac{1}{4}cos(\frac{11}{6})$ | $-\frac{1}{9}cos(\frac{11}{6})$ | | $x_{10}$ | $x_7 + x_9$ | $sin(\frac{11}{6})+\frac{1}{6}$ | $\dot{x}_7 + \dot{x}_9$ | $-cos(\frac{11}{6})-\frac{1}{6}$ | $-\frac{1}{4}cos(\frac{11}{6})-\frac{1}{12}$ | $-\frac{1}{9}cos(\frac{11}{6})-\frac{1}{18}$ | As this example shows, we can use AD for both scalar and vector functions. AD can also be used for vector valued functions. The follow sections will make the implementation of these varients clear. $^1$Example from Harvard CS207 Homework 4 # Package Usage ## Installation Please follow these two steps in sequence to install: 1. Clone https://github.com/autodiff-cs207/AutoDiff.git 2. After cloning, please run: ` python setup.py install` ## Package Import Design `>>> from AutoDiff import DiffObj, Variable, Constant` `>>> from AutoDiff import MathOps as mo` `>>> from AutoDiff import MathOps as ops` **We have created a comprehensive Jupyter Notebook, which demostrates the functionality of our AutoDiff package. The notebook may be found at [this link](https://github.com/autodiff-cs207/AutoDiff/blob/master/AutoDiff/AutoDiff_Demo.ipynb). We have also provided a demostration of how our AutoDiff may be used to calculate roots using Newton-Raphson method for the following function:** $$ f(x) = 5^{\left(1 + sin\left(log\left(5 + x^2\right)\right)\right)} - 10 $$ ## Example package use Each variable and constant term which appears in a function that the user wants to differentiate, should be an instance of the classes `Variable` and `Constant` respectively. ### Declaring Variables and Constants All variables and constants need to be instances of the classes `Variable` and `Constant` respectively. `>>> x = Variable('x') ` `>>> y = Variable('y') ` `>>> c1 = Constant('c1', 5.0) ` ### Evaluating Function and Calculating Gradients Suppose the user wants to differentiate $5sin(x + y)$ at $x= \pi/2$ and $y= \pi/3$: 1. First declare the variables and constants which will be used in the function: `>>> x = Variable('x') ` `>>> y = Variable('y') ` `>>> c1 = Constant('c1', 5.0) ` 2. Next assign the desired function to $f$: `>>> f = c1*mo.sin(x + y)` $f$ is now an object of the class DiffObj. 3. Now create a dictionary which stores the values at which you want to evaluate $f$ and its gradient: `>>> val_dict = {'x': math.pi/2, 'y': math.pi/3} ` 4. Now we are ready to evaluate our function. We can do this by invoking the method `get_val` on $f$. `>>> print(f.get_val(val_dict)` ` 2.5000000000000018` Simiarly we can now get the gradient by invoking `get_der` on $f$: `>>> print(f.get_der(val_dict)` ` {'x': -4.330127018922193, 'y': -4.330127018922193}` 5. Lastly, if the user just needs the gradient with respect to $x$, it can be done as follows: `>>> f.get_der(val_dict, with_respect_to=['x'])` ` {'x': -4.330127018922193}` # Software Organization ### Directory Structure ``` AutoDiff/ AutoDiff/ AutoDiff.py tests/ test_AutoDiff.py README.md AutoDiff_Demo.ipynb __init__.py setup.py LICENSE ``` ### Modules and Functionality We currently have a module named AutoDiff which contains the classes DiffObj(), Variable(DiffObj), Constant(DiffObj), MathOps(DiffObj). Their basic functionality is defined below in the Implementation Details section. Within these modules, we use the math library to access functions like math.sin(). ### Testing Our test suite resides in the AutoDiff directory (as shown in the directory structure above), and we have used both TravisCI and Coveralls to automate testing. In addition, we have also written DocTest code for each class function and our package passes all doctest by running doctest.testmod(). ### Distribution We eventually plan to dsitribute via PyPI, however for now we have provided package installation instructions above. There are two steps to it: (1) Clone our AutoDiff repo (we provide the repo path above), and (2) Run the `setup.py` file, which we provide with the repo. # Implementation Details Our basic approach is to capture the essence of Chain Rule through use of recursion for calculating derivatives. Each mathematical expression, from a simple constant (like 5.0) or variable (such as 'x'), to complex expressions like $\log(sin(x^2 + yz))$, are all instances of our class `DiffObj` or of sub-classes which inherit from `DiffObj`. And the `DiffObj` class requies that anything which inherits from it, implements two functions: `get_val` for evaluating a mathematical expression, and `get_der` for evaluating gradient of a mathematical expression. This allows us to use Chain Rule by recursively calling these two functions on parts of an expresison. ## Functionality Our AutoDiff package is currently capable of calculating gradients for real valued functions of multiple variables. For functions with more than one variable, the package has the capability to return partial derivatives with respect to all variables that appear in the user's function expression. We support the following elementary math operators currently: * log (Natural log) * sin (sine function from trigonometry) * cos (cosine function from trigonometry) * tan (tangent function from trigonometry) * exp (exponentiation with base equal to Euler's Number $e$) Further, we overload the following Python math operators: * \_\_add\__ and \__radd\__ (allows addition of two DiffObj instances) * \__sub\__ and \__rsub\__ (allows subtraction between two DiffObj instances) * \__truediv\__ and \__rtruediv\__ (allows division between two DiffObj instances) * \__mul\__ and \__rmul\__ (allows multiplication between two DiffObj instances) * \__pow\__ and \__rpow\__ (allows exponentiation between two DiffObj instances) * \__neg\__ (unary operator for negation of a DiffObj instance) ## Class Structure We have implemented the following classes in our package: 1. Class DiffObj() 2. Class Variable(DiffObj) 3. Class Constant(DiffObj) 4. Class MathOps(DiffObj) These are described along with their Class Attributes and Class Methods below. ### 1. Class DiffObj() Any function for which a user wants to evaluate its value and gradient, will be represented by an instance of this class DiffObj, or by instances of classes which inherit from DiffObj (e.g. class Variable, class Constant etc.) A mathematical equivalent of a DiffObj object will be: * a constant such as $5.0$, which we have implemented via a Sub-class 'Constant' * a variable such as $x$, which we have implemented via a Sub-class 'Variable' * a mathematical expression such as $x^2 + sin(y)$. DiffObj enforces that each class which inherits from it, must implement two functions: CLASS FUNCTIONS ================== The functions get_val and get_der are exposed to the user, that is, a user of our package can call these functions. (1) get_val: This is used to evaluate the function represented by a DiffObj instance at a particular point. (2) get_der: This is used to evalate the gradient of the function repreesnted by a DiffObj instance, at a particular point. CLASS ATTRIBUTES ================ The attributes are not meant to be used by an end-user of our package, and they are meant for internal computation. name_list: A list of strings, where each item in the list represents the variables inside the function represented by this DiffObj. E.g. for f(x,y) = x + y, the name_list for a DiffObj representing f will be ['x', 'y'] (assuming the x.name_list = ['x'] and y.name_list = ['y']. operator: A single string representing the "operator". By default, DiffObj assumes that it represents two DiffObj's connected by an binary operator such as 'add'. However, we use the same definition for unary operators such as negation or cosine. operand_list: A list of two DiffObjs, which together with self.operator, comprise this instance of DiffObj. CLASS FUNCTIONS ================ get_val(self, value_dict) INPUT ====== value_dict: A dictionary, whose keys are strings representing variables which feature in the formula represented by this DiffObj. The values at those keys are the values at which the formula representing this DiffObj will be evaluated. E.g. For a DiffObj which represents the function f(x,y) = x + y, the value_dict argument may look like value_dict = {'x': 10, 'y': 5} OUTPUT ====== DOCTEST ====== >>> z=x+y >>> z.get_val({'x':1,'y':1}) 2 result: A floating point number, which equals the evaluation of the function represented by this DiffObj, at the variable values given by val_dict. get_der(self, value_dict, with_respect_to=None) INPUT ====== value_dict: A dictionary, whose keys are strings representing variables which feature in the formula represented by this DiffObj. The values at those keys are the values at which the gradient of formula representing this DiffObj will be evaluated. E.g. For a DiffObj which represents the function f(x,y) = x + y, the value_dict argument may look like value_dict = {'x': 10, 'y': 5} with_respect_to: A list of strings representing variables, with respect to which we want the gradient of this DifObj. By default, if this list is not provided, then the gradient with respect to all variables featuring in the DiffObj is returned. OUTPUT ====== result: A dictionary, whose keys are strings representing variables which feature in the formula represented by this DiffObj. The value associated withe each key is a floating point number which is the partial derivative of this DiffObj with respect to that variable. DOCTEST ====== >>> z=x+y >>> z.get_der({'x':0,'y':0}) {'y': 1, 'x': 1} Other class functions: These include the overloaded operators listed in the functionality section above. We have provided detailed documentation for these overloaded functions inside our code. ### 2. Class Variable(DiffObj) This subclass inherits from DiffObj, and is basically used for representing a variable such as x or y. All variables inside a function whose derivative and value a user wants to calculate, will be instances of the Variable class, which inherits from DiffObj and implements get_val and get_der CLASS ATTRIBUTES ================ var_name: A string, which is unique to this Variable instance. E.g. x = Variable('x') CLASS FUNCTIONS =============== This implements get_val and get_der, a description of which is provided in the Super-class DiffObj. ### 3. Class Constant(DiffObj) All constants inside a function whose derivative and value a user wants to calculate, will be instances of the Constant class, which inherits from DiffObj and implements get_val and get_der CLASS ATTRIBUTES ================ const_name: A string, which is unique to this Constant instance. const_val: An int or float number, which will be the value assigned to this instance. E.g. c = Constant('c', 10.0) CLASS FUNCTIONS =============== This implements get_val and get_der, a description of which is provided in the Super-class DiffObj. As expected, get_val simply returns self.const_val while get_der will return 0. ### 4. Class MathOps() This class inherits from the DiffObj class. It implements non-elementary unary functions including: sin, cos, tan, log, exp. INSTANTIATION =============== If a is of type DiffObj, then the invoking the constructor as follows will return an object b of type MathOps: b = MathOps.sin(a) CLASS ATTRIBUTES ================ The attributes are not meant to be used by an end-user of our package, and they are meant for internal computation. name_list: A list of strings, where each item in the list represents the variables inside the function represented by this DiffObj. E.g. for f(x,y) = x + y, the name_list for a DiffObj representing f will be ['x', 'y'] (assuming the x.name_list = ['x'] and y.name_list = ['y']. operator: A string, such as 'sin' or 'log', which represents one of the unary math operators implemented by this class. operand_list: A list of length 1 containing the DiffObj which the user has passed as an argument to one of the classmethods of MathOps. CLASS FUNCTIONS ================ Note: This class implements classmethods named 'sin', 'cos', 'tan', 'log' and 'exp', and these classmethods basically return an instance of type DiffObj, which supports get_val and get_der for functions like sin, log, cos, tan, exp. ## Core Data Structures There are two core data structures in our implementation: 1. **Lists**: The name_list (a list of strings) representing variable names, that is stored in every Diffobj instance to indicate the variables influencing that instance. Eg. for the DiffObj w, where w represents sin(x)+y, the name_list of Variable x is ['x'], the name_list of Variable y is ['y'] and the name_list of w is ['x','y']. 2. **Dictionaries**: The dictionary value_dict, an argument of DiffObj.get_der, containing names and values that indicate the point in space at which we need to compute derivative and evaluate an expression, for example in w.get_val(value_dict). We also use Dictionarie for storing partial derivatives with respect to variables. ## External Dependencies As of now we believe we will use the following third party libraries: 1. `Math` We may use `NumPy` when we extend our AD package. ## Dealing with Elementary Functions In our design, we have provided a Class called MathOps, which allows calculations related to elementary functions such as $sin$ and $log$. In line with our philosophy to make anything which is differentiable, an instance of `DiffObj`, out `Mathops` class provides certain classmethods, whose job is to wrap an expression of the form $sin(some\_diff\_obj)$ into another `DiffObj`, which supports the `get_val` and `get_der` functions which are crucial to our implementation of the Chain Rule. # Future Extensions We have planned the following enhancements to our AD package: 1. Future AD package should support Vector variables. Vector variables can easily be implemented as a list of Variables, but such implementation may be inefficient. The challenge for us is to use a better structure to represent Vectors and algorithms to calculate derivatives with respect to Vectors, without too much Python structure overhead. 2. Further support on matrices. We are aiming to implement Backpropogation. We will explore how to incorporate matrix operations into our code. 3. We aim to support a more natural coding experience. For example, as of now, the only way to use a constant in a function is to make the constant an instance of the Constant class. We are cognizant of the fact that the Constnat class is not really providing much functionality. We will aim to allow users to directly use constant numbers in their mathetmatical expressions.
github_jupyter
``` import json import pickle import pandas as pd import numpy as np import os from tqdm.notebook import tqdm os.chdir('../') from imports.metrics import multiclass_stats with open('config.json', 'r') as f: config = json.load(f)['sklearn'] amb = pd.read_csv(config['amb_data_path']) amb.head() feature_cols = ['touristy', 'hipster', 'romantic', 'divey', 'intimate', 'upscale'] from sklearn.model_selection import train_test_split X_train_files, X_test_files, y_train, y_test = train_test_split(amb.photo_id, amb[feature_cols], train_size = 0.9, random_state=420, stratify=amb[feature_cols]) COCO_INSTANCE_CATEGORY_NAMES = [ '__background__', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'N/A', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'N/A', 'backpack', 'umbrella', 'N/A', 'N/A', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'N/A', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'N/A', 'dining table', 'N/A', 'N/A', 'toilet', 'N/A', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'N/A', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush' ] with open(config['obj_feats_path'], 'rb') as io: objects = pickle.load(io) trf_features = np.load(config['trf_feats_path'], allow_pickle = True)['arr_0'][()] list(objects.items())[:2] vector_size = len(COCO_INSTANCE_CATEGORY_NAMES) binary_feature_vectors = {} for name, boxes in tqdm(objects.items()): confidence_vector = np.zeros(vector_size) counts_vector = np.zeros(vector_size) for box in boxes: if box: _, idx, confidence = box confidence_vector[idx] = max(confidence_vector[idx], confidence) counts_vector[idx] += 1 binary_feature_vectors[name[0]] = np.concatenate((confidence_vector, counts_vector)) all_vectors = np.array(list(binary_feature_vectors.values())) empty_columns = [] trans_arr = all_vectors.T for i in range(trans_arr.shape[0]): if np.all(trans_arr[i] == trans_arr[i][0]): empty_columns.append(i) for c in empty_columns[::-1]: if c < len(COCO_INSTANCE_CATEGORY_NAMES): del COCO_INSTANCE_CATEGORY_NAMES[c] all_vectors = np.delete(all_vectors, c, 1) names = list(binary_feature_vectors.keys()) features = { names[i]: np.concatenate((trf_features[names[i]], all_vectors[i])) for i in range(len(names)) } X_train, X_test = [], [] for filename in tqdm(X_train_files): X_train.append(features[filename]) for filename in tqdm(X_test_files): X_test.append(features[filename]) X_train = np.array(X_train) X_test = np.array(X_test) X_test.shape y_train = y_train.to_numpy(dtype='int') y_test = y_test.to_numpy(dtype='int') from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler from sklearn.multioutput import MultiOutputClassifier from sklearn.linear_model import LogisticRegression from sklearn.naive_bayes import BernoulliNB from sklearn.svm import SVC from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import classification_report, hamming_loss, f1_score, roc_auc_score lr_clf = make_pipeline(StandardScaler(), MultiOutputClassifier( LogisticRegression(max_iter=10000, random_state=42, class_weight='balanced')) ).fit(X_train, y_train) y_pred = lr_clf.predict(X_test) report, stats = multiclass_stats(y_test, y_pred) stats print(report) nb_clf = make_pipeline(StandardScaler(), MultiOutputClassifier( BernoulliNB()) ).fit(X_train, y_train) y_pred = nb_clf.predict(X_test) report, stats = multiclass_stats(y_test, y_pred) stats print(report) rf_clf = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=42) ).fit(X_train, y_train) y_pred = rf_clf.predict(X_test) report, stats = multiclass_stats(y_test, y_pred) stats print(report) ```
github_jupyter
# Extended Kalman Filter : X and Y as input ### Importing necessary modules ``` import math import numpy as np from numpy import genfromtxt import matplotlib.pyplot as plt ``` ### Importing csv files and reading the data ``` true_odo = genfromtxt('true_odometry.csv', delimiter=',') sen_odo = genfromtxt('sensor_odom.csv',delimiter=',') ``` ### Splitting the data into individual arrays ``` sen_pos_x, sen_pos_y = sen_odo[1:,1], sen_odo[1:,2] sen_pos_theta = sen_odo[1:,3] true_x, true_y, true_theta = true_odo[1:,1], true_odo[1:,2], true_odo[1:,3] v, w = true_odo[1:,4], true_odo[1:,5] time = sen_odo[1:,0] ``` ### Observation that we are making - Theta ``` z = np.c_[sen_pos_theta] ``` ### Defining Prediction Function ``` def Prediction(x_t, P_t, F_t, B_t, U_t, G_t, Q_t): x_t = F_t.dot(x_t) + B_t.dot(U_t) P_t = (G_t.dot(P_t).dot(G_t.T)) + Q_t return x_t, P_t ``` ### Defining Update Function ``` def Update(x_t, P_t, Z_t, R_t, H_t): S = np.linalg.inv( (H_t.dot(P_t).dot(H_t.T)) + R_t ) K = P_t.dot(H_t.T).dot(S) x_t = x_t + K.dot( Z_t - H_t.dot(x_t) ) P_t = P_t - K.dot(H_t).dot(P_t) return x_t, P_t ``` ### Defining various Matrix that will be used in the Filter ``` # Transition Matrix F_t = np.array([ [1, 0, 0], [0, 1, 0], [0, 0, 1] ]) # Initial Covariance State P_t = 0.005 * np.identity(3) # Process Covariance Q_t = 0.004 * np.identity(3) # Measurement Covariance R_t = np.array([ [0.24] ]) # Measurement Matrix H_t = np.array([ [0, 0, 1] ]) # Initial State x_t = np.array([ [sen_pos_x[0]], [sen_pos_y[0]], [sen_pos_theta[0]] ]) ``` ### Defining empty lists which will be used for plotting purposes later ``` kal_x, kal_y, kal_theta = [], [], [] ``` ## Kalman Filter _ main loop ``` for i in range(2113): if i > 0: dt = time[i] - time[i-1] else: dt = 0 # Jacobian Matrix - G G_t = np.array([ [1, 0, -v[i]*(math.sin(sen_pos_theta[i]))*dt], [0, 1, v[i]*(math.cos(sen_pos_theta[i]))*dt], [0, 0, 1] ]) # Input Transition Matrix - B B_t = np.array([ [dt * (math.cos(sen_pos_theta[i])), 0], [dt * (math.sin(sen_pos_theta[i])), 0], [0, dt] ]) # Input to the system - v and w ( velocity and turning rate ) U_t = np.array([ [v[i]], [w[i]] ]) # Prediction Step x_t, P_t = Prediction(x_t, P_t, F_t, B_t, U_t, G_t, Q_t) # Reshaping the measurement data Z_t = z[i].transpose() Z_t = Z_t.reshape(Z_t.shape[0], -1) # Update Step x_t, P_t = Update(x_t, P_t, Z_t, R_t, H_t) kal_x.append(x_t[0]) kal_y.append(x_t[1]) kal_theta.append(x_t[2]) print('*'*50) print('\n'," Final Filter State Matrix : \n", x_t,'\n') print('*'*50) ``` ## Plotting ### For Plotting Purposes ``` kal_x = np.concatenate(kal_x).ravel() kal_y = np.concatenate(kal_y).ravel() kal_theta = np.concatenate(kal_theta).ravel() plt.figure(1) plt.title('Estimated (Kalman) Pos X vs True Pos X', fontweight='bold') plt.plot(time,kal_x[:],'g--') plt.plot(time,true_x, linewidth=3) plt.figure(2) plt.title('Estimated (Kalman) Pos Y vs True Pos Y', fontweight='bold') plt.plot(time,kal_y[:],'g--') plt.plot(time,true_y, linewidth=3) plt.figure(3) plt.title('Estimated (Kalman) Theta vs True Theta', fontweight='bold') plt.plot(time,kal_theta[:],'o--') plt.plot(time,true_theta, linewidth=2) plt.figure(4) plt.title('Robot Position : Kalman vs True', fontweight='bold') plt.plot(kal_x,kal_y,'g--') plt.plot(true_x,true_y, linewidth=3) plt.show() ``` ### Statistically Comparing True and Filtered (Estimated) Data ``` std_k_x = np.std(kal_x) std_true_x = np.std(true_x) print('*'*10) print(" X co-ordinate") print(' Standard Deviation Kalman : ', std_k_x) print(' Standard Deviation True : ', std_true_x) mean_k_x = np.mean(kal_x) mean_true_x = np.mean(true_x) print(' Mean Kalman : ', mean_k_x) print(' Mean True : ', mean_true_x, '\n') std_k_y = np.std(kal_y) std_true_y = np.std(true_y) print('*'*10) print(" Y co-ordinate ") print(' Standard Deviation Kalman : ', std_k_y) print(' Standard Deviation True : ', std_true_y) mean_k_y = np.mean(kal_y) mean_true_y = np.mean(true_y) print(' Mean Kalman : ', mean_k_y) print(' Mean True : ', mean_true_y, '\n') std_k_theta = np.std(kal_theta) std_true_theta = np.std(true_theta) print('*'*10) print(" Theta ") print(' Standard Deviation Kalman : ', std_k_theta) print(' Standard Deviation True : ', std_true_theta) mean_k_theta = np.mean(kal_theta) mean_true_theta = np.mean(true_theta) print(' Mean Kalman : ', mean_k_theta) print(' Mean True : ', mean_true_theta, '\n') ```
github_jupyter
# Input of Mapping Pipeline ## Input Files In order to run the pipeline for a single cell library, you need to have 2 things: 1. FASTQ files generated by bcl2fastq - If the samplesheet used in bcl2fastq is made by yap, you don't need to make fastq dataframe about the fastq files, just provide the path is fine. - If the samplesheet is not made by yap, you need to make a FASTQ dataframe, see next step. 2. mapping_config.ini for mapping parameters ## FASTQ File Name Requirements - FASTQ file should be generated by SampleSheet made from previous step. Because the pipeline heavily relies on pre-defined FASTQ file name patterns to automatically parse uid, lane, read_type etc. - If the SampleSheet is not generated by previous step, FASTQ names may not support automatic parse. You need to provide the FASTQ dataframe by yourself. See the documentation of making FASTQ dataframe. The mapping summary part should also be done manually. ## mapping_config.ini ### What is mapping_config.ini file? - It's a place gather all adjustable parameters of mapping pipeline into a single file in [INI format](https://en.wikipedia.org/wiki/INI_file), so you don't need to put in 100 parameters in a shell command... - INI format is super simple: ``` ; comment start with semicolon [section1] key1=value1 key2=value2 [section2] key1=value1 key2=value2 ``` - Currently, the pipeline don't allow to change the sections and keys, so just change values according to your needs. ### How to prepare mapping_config.ini file? You can print out the default config file, save it to your own place and modify the value. ```shell # MODE should be in snmC, NOMe, snmCT, snmCT-NOMe, depending on the library type yap default-mapping-config --mode MODE ``` Here is an example of getting snmC-seq default mapping_config.ini file. You need to change the place holders to correct value, such as providing correct barcode version (V1, V2) or path to the reference genome ``` !yap default-mapping-config --mode mc ``` ### Mapping Modes of the config file yap support several different mapping mode for different experiments, which is controlled by mapping_config.ini as described bellow. #### snmC-seq2 (mc) - Normal snmC-seq2 library #### snmCT-seq (mct) - snmCT-seq library, each cell contain mixed reads from DNA and RNA - The major differences are: - Need to do STAR mapping - Filter bismark BAM file to get DNA reads - Filter STAR BAM file to get RNA reads #### snmC-seq + NOMe treatment (nome) - snmC-seq with NOMe treatment, where the GCH contain open chromatin information, HCN contain normal methylation information. - The major differences are: - Add additional one base in the context column of ALLC, so we can distinguish GpC sites with HpC sites #### snmCT-seq + NOMe treatment (mct-nome) - snmCT-seq with NOMe treatment, each cell contain mixed reads from DNA and RNA, and in DNA reads, the GCH contain open chromatin information, HCN contain normal methylation. - The major differences are: - Need to do STAR mapping - Filter bismark BAM file to get DNA reads - Filter STAR BAM file to get RNA reads - Add additional one base in the context column of ALLC, so we can distinguish GpC sites with HpC sites
github_jupyter
# Declarative 500-hPa Absolute Vorticity By: Kevin Goebbert This example uses the declarative syntax available through the MetPy package to allow a more convenient method for creating simple maps of atmospheric data. To plot aboslute vorticity, the data is scaled and reassigned to the xarray object for use in the declarative plotting interface. ``` from datetime import datetime import xarray as xr from metpy.plots import declarative from metpy.units import units # Set date for desired dataset dt = datetime(2012, 10, 31, 12) # Open dataset from NCEI ds = xr.open_dataset('https://www.ncei.noaa.gov/thredds/dodsC/' f'model-gfs-g4-anl-files-old/{dt:%Y%m}/{dt:%Y%m%d}/' f'gfsanl_4_{dt:%Y%m%d}_{dt:%H}00_000.grb2' ).metpy.parse_cf() # Subset Data to be just over CONUS ds_us = ds.sel(lon=slice(360-150, 360-50), lat=slice(65, 20)) ``` ## Contour Intervals Since absolute vorticity rarely goes below zero in the Northern Hemisphere, we can set up a list of contour levels that doesn't include values near but greater than zero. The following code yields a list containing: `[-8, -7, -6, -5, -4, -3, -2, -1, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45]` ``` # Absolute Vorticity colors # Use two different colormaps from matplotlib and combine into one color set clevs_500_avor = list(range(-8, 1, 1))+list(range(8, 46, 1)) ``` ## The Plot Using the declarative interface in MetPy to plot the 500-hPa Geopotential Heights and Absolute Vorticity. ``` # Set Contour Plot Parameters contour = declarative.ContourPlot() contour.data = ds_us contour.time = dt contour.field = 'Geopotential_height_isobaric' contour.level = 500 * units.hPa contour.linecolor = 'black' contour.linestyle = '-' contour.linewidth = 2 contour.clabels = True contour.contours = list(range(0, 20000, 60)) # Set Color-filled Contour Parameters cfill = declarative.FilledContourPlot() cfill.data = ds_us cfill.time = dt cfill.field = 'Absolute_vorticity_isobaric' cfill.level = 500 * units.hPa cfill.contours = clevs_500_avor cfill.colormap = 'PuOr_r' cfill.image_range = (-45, 45) cfill.colorbar = 'horizontal' cfill.scale = 1e5 # Panel for plot with Map features panel = declarative.MapPanel() panel.layout = (1, 1, 1) panel.area = (-124, -72, 24, 53) panel.projection = 'lcc' panel.layers = ['coastline', 'borders', 'states'] panel.title = (f'{cfill.level} GFS Geopotential Heights' f'and Absolute Vorticity at {dt}') panel.plots = [cfill, contour] # Bringing it all together pc = declarative.PanelContainer() pc.size = (15, 14) pc.panels = [panel] pc.show() ```
github_jupyter
##### Copyright 2019 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Normalizations <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/addons/tutorials/layers_normalizations"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/addons/blob/master/docs/tutorials/layers_normalizations.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/addons/blob/master/docs/tutorials/layers_normalizations.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/addons/docs/tutorials/layers_normalizations.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> ## Overview This notebook gives a brief introduction into the [normalization layers](https://github.com/tensorflow/addons/blob/master/tensorflow_addons/layers/normalizations.py) of TensorFlow. Currently supported layers are: * **Group Normalization** (TensorFlow Addons) * **Instance Normalization** (TensorFlow Addons) * **Layer Normalization** (TensorFlow Core) The basic idea behind these layers is to normalize the output of an activation layer to improve the convergence during training. In contrast to [batch normalization](https://keras.io/layers/normalization/) these normalizations do not work on batches, instead they normalize the activations of a single sample, making them suitable for recurrent neual networks as well. Typically the normalization is performed by calculating the mean and the standard deviation of a subgroup in your input tensor. It is also possible to apply a scale and an offset factor to this as well. $y_{i} = \frac{\gamma ( x_{i} - \mu )}{\sigma }+ \beta$ $ y$ : Output $x$ : Input $\gamma$ : Scale factor $\mu$: mean $\sigma$: standard deviation $\beta$: Offset factor The following image demonstrates the difference between these techniques. Each subplot shows an input tensor, with N as the batch axis, C as the channel axis, and (H, W) as the spatial axes (Height and Width of a picture for example). The pixels in blue are normalized by the same mean and variance, computed by aggregating the values of these pixels. ![](https://github.com/shaohua0116/Group-Normalization-Tensorflow/raw/master/figure/gn.png) Source: (https://arxiv.org/pdf/1803.08494.pdf) The weights gamma and beta are trainable in all normalization layers to compensate for the possible lost of representational ability. You can activate these factors by setting the `center` or the `scale` flag to `True`. Of course you can use `initializers`, `constraints` and `regularizer` for `beta` and `gamma` to tune these values during the training process. ## Setup ### Install Tensorflow 2.0 and Tensorflow-Addons ``` from __future__ import absolute_import, division, print_function try: %tensorflow_version 2.x except: pass import tensorflow as tf !pip install -q --no-deps tensorflow-addons~=0.6 import tensorflow_addons as tfa ``` ### Preparing Dataset ``` mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 ``` ## Group Normalization Tutorial ### Introduction Group Normalization(GN) divides the channels of your inputs into smaller sub groups and normalizes these values based on their mean and variance. Since GN works on a single example this technique is batchsize independent. GN experimentally scored closed to batch normalization in image classification tasks. It can be beneficial to use GN instead of Batch Normalization in case your overall batch_size is low, which would lead to bad performance of batch normalization ###Example Splitting 10 channels after a Conv2D layer into 5 subgroups in a standard "channels last" setting: ``` model = tf.keras.models.Sequential([ # Reshape into "channels last" setup. tf.keras.layers.Reshape((28,28,1), input_shape=(28,28)), tf.keras.layers.Conv2D(filters=10, kernel_size=(3,3),data_format="channels_last"), # Groupnorm Layer tfa.layers.GroupNormalization(groups=5, axis=3), tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model.fit(x_test, y_test) ``` ## Instance Normalization Tutorial ### Introduction Instance Normalization is special case of group normalization where the group size is the same size as the channel size (or the axis size). Experimental results show that instance normalization performs well on style transfer when replacing batch normalization. Recently, instance normalization has also been used as a replacement for batch normalization in GANs. ### Example Applying InstanceNormalization after a Conv2D Layer and using a uniformed initialized scale and offset factor. ``` model = tf.keras.models.Sequential([ # Reshape into "channels last" setup. tf.keras.layers.Reshape((28,28,1), input_shape=(28,28)), tf.keras.layers.Conv2D(filters=10, kernel_size=(3,3),data_format="channels_last"), # LayerNorm Layer tfa.layers.InstanceNormalization(axis=3, center=True, scale=True, beta_initializer="random_uniform", gamma_initializer="random_uniform"), tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model.fit(x_test, y_test) ``` ## Layer Normalization Tutorial ### Introduction Layer Normalization is special case of group normalization where the group size is 1. The mean and standard deviation is calculated from all activations of a single sample. Experimental results show that Layer normalization is well suited for Recurrent Neural Networks, since it works batchsize independt. ### Example Applying Layernormalization after a Conv2D Layer and using a scale and offset factor. ``` model = tf.keras.models.Sequential([ # Reshape into "channels last" setup. tf.keras.layers.Reshape((28,28,1), input_shape=(28,28)), tf.keras.layers.Conv2D(filters=10, kernel_size=(3,3),data_format="channels_last"), # LayerNorm Layer tf.keras.layers.LayerNormalization(axis=1 , center=True , scale=True), tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model.fit(x_test, y_test) ``` ## Literature [Layer norm](https://arxiv.org/pdf/1607.06450.pdf) [Instance norm](https://arxiv.org/pdf/1607.08022.pdf) [Group Norm](https://arxiv.org/pdf/1803.08494.pdf) [Complete Normalizations Overview](http://mlexplained.com/2018/11/30/an-overview-of-normalization-methods-in-deep-learning/)
github_jupyter
``` import pandas as pd import numpy as np import time import seaborn as sns import matplotlib.pyplot as plt from sklearn import preprocessing as pp from sklearn.model_selection import StratifiedKFold from sklearn.metrics import accuracy_score from sklearn import preprocessing import xgboost as xgb from sklearn.ensemble import BaggingClassifier import lightgbm as lgb from sklearn.naive_bayes import GaussianNB from sklearn import preprocessing as pp from sklearn.neighbors import KNeighborsClassifier from sklearn import tree from sklearn.model_selection import cross_val_score from sklearn.model_selection import cross_val_predict from statistics import mode from sklearn.model_selection import cross_val_score, cross_validate, train_test_split from sklearn.ensemble import RandomForestClassifier from sklearn.neighbors import KNeighborsClassifier import xgboost as xgb import lightgbm as lgb #Todas las librerías para los distintos algoritmos from sklearn.naive_bayes import GaussianNB from sklearn.naive_bayes import ComplementNB from sklearn.naive_bayes import BernoulliNB from sklearn.naive_bayes import MultinomialNB from sklearn.calibration import CalibratedClassifierCV from sklearn.svm import LinearSVC from sklearn.svm import OneClassSVM from sklearn.svm import SVC from sklearn.svm import NuSVC from sklearn.ensemble import RandomForestClassifier from sklearn.neighbors import KNeighborsClassifier import matplotlib.pyplot as plt import sklearn.metrics as metrics from sklearn.neural_network import MLPClassifier from sklearn.ensemble import BaggingClassifier import statistics from sklearn.preprocessing import LabelEncoder from sklearn.metrics import confusion_matrix from sklearn.metrics import classification_report from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.decomposition import PCA from sklearn.tree import DecisionTreeClassifier from pylab import rcParams from collections import Counter data_train= pd.read_csv("./datos/train.csv",na_values=["?"]) data_test= pd.read_csv("./datos/test.csv",na_values=["?"]) data_trainCopia = data_train.copy() data_testCopia = data_test.copy() Nombre = LabelEncoder().fit(pd.read_csv("./datos/nombre.csv").Nombre) Año = LabelEncoder().fit(pd.read_csv("./datos/ao.csv").Año) Ciudad = LabelEncoder().fit(pd.read_csv("./datos/ciudad.csv").Ciudad) Combustible = LabelEncoder().fit(pd.read_csv("./datos/combustible.csv").Combustible) Consumo = LabelEncoder().fit(pd.read_csv("./datos/consumo.csv").Consumo) Descuento = LabelEncoder().fit(pd.read_csv("./datos/descuento.csv").Descuento) Kilometros = LabelEncoder().fit(pd.read_csv("./datos/kilometros.csv").Kilometros) Mano = LabelEncoder().fit(pd.read_csv("./datos/mano.csv").Mano) Potencia = LabelEncoder().fit(pd.read_csv("./datos/potencia.csv").Potencia) Asientos = LabelEncoder().fit(pd.read_csv("./datos/asientos.csv").Asientos) Motor_CC=LabelEncoder().fit(pd.read_csv("./datos/motor_cc.csv").Motor_CC) data_trainCopia['Nombre']=data_trainCopia['Nombre'].fillna(mode(data_trainCopia['Nombre'])) data_trainCopia['Año']=data_trainCopia['Año'].fillna(mode(data_trainCopia['Año'])) data_trainCopia['Ciudad']=data_trainCopia['Ciudad'].fillna(mode(data_trainCopia['Ciudad'])) data_trainCopia['Kilometros']=data_trainCopia['Kilometros'].fillna(mode(data_trainCopia['Kilometros'])) data_trainCopia['Combustible']=data_trainCopia['Combustible'].fillna(mode(data_trainCopia['Combustible'])) data_trainCopia['Tipo_marchas']=data_trainCopia['Tipo_marchas'].fillna(mode(data_trainCopia['Tipo_marchas'])) data_trainCopia['Mano']=data_trainCopia['Mano'].fillna(mode(data_trainCopia['Mano'])) data_trainCopia['Consumo']=data_trainCopia['Consumo'].fillna(mode(data_trainCopia['Consumo'])) data_trainCopia['Motor_CC']=data_trainCopia['Motor_CC'].fillna(mode(data_trainCopia['Motor_CC'])) data_trainCopia['Potencia']=data_trainCopia['Potencia'].fillna(mode(data_trainCopia['Potencia'])) data_trainCopia['Asientos']=data_trainCopia['Asientos'].fillna(mode(data_trainCopia['Asientos'])) data_trainCopia['Descuento']=data_trainCopia['Descuento'].fillna(mode(data_trainCopia['Descuento'])) data_testCopia['Nombre']=data_testCopia['Nombre'].fillna(mode(data_testCopia['Nombre'])) data_testCopia['Año']=data_testCopia['Año'].fillna(mode(data_testCopia['Año'])) data_testCopia['Ciudad']=data_testCopia['Ciudad'].fillna(mode(data_testCopia['Ciudad'])) data_testCopia['Kilometros']=data_testCopia['Kilometros'].fillna(mode(data_testCopia['Kilometros'])) data_testCopia['Combustible']=data_testCopia['Combustible'].fillna(mode(data_testCopia['Combustible'])) data_testCopia['Tipo_marchas']=data_testCopia['Tipo_marchas'].fillna(mode(data_testCopia['Tipo_marchas'])) data_testCopia['Mano']=data_testCopia['Mano'].fillna(mode(data_testCopia['Mano'])) data_testCopia['Consumo']=data_testCopia['Consumo'].fillna(mode(data_testCopia['Consumo'])) data_testCopia['Motor_CC']=data_testCopia['Motor_CC'].fillna(mode(data_testCopia['Motor_CC'])) data_testCopia['Potencia']=data_testCopia['Potencia'].fillna(mode(data_testCopia['Potencia'])) data_testCopia['Asientos']=data_testCopia['Asientos'].fillna(mode(data_testCopia['Asientos'])) data_testCopia['Descuento']=data_testCopia['Descuento'].fillna(mode(data_testCopia['Descuento'])) #Eliminamos las columnas que no necesitamos data_trainCopia=data_trainCopia.drop(['Descuento'], axis=1) data_trainCopia=data_trainCopia.drop(['id'], axis=1) data_testCopia=data_testCopia.drop(['Descuento'], axis=1) data_testCopia=data_testCopia.drop(['id'], axis=1) #Eliminamos los nan de los ids data_trainCopia=data_trainCopia.dropna() data_testCopia=data_testCopia.dropna() #Codificación de las filas data_trainCopia.Nombre = Nombre.transform(data_trainCopia.Nombre) data_trainCopia.Año = Año.transform(data_trainCopia.Año) data_trainCopia.Ciudad = Ciudad.transform(data_trainCopia.Ciudad) data_trainCopia.Combustible = Combustible.transform(data_trainCopia.Combustible) data_trainCopia.Potencia = Potencia.transform(data_trainCopia.Potencia) data_trainCopia.Consumo = Consumo.transform(data_trainCopia.Consumo) data_trainCopia.Kilometros = Kilometros.transform(data_trainCopia.Kilometros) data_trainCopia.Mano = Mano.transform(data_trainCopia.Mano) data_trainCopia.Motor_CC = Motor_CC.transform(data_trainCopia.Motor_CC) data_trainCopia.Asientos = Asientos.transform(data_trainCopia.Asientos) data_trainCopia.Tipo_marchas = LabelEncoder().fit_transform(data_trainCopia.Tipo_marchas) #------------------------------------------------------------------------------------------- data_testCopia.Nombre = Nombre.transform(data_testCopia.Nombre) data_testCopia.Año = Año.transform(data_testCopia.Año) data_testCopia.Ciudad = Ciudad.transform(data_testCopia.Ciudad) data_testCopia.Combustible = Combustible.transform(data_testCopia.Combustible) data_testCopia.Potencia = Potencia.transform(data_testCopia.Potencia) data_testCopia.Consumo = Consumo.transform(data_testCopia.Consumo) data_testCopia.Kilometros = Kilometros.transform(data_testCopia.Kilometros) data_testCopia.Mano = Mano.transform(data_testCopia.Mano) data_testCopia.Asientos = Asientos.transform(data_testCopia.Asientos) data_testCopia.Motor_CC = Motor_CC.transform(data_testCopia.Motor_CC) data_testCopia.Tipo_marchas = LabelEncoder().fit_transform(data_testCopia.Tipo_marchas) #Obtener el resto de los atributos target_train=data_trainCopia['Precio_cat'] data_trainCopia=data_trainCopia.drop(['Precio_cat'], axis=1) atributos=data_trainCopia[['Nombre','Ciudad','Año','Kilometros','Combustible','Tipo_marchas','Mano','Consumo','Motor_CC','Potencia']] target = pd.read_csv('./datos/precio_cat.csv') from imblearn.under_sampling import RandomUnderSampler rus = RandomUnderSampler(random_state=40, sampling_strategy='majority', replacement=False) Xu, yu = rus.fit_resample(data_trainCopia, target_train) Counter(yu) ax=sns.distplot(yu) from imblearn.over_sampling import SMOTE Xo, yo = SMOTE().fit_resample(data_trainCopia, target_train) Counter(yo) ax=sns.distplot(yo) lgbm = lgb.LGBMClassifier(objective='regression_l1',n_estimators=200,n_jobs=2, num_leaves=40, learning_rate=0.05) bagEntrenado = bagging.fit(Xo, yo) preBaging = bagEntrenado.predict(data_testCopia) scores = cross_val_score(bagEntrenado, atributos, target_train, cv=5, scoring='accuracy') print("Score Validacion Cruzada", np.mean(scores)*100) lgbm = lgb.LGBMClassifier(objective='multiclassova',n_estimators=200,n_jobs=-1) lgbmEntrenado = lgbm.fit(Xu, yu) preLgb = lgbmEntrenado.predict(data_testCopia) scores = cross_val_score(lgbmEntrenado, atributos, target_train, cv=5, scoring='accuracy') print("Score Validacion Cruzada", np.mean(scores)*100) lgbm = lgb.LGBMClassifier(objective='regression_l1',n_estimators=200,n_jobs=2, num_leaves=40, learning_rate=0.05) bagEntrenado = bagging.fit(Xu, yu) preBaging = bagEntrenado.predict(data_testCopia) scores = cross_val_score(bagEntrenado, atributos, target_train, cv=5, scoring='accuracy') print("Score Validacion Cruzada", np.mean(scores)*100) lgbm = lgb.LGBMClassifier(objective='regression_l1',n_estimators=200,n_jobs=2, num_leaves=40, learning_rate=0.09) bagEntrenado = bagging.fit(Xu, yu) preBaging = bagEntrenado.predict(data_testCopia) scores = cross_val_score(bagEntrenado, atributos, target_train, cv=5, scoring='accuracy') print("Score Validacion Cruzada", np.mean(scores)*100) dfAux = pd.DataFrame({'id':data_test['id']}) dfAux.set_index('id', inplace=True) dfFinal = pd.DataFrame({'id': data_test['id'], 'Precio_cat': preBaging}, columns=['id', 'Precio_cat']) dfFinal.set_index('id', inplace=True) dfFinal.to_csv("./soluciones/solucion8281UnderSamplinglgbmasbagging.csv") ```
github_jupyter
<H1>MADATORY PYTHON LIBRARIES</H1> ``` %matplotlib inline import xarray import os import pandas as pd import matplotlib.pyplot as plt from matplotlib import colors from mpl_toolkits.mplot3d import Axes3D import numpy as np from shapely.geometry import box, Point plt.rcParams.update({'font.size': 15}) ``` <h1>IN SITU PROFILERS</h1> In Situ 'profilers' comprehends a wide range of devices able to drift with the currents as well as submerge every now and then in the watter column reporting certain parameters along its trajectory. In Situ profilers produce only PR data (profiles) and its platform data type is PF. <h1>PLOTTING PROFILES</h1> Imagine you have downloaded some <i>_PR_ (profile)</i> dataset from In Situ profilers (see how to download files from a certain [platform data source](https://github.com/CopernicusMarineInsitu/INSTACTraining-Phase2UPDATE/blob/master/PythonNotebooks/In_Situ_data_download_by_platform_data_source.ipynb) or [platform category](https://github.com/CopernicusMarineInsitu/INSTACTraining-Phase2UPDATE/blob/master/PythonNotebooks/In_Situ_data_download_by_platform_category.ipynb)) like: [MO_PR_PF_6901885.nc](ftp://nrt.cmems-du.eu/Core/INSITU_MED_NRT_OBSERVATIONS_013_035/history/profiler-glider/MO_PR_PF_6901885.nc) ``` dataset = 'MO_PR_PF_6901885.nc' ``` Let's have a look to its content: ``` full_path2file = os.getcwd()+'/'+dataset #default to current directory print('path2file: %s'%(full_path2file)) ds = xarray.open_dataset(dataset) ``` ds contains all the information about the dataset (relevant metadata, variables, dimensions etc): ``` ds ``` these attributes can be accesed individually; i.e: ``` ds.variables.keys() ``` Every of the above parameters varies along certain dimensions (within parenthesis when checking the parameter metadata): ``` ds['TEMP'] ``` Every of the above variables have a corresponding '_QC' variable, which is the variable that contains the data quality flags: ``` ds['TEMP_QC'] ``` This '_QC' variable will therefore guide us when when working with the parameter data to distinguish good from bad data: ``` pd.DataFrame(data=ds['TEMP_QC'].attrs['flag_values'], index = ds['TEMP_QC'].attrs['flag_meanings'].split(' '), columns = ['quality flag']) ``` This way, we will be able to work with good data by selecting only those values with QC flag 1: ``` good_data = ds['TEMP'].where(ds['TEMP_QC'] == 1) ``` Now, let's see how many profiles has been taken by this profiler: ``` cmap = plt.cm.Spectral_r norm = colors.Normalize(vmin=good_data.min().values.tolist(), vmax=good_data.max().values.tolist()) fig = plt.figure(figsize=(10,10)) ax = fig.add_subplot(111, projection='3d') for a in range(0, len(ds['TIME'])): lat = ds['LATITUDE'].values.tolist()[a] lon = ds['LONGITUDE'].values.tolist()[a] plt.scatter(lon*np.ones(len(ds['PRES'].values.tolist()[a])),lat*np.ones(len(ds['PRES'].values.tolist()[a])), zs=-ds['PRES'][a,:], zdir='z', s=20, c=good_data[a,:], edgecolor='None', cmap=cmap, norm=norm) cbar = plt.colorbar(orientation="horizontal", pad=0.02) cbar.ax.set_xlabel('TEMP') ax.set_title(str(a+1)+' temperature profiles from '+ ds.id, y=1.08) ax.set_zlabel('depth',labelpad=20,rotation=90) ax.set_ylabel('latitude',labelpad=20) ax.set_xlabel('longitude',labelpad=20) ``` <h1>PLOTTING AVAILABLE PROFILES IN A CERTAIN TIME RANGE</h1> ``` subset = ds.sel(TIME=slice('2015-01-01', '2015-12-31')) subset_good_data = subset['TEMP'].where(ds['TEMP_QC'] == 1) cmap = plt.cm.Spectral_r norm = colors.Normalize(vmin=subset_good_data.min().values.tolist(), vmax=subset_good_data.max().values.tolist()) fig = plt.figure(figsize=(10,10)) ax = fig.add_subplot(111, projection='3d') for a in range(0, len(subset['TIME'])): lat = subset['LATITUDE'].values.tolist()[a] lon = subset['LONGITUDE'].values.tolist()[a] plt.scatter(lon*np.ones(len(subset['PRES'].values.tolist()[a])),lat*np.ones(len(subset['PRES'].values.tolist()[a])), zs=-subset['PRES'][a,:], zdir='z', s=20, c=subset_good_data[a,:], edgecolor='None', cmap=cmap, norm=norm) cbar = plt.colorbar(orientation="horizontal", pad=0.02) cbar.ax.set_xlabel('TEMP') ax.set_title(str(a+1)+' temperature profiles from '+ ds.id, y=1.08) ax.set_zlabel('depth',labelpad=20,rotation=90) ax.set_ylabel('latitude',labelpad=20) ax.set_xlabel('longitude',labelpad=20) ``` <h1>PLOTTING AVAILABLE PROFILES IN A CERTAIN AREA</h1> ``` targeted_lon_min = 24 targeted_lon_max = 26 targeted_lat_min = 35 targeted_lat_max = 38 targeted_area = box(targeted_lon_min, targeted_lat_min, targeted_lon_max, targeted_lat_max) cmap = plt.cm.Spectral_r norm = colors.Normalize(vmin=good_data.min().values.tolist(), vmax=good_data.max().values.tolist()) fig = plt.figure(figsize=(10,10)) ax = fig.add_subplot(111, projection='3d') b=0 for a in range(0, len(ds['TIME'])): lat = ds['LATITUDE'].values.tolist()[a] lon = ds['LONGITUDE'].values.tolist()[a] xy_point = Point(lon,lat) # = Point(x,y) if targeted_area.contains(xy_point): b = b+1 plt.scatter(lon*np.ones(len(ds['PRES'].values.tolist()[a])),lat*np.ones(len(ds['PRES'].values.tolist()[a])), zs=-ds['PRES'][a,:], zdir='z', s=20, c=good_data[a,:], edgecolor='None', cmap=cmap, norm=norm) cbar = plt.colorbar(orientation="horizontal", pad=0.02) cbar.ax.set_xlabel('TEMP') ax.set_title(str(b)+' temperature profiles from '+ ds.id, y=1.08) ax.set_zlabel('depth',labelpad=20,rotation=90) ax.set_ylabel('latitude',labelpad=20) ax.set_xlabel('longitude',labelpad=20) ```
github_jupyter
# Chapters with only "frequent" words Task: find the chapters without more than 20 rare words, where a rare word has a frequency (as lexeme) of less than 70. A question posed by Oliver Glanz. ``` %load_ext autoreload %autoreload 2 import os from tf.fabric import Fabric from tf.app import use from tf.lib import writeSets, readSets TF = Fabric(modules="etcbc/bhsa/tf/c") api = TF.load("book freq_lex", silent=True) A = use("bhsa", api=api, hoist=globals()) FREQ = 70 AMOUNT = 20 ``` ## Query A straightforward query is: ``` query = f""" chapter /without/ word freq_lex<{FREQ} < word freq_lex<{FREQ} < word freq_lex<{FREQ} < word freq_lex<{FREQ} < word freq_lex<{FREQ} < word freq_lex<{FREQ} < word freq_lex<{FREQ} < word freq_lex<{FREQ} < word freq_lex<{FREQ} < word freq_lex<{FREQ} < word freq_lex<{FREQ} < word freq_lex<{FREQ} < word freq_lex<{FREQ} < word freq_lex<{FREQ} < word freq_lex<{FREQ} < word freq_lex<{FREQ} < word freq_lex<{FREQ} < word freq_lex<{FREQ} < word freq_lex<{FREQ} < word freq_lex<{FREQ} /-/ """ ``` Two problems with this query: * it is very inelegant * it does not perform, in fact, you cannot wait for it. So, better not search with this one. ``` # indent(reset=True) # info('start query') # results = S.search(query, limit=1) # info('end query') # len(results) ``` # By hand On the other hand, with a bit of hand coding it is very easy, and almost instantaneous: ``` results = [] allChapters = F.otype.s("chapter") for chapter in allChapters: if ( len([word for word in L.d(chapter, otype="word") if F.freq_lex.v(word) < FREQ]) < AMOUNT ): results.append(chapter) print(f"{len(results)} chapters out of {len(allChapters)}") resultsByBook = dict() for chapter in results: (bk, ch, vs) = T.sectionFromNode(chapter) resultsByBook.setdefault(bk, []).append(ch) for (bk, chps) in resultsByBook.items(): print("{} {}".format(bk, ", ".join(str(c) for c in chps))) ``` # Custom sets Once you have these chapters, you can put them in a set and use them in queries. We show how to query results as far as they occur in an "ordinary" chapter. First we search for a phenomenon in all chapters. The phenomenon is a clause with a subject consisting of a single noun in the plural and a verb in the plural. ``` sets = dict(ochapter=set(results)) query1 = """ verse clause phrase function=Pred word pdp=verb nu=sg phrase function=Subj =: word pdp=subs nu=pl := """ results1 = A.search(query1) A.table(results1, start=1, end=10) ``` Now we want to restrict results to ordinary chapters: ``` query2 = """ ochapter verse clause phrase function=Pred word pdp=verb nu=sg phrase function=Subj =: word pdp=subs nu=pl := """ ``` Note that we use the name of a set here: `ochapter`. It is not a known node type in the BHSA, so we have to tell it what it means. We do that by passing a dictionary of custom sets. The keys are the names of the sets, which are the values. Then we may use those keys in queries, everywhere where a node type is expected. ``` results2 = A.search(query2, sets=sets) A.table(results2) ``` ## Custom sets in the browser We save the sets in a file. But before we do so, we also want to save all ordinary verses in a set, and all ordinary words. ``` queryV = f""" verse /without/ word freq_lex<{FREQ} /-/ """ resultsV = A.search(queryV, shallow=True) sets["overse"] = resultsV sets["oword"] = {w for w in F.otype.s("word") if F.freq_lex.v(w) >= FREQ} SETS_FILE = os.path.expanduser("~/Downloads/ordinary.set") writeSets(sets, SETS_FILE) testSets = readSets(SETS_FILE) for s in sorted(testSets): elems = len(testSets[s]) oelems = len(sets[s]) print(f"{s} with {elems} nb {elems - oelems}") ``` # Appendix: investigation Let's investigate the number of ordinary chapters with shifting definitions of ordinary ``` allChapters = F.otype.s("chapter") longestChapter = max(len(L.d(chapter, otype="word")) for chapter in allChapters) print(f"There are {len(allChapters)} chapters, the longest is {longestChapter} words") def getOrdinary(freq, amount): results = [] for chapter in allChapters: if ( len( [ word for word in L.d(chapter, otype="word") if F.freq_lex.v(word) < freq ] ) < amount ): results.append(chapter) return results def overview(freq): for amount in range(20, 1700, 10): results = getOrdinary(freq, amount) print( f"for freq={freq:>3} and amount={amount:>4}: {len(results):>4} ordinary chapters" ) if len(results) >= len(allChapters): break for freq in (40, 70, 100): overview(freq) ```
github_jupyter
# Creating a Sampled Dataset **Learning Objectives** - Sample the natality dataset to create train/eval/test sets - Preprocess the data in Pandas dataframe ## Introduction In this notebook we'll read data from BigQuery into our notebook to preprocess the data within a Pandas dataframe. ``` PROJECT = "cloud-training-demos" # Replace with your PROJECT BUCKET = "cloud-training-bucket" # Replace with your BUCKET REGION = "us-central1" # Choose an available region for Cloud MLE TFVERSION = "1.14" # TF version for CMLE to use import os os.environ["BUCKET"] = BUCKET os.environ["PROJECT"] = PROJECT os.environ["REGION"] = REGION os.environ["TFVERSION"] = TFVERSION %%bash if ! gsutil ls | grep -q gs://${BUCKET}/; then gsutil mb -l ${REGION} gs://${BUCKET} fi ``` ## Create ML datasets by sampling using BigQuery We'll begin by sampling the BigQuery data to create smaller datasets. ``` # Create SQL query using natality data after the year 2000 query_string = """ WITH CTE_hash_cols_fixed AS ( SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks, year, month, CASE WHEN day IS NULL AND wday IS NULL THEN 0 ELSE CASE WHEN day IS NULL THEN wday ELSE wday END END AS date, IFNULL(state, "Unknown") AS state, IFNULL(mother_birth_state, "Unknown") AS mother_birth_state FROM publicdata.samples.natality WHERE year > 2000) SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks, ABS(FARM_FINGERPRINT(CONCAT(CAST(year AS STRING), CAST(month AS STRING), CAST(date AS STRING), CAST(state AS STRING), CAST(mother_birth_state AS STRING)))) AS hashvalues FROM CTE_hash_cols_fixed """ ``` There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values are. We'll call BigQuery but group by the hashcolumn and see the number of records for each group. This will enable us to get the correct train/eval/test percentages ``` from google.cloud import bigquery bq = bigquery.Client(project = PROJECT) df = bq.query("SELECT hashvalues, COUNT(weight_pounds) AS num_babies FROM (" + query_string + ") GROUP BY hashvalues").to_dataframe() print("There are {} unique hashvalues.".format(len(df))) df.head() ``` We can make a query to check if our bucketing values result in the correct sizes of each of our dataset splits and then adjust accordingly ``` sampling_percentages_query = """ WITH -- Get label, features, and column that we are going to use to split into buckets on CTE_hash_cols_fixed AS ( SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks, year, month, CASE WHEN day IS NULL AND wday IS NULL THEN 0 ELSE CASE WHEN day IS NULL THEN wday ELSE wday END END AS date, IFNULL(state, "Unknown") AS state, IFNULL(mother_birth_state, "Unknown") AS mother_birth_state FROM publicdata.samples.natality WHERE year > 2000), CTE_data AS ( SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks, ABS(FARM_FINGERPRINT(CONCAT(CAST(year AS STRING), CAST(month AS STRING), CAST(date AS STRING), CAST(state AS STRING), CAST(mother_birth_state AS STRING)))) AS hashvalues FROM CTE_hash_cols_fixed), -- Get the counts of each of the unique hashs of our splitting column CTE_first_bucketing AS ( SELECT hashvalues, COUNT(*) AS num_records FROM CTE_data GROUP BY hashvalues ), -- Get the number of records in each of the hash buckets CTE_second_bucketing AS ( SELECT MOD(hashvalues, {0}) AS bucket_index, SUM(num_records) AS num_records FROM CTE_first_bucketing GROUP BY MOD(hashvalues, {0})), -- Calculate the overall percentages CTE_percentages AS ( SELECT bucket_index, num_records, CAST(num_records AS FLOAT64) / ( SELECT SUM(num_records) FROM CTE_second_bucketing) AS percent_records FROM CTE_second_bucketing ), -- Choose which of the hash buckets will be used for training and pull in their statistics CTE_train AS ( SELECT *, "train" AS dataset_name FROM CTE_percentages WHERE bucket_index >= 0 AND bucket_index < {1}), -- Choose which of the hash buckets will be used for validation and pull in their statistics CTE_eval AS ( SELECT *, "eval" AS dataset_name FROM CTE_percentages WHERE bucket_index >= {1} AND bucket_index < {2}), -- Choose which of the hash buckets will be used for testing and pull in their statistics CTE_test AS ( SELECT *, "test" AS dataset_name FROM CTE_percentages WHERE bucket_index >= {2} AND bucket_index < {0}), -- Union the training, validation, and testing dataset statistics CTE_union AS ( SELECT 0 AS dataset_id, * FROM CTE_train UNION ALL SELECT 1 AS dataset_id, * FROM CTE_eval UNION ALL SELECT 2 AS dataset_id, * FROM CTE_test ), -- Show final splitting and associated statistics CTE_split AS ( SELECT dataset_id, dataset_name, SUM(num_records) AS num_records, SUM(percent_records) AS percent_records FROM CTE_union GROUP BY dataset_id, dataset_name ) SELECT * FROM CTE_split ORDER BY dataset_id """ modulo_divisor = 100 train_percent = 80.0 eval_percent = 10.0 train_buckets = int(modulo_divisor * train_percent / 100.0) eval_buckets = int(modulo_divisor * eval_percent / 100.0) df = bq.query(sampling_percentages_query.format(modulo_divisor, train_buckets, train_buckets + eval_buckets)).to_dataframe() df.head() ``` Here's a way to get a well-distributed portion of the data in such a way that the train/eval/test sets do not overlap. ``` # Added every_n so that we can now subsample from each of the hash values to get approximately the record counts we want every_n = 500 train_query = "SELECT * FROM ({0}) WHERE MOD(hashvalues, {1} * 100) < 80".format(query_string, every_n) eval_query = "SELECT * FROM ({0}) WHERE MOD(hashvalues, {1} * 100) >= 80 AND MOD(hashvalues, {1} * 100) < 90".format(query_string, every_n) test_query = "SELECT * FROM ({0}) WHERE MOD(hashvalues, {1} * 100) >= 90 AND MOD(hashvalues, {1} * 100) < 100".format(query_string, every_n) train_df = bq.query(train_query).to_dataframe() eval_df = bq.query(eval_query).to_dataframe() test_df = bq.query(test_query).to_dataframe() print("There are {} examples in the train dataset.".format(len(train_df))) print("There are {} examples in the validation dataset.".format(len(eval_df))) print("There are {} examples in the test dataset.".format(len(test_df))) ``` ## Preprocess data using Pandas We'll perform a few preprocessing steps to the data in our dataset. Let's add extra rows to simulate the lack of ultrasound. That is we'll duplicate some rows and make the `is_male` field be `Unknown`. Also, if there is more than child we'll change the `plurality` to `Multiple(2+)`. While we're at it, We'll also change the plurality column to be a string. We'll perform these operations below. Let's start by examining the training dataset as is. ``` train_df.head() ``` Also, notice that there are some very important numeric fields that are missing in some rows (the count in Pandas doesn't count missing data) ``` train_df.describe() ``` It is always crucial to clean raw data before using in machine learning, so we have a preprocessing step. We'll define a `preprocess` function below. Note that the mother's age is an input to our model so users will have to provide the mother's age; otherwise, our service won't work. The features we use for our model were chosen because they are such good predictors and because they are easy enough to collect. ``` import pandas as pd def preprocess(df): # Clean up data # Remove what we don"t want to use for training df = df[df.weight_pounds > 0] df = df[df.mother_age > 0] df = df[df.gestation_weeks > 0] df = df[df.plurality > 0] # Modify plurality field to be a string twins_etc = dict(zip([1,2,3,4,5], ["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"])) df["plurality"].replace(twins_etc, inplace = True) # Now create extra rows to simulate lack of ultrasound no_ultrasound = df.copy(deep = True) no_ultrasound.loc[no_ultrasound["plurality"] != "Single(1)", "plurality"] = "Multiple(2+)" no_ultrasound["is_male"] = "Unknown" # Concatenate both datasets together and shuffle return pd.concat([df, no_ultrasound]).sample(frac=1).reset_index(drop=True) ``` Let's process the train/eval/test set and see a small sample of the training data after our preprocessing: ``` train_df = preprocess(train_df) eval_df = preprocess(eval_df) test_df = preprocess(test_df) train_df.head() train_df.tail() ``` Let's look again at a summary of the dataset. Note that we only see numeric columns, so `plurality` does not show up. ``` train_df.describe() ``` ## Write to .csv files In the final versions, we want to read from files, not Pandas dataframes. So, we write the Pandas dataframes out as csv files. Using csv files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers. ``` columns = "weight_pounds,is_male,mother_age,plurality,gestation_weeks".split(',') train_df.to_csv(path_or_buf = "train.csv", columns = columns, header = False, index = False) eval_df.to_csv(path_or_buf = "eval.csv", columns = columns, header = False, index = False) test_df.to_csv(path_or_buf = "test.csv", columns = columns, header = False, index = False) %%bash wc -l *.csv %%bash head *.csv %%bash tail *.csv ``` Copyright 2017-2018 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
github_jupyter
# Parallel Experimentation with BERT on AzureML ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/nlp/examples/sentence_similarity/bert_senteval.png) [SentEval](https://github.com/facebookresearch/SentEval) is a widely used benchmarking tool developed by Facebook Research for evaluating general-purpose sentence embeddings. It provides a simple interface for evaluating embeddings on up to 17 supported downstream tasks (such as sentiment classification, natural language inference, semantic similarity, etc.) Due to the fact that different BERT layers capture different information, and that the choice of pooling layer and pooling strategy for the encoding is highly dependent on the final finetuning task, we use SentEval to evaluate different combinations of these encoding parameters on the STSBenchmark dataset. In this notebook, we aim to show an example of * running SentEval experiments with BERT encodings * running parallel jobs on AzureML compute targets for faster experimentation (extracting sequence encodings from BERT with 110M parameters is computationally expensive, even without finetuning. Each experiment could take an hour or more, depending on the specs of the machine, so running multiple experiments sequentially can quickly add up) ### 00 Global Settings ``` import os import sys import pickle import shutil import itertools import glob import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import scrapbook as sb import azureml from azureml.core import Experiment from azureml.data.data_reference import DataReference from azureml.train.dnn import PyTorch from azureml.widgets import RunDetails sys.path.append("../../") from utils_nlp.azureml.azureml_utils import get_or_create_workspace, get_or_create_amlcompute from utils_nlp.models.bert.common import Language, Tokenizer from utils_nlp.models.bert.sequence_encoding import BERTSentenceEncoder, PoolingStrategy from utils_nlp.eval.senteval import SentEvalConfig %matplotlib inline print("System version: {}".format(sys.version)) print("AzureML version: {}".format(azureml.core.VERSION)) # azureml config subscription_id = "YOUR_SUBSCRIPTION_ID" resource_group = "YOUR_RESOURCE_GROUP_NAME" workspace_name = "YOUR_WORKSPACE_NAME" workspace_region = "YOUR_WORKSPACE_REGION" # path config CACHE_DIR = "./temp" LOCAL_UTILS = "../../utils_nlp" LOCAL_SENTEVAL = "../../utils_nlp/eval/SentEval" EXPERIMENT_NAME = "NLP-SS-bert" CLUSTER_NAME = "eval-gpu" MAX_NODES = None # we scale the number of nodes in the cluster automatically os.makedirs(CACHE_DIR, exist_ok=True) ``` We evaluate 768-dimensional encodings from BERT with each combination of 12 BERT layers and 2 pooling strategies (mean and max) for a total of 24 experiments. To run a smaller number of experiments or customize the pooling layers/strategies of interest, edit `EXP_PARAMS`. ``` MODEL_PARAMS = { "num_gpus": 1, "language": Language.ENGLISH, "to_lower": True, "max_len": 128, "cache_dir": CACHE_DIR } SENTEVAL_PARAMS = { "usepytorch": True, "batch_size": 128, "transfer_tasks": ["STSBenchmark"] } EXP_PARAMS = { "layer_index": range(12), "pooling_strategy": [PoolingStrategy.MEAN, PoolingStrategy.MAX], } ``` ### 01 Set up AzureML resources We set up the following AzureML resources for this example: * A [Workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace), a centralized hub for all the artifacts you create when you use Azure Machine Learning service * An [Experiment](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.experiment.experiment?view=azure-ml-py), which acts a container for trials or model runs * A [Datastore](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data), a compute location-independent abstraction of data in Azure storage accounts The following cell looks to set up the connection to your [Azure Machine Learning service Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace). You can choose to connect to an existing workspace or create a new one. **To access an existing workspace:** 1. If you have a `config.json` file, you do not need to provide the workspace information; you will only need to update the `config_path` variable that is defined above which contains the file. 2. Otherwise, you will need to supply the following: * The name of your workspace * Your subscription id * The resource group name **To create a new workspace:** Set the following information: * A name for your workspace * Your subscription id * The resource group name * [Azure region](https://azure.microsoft.com/en-us/global-infrastructure/regions/) to create the workspace in, such as `eastus2`. This will automatically create a new resource group for you in the region provided if a resource group with the name given does not already exist. ``` ws = get_or_create_workspace( subscription_id=subscription_id, resource_group=resource_group, workspace_name=workspace_name, workspace_region=workspace_region, ) exp = Experiment(workspace=ws, name=EXPERIMENT_NAME) ds = ws.get_default_datastore() ``` ### 02 Set up SentEval Run the bash script to download the data for auxiliary transfer tasks. ``` data_path = os.path.join(LOCAL_SENTEVAL, "data/downstream") data_script = "get_transfer_data.bash" tokenizer_name = "tokenizer.sed" !cd $data_path && pwd && chmod 777 $data_script && chmod 777 $tokenizer_name &&bash $data_script ``` We upload the SentEval dependency to datastore. ``` ds.upload( src_dir=LOCAL_SENTEVAL, target_path=os.path.join(EXPERIMENT_NAME, "senteval"), overwrite=False, show_progress=False, ) ``` ### 03 Define experiment configurations We define a set of static configurations, which entails model parameters that will stay consistent across all experiments, in `SentEvalConfig`. We also define the parameter space that will vary across the experiments. We serialize the configuration objects and upload them to our datastore to make them accessible to all experiments. ``` sc = SentEvalConfig( model_params=MODEL_PARAMS, senteval_params=SENTEVAL_PARAMS, ) parameter_groups = list(itertools.product(*list(EXP_PARAMS.values()))) if MAX_NODES is not None: parameter_groups = parameter_groups[:MAX_NODES] os.makedirs(os.path.join(CACHE_DIR, "config"), exist_ok=True) static_config = ( SentEvalConfig(model_params=MODEL_PARAMS, senteval_params=SENTEVAL_PARAMS), os.path.join(CACHE_DIR, "config", "static_config.pkl"), ) exp_configs = [ ( dict(zip(EXP_PARAMS.keys(), p)), os.path.join(CACHE_DIR, "config", "exp_config_{0:03d}.pkl".format(i)), ) for i, p in enumerate(parameter_groups) ] configs = [static_config] + exp_configs for config in configs: pickle.dump(config[0], open(config[1], "wb")) ds.upload_files( [c[1] for c in configs], target_path="{}/config".format(EXPERIMENT_NAME), overwrite=True, show_progress=False, ) ``` ### 04 Scale the compute target Scale the number of nodes in the compute target to the number of experiments we want to run. ``` compute_target = get_or_create_amlcompute( workspace=ws, compute_name=CLUSTER_NAME, vm_size="STANDARD_NC6", min_nodes=0, max_nodes=len(parameter_groups), idle_seconds_before_scaledown=300, verbose=False, ) print( "Scaling compute target {0} to {1} node(s)".format( CLUSTER_NAME, len(parameter_groups) ) ) ``` ### 05 Define the execution script Here we define the script to be executed for each experiment on the remote compute target. We deserialize the configuration objects from the datastore to specify the model parameters for the experiment, and run the SentEval evaluation engine with that model for the STSBenchmark transfer task. As specified in the SentEval repo, we implement the **batcher** function, which transforms a batch of text sentence into sentence embeddings. After running SentEval, we serialize the output. ``` src_dir = os.path.join(CACHE_DIR, EXPERIMENT_NAME) os.makedirs(src_dir, exist_ok=True) if not os.path.exists(os.path.join(src_dir, "utils_nlp")): shutil.copytree( LOCAL_UTILS, os.path.join(src_dir, "utils_nlp"), ignore=shutil.ignore_patterns("__pycache__", "SentEval"), ) %%writefile $src_dir/run.py import pickle import argparse import os from utils_nlp.eval.senteval import SentEvalConfig from utils_nlp.models.bert.sequence_encoding import BERTSentenceEncoder def prepare_output(output_dir, config_file): os.makedirs(output_dir, exist_ok=True) out = os.path.join( output_dir, "results_{}.pkl".format(config_file.split("/")[-1].split(".")[0][-3:]), ) return out def batcher(params, batch): sentences = [" ".join(s).lower() for s in batch] embeddings = params["model"].encode( sentences, batch_size=params["batch_size"], as_numpy=True ) return embeddings if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--data_dir", type=str, dest="data_dir") parser.add_argument( "--static_config", type=str, dest="static_config", help="Filename of serialized static config object", ) parser.add_argument( "--exp_config", type=str, dest="exp_config", help="Filename of serialized experiment config object", ) parser.add_argument( "--output_dir", type=str, dest="output_dir", help="Directory to write serialized results to", ) args = parser.parse_args() # Import senteval sys.path.insert(0, args.data_dir) import senteval # Deserialize configs static_config = pickle.load(open(args.static_config, "rb")) exp_config = pickle.load(open(args.exp_config, "rb")) # Update senteval params for this experiment params = static_config.senteval_params params["model"] = BERTSentenceEncoder(**static_config.model_params) for k, v in exp_config.items(): setattr(params["model"], k, v) params["task_path"] = "{}/data".format(args.data_dir) # Run the senteval engine se = senteval.engine.SE(params, batcher) results = se.eval(params["transfer_tasks"]) # Pickle the output output_file = prepare_output(args.output_dir, args.exp_config) print("Pickling to {}".format(output_file)) pickle.dump(results, open(output_file, "wb")) ``` ### 06 Run the experiments in parallel We iterate through the experiment parameter combinations and submit each job to AmlCompute as a `PyTorch` estimator. Since we explicitly set `node_count=1` and `process_count_per_node=1` in the estimator, the jobs will run in parallel. ``` runs = [] for i in range(len(parameter_groups)): est = PyTorch( source_directory=src_dir, script_params={ "--data_dir": ds.path("{}/senteval".format(EXPERIMENT_NAME)).as_mount(), "--static_config": ds.path( "{0}/{1}/{2}".format( EXPERIMENT_NAME, "config", static_config[1].split("/")[-1] ) ).as_mount(), "--exp_config": ds.path( "{0}/{1}/{2}".format( EXPERIMENT_NAME, "config", exp_configs[i][1].split("/")[-1] ) ), "--output_dir": "./outputs", }, compute_target=compute_target, entry_script="run.py", inputs=[ DataReference( datastore=ds, path_on_datastore="outputs" ).as_upload( path_on_compute=os.path.join("./outputs/results_{0:03d}.pkl".format(i)) ) ], node_count=1, process_count_per_node=1, use_gpu=True, framework_version="1.1", conda_packages=["numpy", "pandas"], pip_packages=[ "scikit-learn==0.20.3", "azureml-sdk==1.0.53", "pytorch-pretrained-bert>=0.6", "cached-property==1.5.1", ], ) run = exp.submit(est) runs.append(run) ``` Each run object is collected in `runs`, so we can monitor any run via a Jupyter widget for debugging. ``` #RunDetails(runs[0]).show() ``` Alternatively, block until the runs are complete. ``` map(lambda r: r.wait_for_completion(), runs) ``` Finally, we pull down the serialized outputs of each experiment from the datastore and inspect the metrics for analysis. ``` ds.download( target_path=CACHE_DIR, prefix="outputs", show_progress=False, ) ``` Here we aggregate the outputs from each SentEval experiment to plot the distribution of Pearson correlations reported across the different encodings. We can see that for the STS Benchmark downstream task, the first layer achieves the highest Pearson correlation on the test dataset. As suggested in [bert-as-a-service](https://github.com/hanxiao/bert-as-service), this can be interpreted as a representation that is closer to the original word embedding. ``` results = [ pickle.load(open(f, "rb")) for f in sorted(glob.glob(os.path.join(CACHE_DIR, "outputs", "*.pkl"))) ] # For testing sb.glue("pearson", results[0]["STSBenchmark"]["pearson"]) sb.glue("mse", results[0]["STSBenchmark"]["mse"]) if len(results) == 24: df = pd.DataFrame( np.reshape( [r["STSBenchmark"]["pearson"] for r in results], (len(EXP_PARAMS["layer_index"]), len(EXP_PARAMS["pooling_strategy"])), ).T, index=[s.value for s in EXP_PARAMS["pooling_strategy"]], columns=EXP_PARAMS["layer_index"], ) fig, ax = plt.subplots(figsize=(10, 2)) sns.heatmap(df, annot=True, fmt=".2g", ax=ax).set_title( "Pearson correlations of BERT sequence encodings on STS Benchmark" ) ```
github_jupyter
# Preprocessing Structured data in TF 2.3 In TF 2.3, Keras adds new preprocessing layers for image, text and strucured data. The following notebook explores those new layers for dealing with Structured data. For a complete example of how to use the new preprocessing layer for Structured data check the Keras example - [link](https://keras.io/examples/structured_data/structured_data_classification_from_scratch/). ``` #hide %%bash pip install -q tf-nightly #hide import pandas as pd import tensorflow as tf print('TF version: ', tf.__version__) ``` ## Structured data Generate some random data for playing with and seeing what is the output of the preprocessing layers. ``` xdf = pd.DataFrame({ 'categorical_string': ['LOW', 'HIGH', 'HIGH', 'MEDIUM'], 'categorical_integer_1': [1, 0, 1, 0], 'categorical_integer_2': [1, 2, 3, 4], 'numerical_1': [2.3, 0.2, 1.9, 5.8], 'numerical_2': [16, 32, 8, 60] }) ydf = pd.DataFrame({'target': [0, 0, 0, 1]}) ds = tf.data.Dataset.from_tensor_slices((dict(xdf), ydf)) for x, y in ds.take(1): print('X:', x) print('y:', y) from tensorflow.keras.layers.experimental.preprocessing import Normalization from tensorflow.keras.layers.experimental.preprocessing import CategoryEncoding from tensorflow.keras.layers.experimental.preprocessing import StringLookup ``` ## Pre-processing Numercial columns Preprocessing helper function to encode **numercial** features, e.g. 0.1, 0.2, etc. ``` def create_numerical_encoder(dataset, name): # Create a Normalization layer for our feature normalizer = Normalization() # Prepare a Dataset that only yields our feature feature_ds = dataset.map(lambda x, y: x[name]) feature_ds = feature_ds.map(lambda x: tf.expand_dims(x, -1)) # Learn the statistics of the data normalizer.adapt(feature_ds) return normalizer # Apply normalization to a numerical feature normalizer = create_numerical_encoder(ds, 'numerical_1') normalizer.apply(xdf[name].values) ``` ## Pre-processing Integer categorical columns Preprocessing helper function to encode **integer categorical** features, e.g. 1, 2, 3 ``` def create_integer_categorical_encoder(dataset, name): # Create a CategoryEncoding for our integer indices encoder = CategoryEncoding(output_mode="binary") # Prepare a Dataset that only yields our feature feature_ds = dataset.map(lambda x, y: x[name]) feature_ds = feature_ds.map(lambda x: tf.expand_dims(x, -1)) # Learn the space of possible indices encoder.adapt(feature_ds) return encoder # Apply one-hot encoding to an integer categorical feature encoder1 = create_integer_categorical_encoder(ds, 'categorical_integer_1') encoder1.apply(xdf['categorical_integer_1'].values) # Apply one-hot encoding to an integer categorical feature encoder2 = create_integer_categorical_encoder(ds, 'categorical_integer_2') encoder2.apply(xdf['categorical_integer_2'].values) ``` ## Pre-processing String categorical columns Preprocessing helper function to encode **string categorical** features, e.g. LOW, HIGH, MEDIUM. This will applying the following to the input feature: 1. Create a token to index lookup table 2. Apply one-hot encoding to the tokens indices ``` def create_string_categorical_encoder(dataset, name): # Create a StringLookup layer which will turn strings into integer indices index = StringLookup() # Prepare a Dataset that only yields our feature feature_ds = dataset.map(lambda x, y: x[name]) feature_ds = feature_ds.map(lambda x: tf.expand_dims(x, -1)) # Learn the set of possible string values and assign them a fixed integer index index.adapt(feature_ds) # Create a CategoryEncoding for our integer indices encoder = CategoryEncoding(output_mode="binary") # Prepare a dataset of indices feature_ds = feature_ds.map(index) # Learn the space of possible indices encoder.adapt(feature_ds) return index, encoder # Apply one-hot encoding to an integer categorical feature indexer, encoder3 = create_string_categorical_encoder(ds, 'categorical_string') # Turn the string input into integer indices indices = indexer.apply(xdf['categorical_string'].values) # Apply one-hot encoding to our indices encoder3.apply(indices) ``` Notice that the string categorical column was hot encoded into 5 tokens whereas in the input dataframe there is only 3 unique values. This is because the indexer adds 2 more tokens. See the vocabulary: ``` indexer.get_vocabulary() ```
github_jupyter
# Week 6 **`Agent::model-free::DQN`** - binary trader - softmax policy ``` # change current working directory %cd .. # suppress warning messages import warnings warnings.filterwarnings('ignore') # trading environment from qtrader.envs import TradingEnv # agent base class from qtrader.agents.base import Agent # NumPy implementation of Softmax from qtrader.utils.numpy import softmax # OpenAI spaces from qtrader.utils.gym import cardinalities # OpenAI environment automated execution from qtrader.utils.gym import run # one-hot encoded actions from qtrader.utils.gym import one_hot # YAML parser import yaml # built-in containers from collections import deque # random number generator import random # scientific programming import numpy as np import pandas as pd # deep-learning framework import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F # OpenAI Gym import gym # # visualization import matplotlib.pyplot as plt # fetch configuration file config = yaml.load(open('config/log/week_6.yaml', 'r')) # configuration summary print(f"start date: {config['start_date']}") print(f"trading frequency: {config['freq']}") print(f"trading universe: {config['tickers']}") print(f"number of episodes: {config['num_episodes']}") print(f"number of neurons in hidden layer: {config['hidden_layer']}") print(f"replay memory capacity: {config['capacity']}") ``` ## `DQNAgent` Generic Deep Q-Network agent. ``` class Brain(nn.Module): """Neural network used in DQN.""" def __init__(self, in_dim, hidden_dim, out_dim, lr): super(Brain, self).__init__() # network layers self.fc1 = nn.Linear(in_dim, hidden_dim) self.fc2 = nn.Linear(hidden_dim, hidden_dim) self.fc3 = nn.Linear(hidden_dim, out_dim) # optimizer self.optimizer = optim.Adam(self.parameters(), lr=lr) # objective function self.criterion = nn.MSELoss() def forward(self, x): """Inference.""" if isinstance(x, np.ndarray): x = torch.from_numpy(x) out = self.fc1(x) out = F.relu(out) out = self.fc2(out) out = F.relu(out) out = self.fc3(out) return out def fit(self, X, y): """Training.""" if isinstance(y, np.ndarray): y = torch.from_numpy(y) self.optimizer.zero_grad() # prediction y_hat = self(X) # loss function loss = self.criterion(y_hat, y) # get gradients loss.backward() # update weights self.optimizer.step() class DQNAgent(Agent): _id = 'DQN' def __init__(self, state_size, action_size, binary=False, **kwargs): self.state_size = state_size self.action_size = action_size # hyperparameters for DQN self.gamma = kwargs.get('gamma', 0.99) self.lr = kwargs.get('lr', 0.001) self.epsilon = kwargs.get('epsilon', 1.0) self.epsilon_decay = kwargs.get('epsilon_decay', 0.999) self.epsilon_min = kwargs.get('epsilon_min', 0.01) self.batch_size = kwargs.get('batch_size', 64) self.train_start = kwargs.get('train_start', 1000) # replay memory self.memory = deque(maxlen=kwargs.get('capacity', 2000)) # main and target models self.model = Brain(self.state_size, kwargs.get('hidden_layer', 24), self.action_size, self.lr) self.model.double() self.target_model = Brain( self.state_size, kwargs.get('hidden_layer', 24), self.action_size, self.lr) self.target_model.double() # init target model self.update_target_model() # binary actions self._binary = binary ########### # Private ########### def update_target_model(self): self.target_model.load_state_dict(self.model.state_dict()) def get_action(self, state): if np.random.rand() <= self.epsilon: return random.randrange(self.action_size) else: q_value = self.model(state).detach().numpy()[0] return np.argmax(q_value) def append_sample(self, state, action, reward, next_state, done): self.memory.append((state, action, reward, next_state, done)) if self.epsilon > self.epsilon_min: self.epsilon = self.epsilon * self.epsilon_decay def train_model(self): if len(self.memory) < self.train_start: return batch_size = min(self.batch_size, len(self.memory)) mini_batch = random.sample(self.memory, batch_size) update_input = np.zeros((batch_size, self.state_size)) update_target = np.zeros((batch_size, self.state_size)) action, reward, done = [], [], [] for i in range(batch_size): update_input[i] = mini_batch[i][0] action.append(mini_batch[i][1]) reward.append(mini_batch[i][2]) update_target[i] = mini_batch[i][3] done.append(mini_batch[i][4]) update_input = torch.from_numpy(update_input) target = self.model(update_input).detach().numpy() update_target = torch.from_numpy(update_target) target_val = self.target_model(update_target).detach().numpy() for i in range(batch_size): if done[i]: target[i][action[i]] = reward[i] else: target[i][action[i]] = reward[i] + \ self.gamma * np.amax(target_val[i]) self.model.fit(update_input, target) ########### # Public ########### def act(self, observation): if isinstance(observation, dict): observation = observation['prices'] if isinstance(observation, pd.Series): observation = observation.values observation = np.reshape(observation, [1, self.state_size]) _action = self.get_action(observation) if self._binary: return one_hot(_action, self.action_size) else: return _action def observe(self, observation, action, reward, done, next_observation): if isinstance(observation, dict): observation = observation['prices'] if isinstance(observation, pd.Series): observation = observation.values if isinstance(next_observation, dict): next_observation = next_observation['prices'] if isinstance(next_observation, pd.Series): next_observation = next_observation.values observation = np.reshape(observation, [1, self.state_size]) next_observation = np.reshape( next_observation, [1, self.state_size]) if self._binary: action = np.argmax(action) self.append_sample(observation, action, reward, next_observation, done) self.train_model() def end_episode(self): self.update_target_model() ``` ### `CartPole-v1` Proof of concept on a standard environment, the **CartPole-v1**. ``` # initialize environment env = gym.make('CartPole-v1') # get environment spaces observation_space, action_space = cardinalities(env) # initialize agent agent = DQNAgent(observation_space, action_space) # execute environment rewards, actions = run(env, agent, config['num_episodes'], True, False) # initialize figure & axes fig, axes = plt.subplots(figsize=(19.2, 4.8)) # plot cumulative reward per-episode for j in range(len(rewards)): axes.bar(j+1, sum(rewards[j]), color='g') # axes settings axes.set(title='CartPole-v1: Score per Episode', ylabel='Score', xlabel='Episode, #'); ``` ### `TradingEnv` Evaluation on the custom trading environment, the **TradingEnv**. ``` # initialize environment env = TradingEnv(universe=config['tickers'], trading_period=config['freq'], start_date=config['start_date'], csv=config['csv_file_prices']) # get environment spaces observation_space, action_space = cardinalities(env) # initialize agent agent = DQNAgent(observation_space, action_space, binary=True, hidden_layer=config['hidden_layer'], capacity=config['capacity']) # execute environment rewards, actions = run(env, agent, config['num_episodes'], True, False) # initialize figure & axes fig, axes = plt.subplots(figsize=(19.2, 4.8)) # plot cumulative reward per-episode for j in range(len(rewards)): axes.bar(j+1, sum(rewards[j]), color='r') # axes settings axes.set(title='TradingEnv: Score per Episode', ylabel='Score', xlabel='Episode, #') # visualize env.render() for asset in env.universe: # access episode actions acts = env.agents[agent.name].actions[asset] fig, ax = plt.subplots(figsize=(19.2, 4.8)) env._prices[asset].plot(ax=ax) sc = ax.scatter(env._prices[asset].index, env._prices[asset].values, c=acts.values, cmap=plt.cm.Reds, marker='|', s=1000, vmin=0, vmax=1) ax.set(ylabel='Prices, $', title='%s: Prices and Agent Actions' % asset) fig.colorbar(sc, ax=ax) ```
github_jupyter
``` import os import clipboard def limit_len(itemname, maxlen=20): if len(itemname)>maxlen: # the item name length less equals 4 characters itemname=itemname[:maxlen] return itemname def fix_line(line): index=line.find(" ") itemname=line[:index] itembody=line[index:] # itemname=itemname.replace("-","") # itemname=itemname.replace(".","") # itemname=itemname.replace("@","") itemname=itemname.translate({ord(i):None for i in '-.@?!~$%'}) # print(itemname+"#") itemname=limit_len(itemname) line=itemname+" "+itembody return line ## append the reverse word def proc_words(line, allcontent): parts=line.strip().split(" ") # print(parts[0]+".") itemcnt=parts[0][1:] for part in parts[1:]: allcontent.append(limit_len(part)+" "+itemcnt) def filter_file(filename, allcontent): f = open(filename, "r") lines=f.readlines() for line in lines: line=line.rstrip() if len(line)==0: continue if line.lstrip().startswith("%"): continue if line.lstrip().startswith("@"): proc_words(line, allcontent) continue allcontent.append(fix_line(line)) def join_all_files(files): allcontent=[] for filename in files: print('♮', filename) filter_file(filename, allcontent) # allcontent=allcontent+f.read() return allcontent import glob def proc_data_files(glob_filter): ## basics: katakana.txt hiragana.txt # files=["ja-Basics.txt", "ja-Food.txt", "ja-Colors.txt", # "ja-Places.txt", "ja-Politics.txt"] files=glob.glob(glob_filter) result=join_all_files(files) return result def proc_japanese(): ## basics: katakana.txt hiragana.txt # files=["ja-Basics.txt", "ja-Food.txt", "ja-Colors.txt", # "ja-Places.txt", "ja-Politics.txt"] result=proc_data_files('data/synonyms/ja-*.txt') print('\n'.join(result)) proc_japanese() def proc_chinese(): result=proc_data_files('data/synonyms/cn-*.txt') print('\n'.join(result)) proc_chinese() engineers = set(['John', 'Jane', 'Jack', 'Jack', 'Janice']) print(engineers) import re action_index={} word_index={} result=proc_data_files('data/synonyms/cn-*.txt') for line in result: # words=line.split(' ') words=re.split("[ \t]+", line) key=words[0] if key in action_index: # Update the set, adding elements from all others. action_index[key].update(words[1:]) else: action_index[key]=set(words[1:]) for val in words[1:]: if val in word_index: word_index[val].add(key) else: word_index[val]=set([key]) print('actions ...') for k,v in action_index.items(): print(k,v) print('words ...') for k,v in word_index.items(): print(k,v) def convert_form_name(form_name): """ Usage: convert_form_name('EditAgreementItem') :param form_name: :return: like 'Edit Agreement Item' """ from sagas.util.str_converters import to_camel_case, to_snake_case, to_words return to_words(to_snake_case(form_name), True).lower() convert_form_name('EditAgreementItem') ```
github_jupyter
### GAN Model v3 ``` import time import matplotlib.pyplot as plt import numpy as np import pandas as pd from utils import split_sequence, get_apple_close_price, plot_series from utils import plot_residual_forecast_error, print_performance_metrics from utils import get_range, difference, inverse_difference from utils import train_test_split, NN_walk_forward_validation apple_close_price = get_apple_close_price() short_series = get_range(apple_close_price, '2003-01-01') # Model parameters look_back = 5 # days window look back n_features = 1 # our only feature will be Close price n_outputs = 1 # days forecast batch_size = 32 # for NN, batch size before updating weights n_epochs = 100 # for NN, number of training epochs ``` We need to first train/test split, then transform and scale our data ``` from scipy.stats import boxcox from scipy.special import inv_boxcox train, test= train_test_split(apple_close_price,'2018-05-31') boxcox_series, lmbda = boxcox(train.values) transformed_train = boxcox_series transformed_test = boxcox(test, lmbda=lmbda) # transformed_train = train.values # transformed_test = test.values from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() scaled_train = scaler.fit_transform(transformed_train.reshape(-1, 1)) scaled_test = scaler.transform(transformed_test.reshape(-1, 1)) X_train, y_train = split_sequence(scaled_train, look_back, n_outputs) y_train = y_train.reshape(-1, n_outputs) # Core layers from keras.layers \ import Activation, Dropout, Flatten, Dense, Input, LeakyReLU, Reshape # Recurrent layers from keras.layers import LSTM # Convolutional layers from keras.layers import Conv1D, MaxPooling1D # Normalization layers from keras.layers import BatchNormalization # Merge layers from keras.layers import concatenate # Layer wrappers from keras.layers import Bidirectional, TimeDistributed # Keras models from keras.models import Model, Sequential # Keras optimizers from keras.optimizers import Adam, RMSprop, SGD import keras.backend as K import warnings warnings.simplefilter('ignore') def build_generator(look_back, n_features=1, n_outputs=1): model = Sequential() model.add(LSTM(50, input_shape=(look_back, n_features))) model.add(LeakyReLU(alpha=0.2)) model.add(Dense(n_outputs)) print('Generator summary:') model.summary() return model def build_discriminator(look_back, n_features=1, n_outputs=1, optimizer=Adam()): features_input = Input((look_back, n_features)) target_input = Input((1, n_outputs)) x = Conv1D(64, kernel_size=3, padding='same')(features_input) x = LeakyReLU(alpha=0.2)(x) x = Flatten()(x) y = Conv1D(64, kernel_size=2, padding='same')(target_input) y = LeakyReLU(alpha=0.2)(y) y = Flatten()(y) xy = concatenate([x, y]) xy = Dense(64)(xy) xy = LeakyReLU(alpha=0.2)(xy) valid = Dense(1, activation='sigmoid')(xy) model = Model([features_input, target_input], valid); model.compile(loss='binary_crossentropy', optimizer=optimizer) print('Discriminator summary:') model.summary() return model def build_adversarial(look_back, n_features=1, n_outputs=1, dis_optimizer=Adam(), adv_optimizer=Adam()): discriminator = build_discriminator(look_back, n_features, n_outputs, optimizer=dis_optimizer) generator = build_generator(look_back, n_features, n_outputs) seq = Sequential() seq.add(generator) gen_input = Input((look_back, n_features)) gen_output = seq(gen_input) gen_output = Reshape((1, n_outputs))(gen_output) valid = discriminator([gen_input, gen_output]) model = Model(gen_input, valid) discriminator.trainable = False # We need to freeze the discriminator's weights model.compile(loss='binary_crossentropy', optimizer=adv_optimizer) print('Adversarial summary:') model.summary() return model, discriminator, generator adversarial, discriminator, generator = build_adversarial(look_back, n_features, n_outputs, dis_optimizer=Adam(lr=0.00001), adv_optimizer=Adam(lr=0.00001)) def get_batch(X, y, batch_idx, batch_size): X_batch = X[batch_idx:batch_idx+batch_size] y_batch = y[batch_idx:batch_idx+batch_size] return X_batch, y_batch def train_GAN(X, y, adversarial, discriminator, generator, n_epochs=100, batch_size=100, look_back=3, n_features=1, n_outputs=1): data_len = len(X) hist_d_loss_real = [] hist_d_loss_fake = [] hist_d_loss = [] hist_g_loss = [] for epoch in range(n_epochs): start = time.time() for batch_idx in range(0, data_len, batch_size): X_batch, y_batch = get_batch(X, y, batch_idx, batch_size) noise = np.random.normal(0, 1, X_batch.shape) y_pred = generator.predict(noise) y_batch = y_batch.reshape((y_batch.shape[0], 1, n_outputs)) y_pred = y_pred.reshape((y_pred.shape[0], 1, n_outputs)) # y_real = np.ones((y_batch.shape[0], 1)) # y_fake = np.zeros((y_pred.shape[0], 1)) y_real = np.full((y_batch.shape[0], 1), np.random.uniform(0.9, 1)) y_fake = np.full((y_pred.shape[0], 1), np.random.uniform(0, 0.1)) # Train discriminator d_loss_real = discriminator.train_on_batch([X_batch, y_batch], y_real) d_loss_fake = discriminator.train_on_batch([X_batch, y_pred], y_fake) d_loss = np.add(d_loss_real, d_loss_fake) / 2 # Train generator g_loss = adversarial.train_on_batch(X_batch, y_real) end = time.time() print ("Epoch %d/%d [D loss: %f] [G loss: %f] | %ds" % (epoch+1, n_epochs, d_loss, g_loss, end-start)) hist_d_loss_real.append(d_loss_real) hist_d_loss_fake.append(d_loss_fake) hist_d_loss.append(d_loss) hist_g_loss.append(g_loss) return hist_d_loss_real, hist_d_loss_fake, hist_d_loss, hist_g_loss hist_d_loss_real, hist_d_loss_fake, hist_d_loss, hist_g_loss = \ train_GAN(X_train, y_train, adversarial, discriminator, generator, look_back=look_back, n_features=n_features, n_outputs=n_outputs, n_epochs=n_epochs, batch_size=batch_size) fig, ax = plt.subplots(figsize=(15, 6)) plt.plot(hist_d_loss_real) plt.plot(hist_d_loss_fake) plt.plot(hist_d_loss) plt.plot(hist_g_loss) ax.set_xlabel('Epochs') ax.legend(['D loss - real', 'D loss - fake', 'D loss', 'G loss']) generator.save_weights('gan_generator-model_weights.h5') discriminator.save_weights('gan_discriminator-model_weights.h5') def retrain_gan(adv, dis, gen, X_batch, y_batch): y_pred = gen.predict(X_batch) y_batch = y_batch.reshape((y_batch.shape[0], 1, n_outputs)) y_pred = y_pred.reshape((y_pred.shape[0], 1, n_outputs)) y_real = np.ones((y_batch.shape[0], 1)) y_fake = np.zeros((y_pred.shape[0], 1)) # Train discriminator dis.train_on_batch([X_batch, y_batch], y_real) dis.train_on_batch([X_batch, y_pred], y_fake) # Train generator adv.train_on_batch(X_batch, y_real) # Walk Forward validation. Recursive Multi-step Forecast strategy # See https://machinelearningmastery.com/multi-step-time-series-forecasting/ def GAN_walk_forward_validation(adv, dis, gen, train, test, size=1, look_back=3, n_features=1, n_outputs=1): past = train.reshape(-1,).copy() future = test.reshape(-1,)[:size] predictions = list() limit_range = len(future) # For re-training generator.compile(optimizer=Adam(lr=0.00001), loss='mean_squared_error') for t in range(0, limit_range, n_outputs): x_input = past[-look_back:] # grab the last look_back days from the past preds_seq = [] for p in range(n_outputs): y_output = gen.predict(x_input.reshape(1, look_back, n_features)) y_output = y_output.reshape(-1,) # save the prediction in the sequence preds_seq.append(y_output) # get rid of the first input (first day look back) x_input = x_input[1:] # appends the new predicted one x_input = np.concatenate((x_input, y_output), axis=0) predicted = np.array(preds_seq).reshape(n_outputs,) predictions.append(predicted) # add the new n_outputs days (real ones) to the past past = np.concatenate((past, future[t:t+n_outputs])) if len(future[t:t+n_outputs]) == n_outputs: X_batch = x_input.reshape(1, look_back, n_features) y_batch = future[t:t+n_outputs].reshape(-1, n_outputs) # Time to re-train the model with the new non-seen days generator.train_on_batch(X_batch, y_batch) return np.array(predictions).reshape(-1,)[:limit_range] size = 252 # approx. one year steps = n_outputs predictions = GAN_walk_forward_validation(adversarial, discriminator, generator, scaled_train, scaled_test, size=size, look_back=look_back, n_outputs=steps) from utils import plot_walk_forward_validation from utils import plot_residual_forecast_error, print_performance_metrics ``` We need to revert the scaling and transformation: ``` descaled_preds = scaler.inverse_transform(predictions.reshape(-1, 1)) descaled_test = scaler.inverse_transform(scaled_test.reshape(-1, 1)) descaled_preds = inv_boxcox(descaled_preds, lmbda) descaled_test = inv_boxcox(descaled_test, lmbda) fig, ax = plt.subplots(figsize=(15, 6)) plt.plot(descaled_test[:size]) plt.plot(descaled_preds) # plt.plot(scaled_test) # plt.plot(predictions[:size]) ax.set_title('Walk forward validation - 5 days prediction') ax.legend(['Expected', 'Predicted']) plot_residual_forecast_error(descaled_preds, descaled_test[:size]) print_performance_metrics(descaled_preds, descaled_test[:size], model_name='GAN', total_days=size, steps=n_outputs) generator.load_weights('gan_generator-model_weights.h5') discriminator.load_weights('gan_discriminator-model_weights.h5') ```
github_jupyter
<!--NAVIGATION--> < [Cruise Trajectory](CruiseTrajectory.ipynb) | [Index](Index.ipynb) | [Match (colocalize) Cruise Track with Datasets](MatchCruise.ipynb) > <a href="https://colab.research.google.com/github/simonscmap/pycmap/blob/master/docs/Match.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a> <a href="https://mybinder.org/v2/gh/simonscmap/pycmap/master?filepath=docs%2FMatch.ipynb"><img align="right" src="https://mybinder.org/badge_logo.svg" alt="Open in Colab" title="Open and Execute in Binder"></a> ## *match(sourceTable, sourceVar, targetTables, targetVars, dt1, dt2, lat1, lat2, lon1, lon2, depth1, depth2, temporalTolerance, latTolerance, lonTolerance, depthTolerance)* Colocalizes the source variable (from source table) with the target variable (from target table). The matching results rely on the tolerance parameters because they set the matching boundaries between the source and target datasets. Notice the source has to be a single non-climatological variable. You may pass empty string ('') as source variable if you only want to get the time and location info from the source table. Please note that the number of matching entries between each target variable and the source variable might vary depending on the temporal and spatial resolutions of the target variable. In principle, if the source dataset is fully covered by the target variable's spatio-temporal range, there should always be matching results if the tolerance parameters are larger than half of their corresponding spatial/temporal resolutions. Please explore the [catalog](Catalog.ipynb) to find appropriate target variables. <br />This method returns a dataframe containing the source variable joined with the target variable(s). ### Note Currently, the 'match' method is not optimized for matching very large subsets of massive datasets such as models and satellites. It would be best to use this method to colocalize in-situ measurements such as station-based or underway cruise datasets (which are typically 'small') with any other datasets (models, satellites, or other observations). <br />Stay tuned! > **Parameters:** >> **sourceTable: string** >> <br />Table name of the source dataset. A full list of table names can be found in [catalog](Catalog.ipynb). >> <br /> >> <br />**sourceVar: string** >> <br />The source variable short name. The target variables are matched (colocalized) with this variable. A full list of variable short names can be found in [catalog](Catalog.ipynb). >> <br /> >> <br />**targetTables: list of string** >> <br />Table names of the target datasets to be matched with the source data. Notice source dataset can be matched with multiple target datasets. A full list of table names can be found in [catalog](Catalog.ipynb). >> <br /> >> <br />**targetVars: list of string** >> <br />Variable short names to be matched with the source variable. A full list of variable short names can be found in [catalog](Catalog.ipynb). >> <br /> >> <br />**dt1: string** >> <br />Start date or datetime. Both source and target datasets are filtered before matching. This parameter sets the lower bound of the temporal cut. >> <br /> >> <br />**dt2: string** >> <br />End date or datetime. Both source and target datasets are filtered before matching. This parameter sets the upper bound of the temporal cut. >> <br /> >> <br />**lat1: float** >> <br />Start latitude [degree N]. Both source and target datasets are filtered before matching. This parameter sets the lower bound of the meridional cut. Note latitude ranges from -90 to 90 degrees. >> <br /> >> <br />**lat2: float** >> <br />End latitude [degree N]. Both source and target datasets are filtered before matching. This parameter sets the upper bound of the meridional cut. Note latitude ranges from -90 to 90 degrees. >> <br /> >> <br />**lon1: float** >> <br />Start longitude [degree E]. Both source and target datasets are filtered before matching. This parameter sets the lower bound of the zonal cut. Note longitude ranges from -180 to 180 degrees. >> <br /> >> <br />**lon2: float** >> <br />End longitude [degree E]. Both source and target datasets are filtered before matching. This parameter sets the upper bound of the zonal cut. Note longitude ranges from -180 to 180 degrees. >> <br /> >> <br />**depth1: float** >> <br />Start depth [m]. Both source and target datasets are filtered before matching. This parameter sets the lower bound of the vertical cut. Note depth is a positive number (depth is 0 at surface and grows towards ocean floor). >> <br /> >> <br />**depth2: float** >> <br />End depth [m]. Both source and target datasets are filtered before matching. This parameter sets the upper bound of the vertical cut. Note depth is a positive number (depth is 0 at surface and grows towards ocean floor). >> <br /> >> <br />**temporalTolerance: list of int** >> <br />Temporal tolerance values between pairs of source and target datasets. The size and order of values in this list should match those of targetTables. If only a single integer value is given, that would be applied to all target datasets. This parameter is in day units except when the target variable represents monthly climatology data in which case it is in month units. Notice fractional values are not supported in the current version. >> <br /> >> <br />**latTolerance: list of float or int** >> <br />Spatial tolerance values in meridional direction [deg] between pairs of source and target datasets. The size and order of values in this list should match those of targetTables. If only a single float value is given, that would be applied to all target datasets. A "safe" value for this parameter can be slightly larger than the half of the traget variable's spatial resolution. >> <br /> >> <br />**lonTolerance: list of float or int** >> <br />Spatial tolerance values in zonal direction [deg] between pairs of source and target datasets. The size and order of values in this list should match those of targetTables. If only a single float value is given, that would be applied to all target datasets. A "safe" value for this parameter can be slightly larger than the half of the traget variable's spatial resolution. >> <br /> >> <br />**depthTolerance: list of float or int** >> <br />Spatial tolerance values in vertical direction [m] between pairs of source and target datasets. The size and order of values in this list should match those of targetTables. If only a single float value is given, that would be applied to all target datasets. >**Returns:** >> Pandas dataframe. ### Example 1 In this example the abundance of a prochlorococcus strain (MIT9313PCR, see lines 5-6) measured by [Chisholm lab](https://chisholmlab.mit.edu/) during the AMT13 cruise (Atlantic Meridional Transect Cruise 13) is colocalized with 3 target variables (lines 7-8):<br /> * 'MIT9312PCR_Chisholm' from the same source dataset * 'phosphate_WOA_clim' from World Ocean Atlas monthly climatology dataset * 'chl' (chlorophyll) from weekly averaged satellite dataset <br />**Tip1:**<br /> The space-time cut parameters (lines 9-16) have been set in such a way to encompass the entire source dataset 'tblAMT13_Chisholm' (see the [dataset page](https://cmap.readthedocs.io/en/latest/catalog/datasets/Chisholm_AMT13.html#chisholm-amt13) for more details). Notice that the last data point at the source dataset has been measured at '2003-10-12 12:44:00'. For simplicity dt2 has been set to '2003-10-13', but you could also use the exact date-time '2003-10-12 12:44:00'. <br />**Tip2:**<br /> AMT13 cruise trajectory is already in Simons CMAP database. Therefore, another way to find reasonable values for the space-time cut parameters (lines 9-16) is to use the outputs of the following command:<br /> `api.cruise_bounds('AMT13')` <br />**Tip3:**<br /> The temporalTolerance parameter is set to [0, 0, 1] (line 17). This means: * &#177;0 day temporal tolerance when matching with 'MIT9312PCR_Chisholm' (exact date-time matching) * &#177;0 month temporal tolerance when matching with 'phosphate_WOA_clim' (this is a monthly climatology dataset) * &#177;4 day temporal tolerance when matching with 'chl' (this is a weekly averaged dataset) <br />**Tip4:**<br /> The latTolerance and lonTolerance parameters are set to [0, 0.5, 0.25] (lines 18-19). This means: * &#177;0 degree spatial tolerances (in meridional and zonal directions) when matching with 'MIT9312PCR_Chisholm' (exact lat/lon matching) * &#177;0.5 degrees spatial tolerances (in meridional and zonal directions) when matching with 'phosphate_WOA_clim' (this dataset has a 1 degree spatial resolution) * &#177;0.25 degrees spatial tolerances (in meridional and zonal directions) when matching with 'chl'. This dataset has 0.25 degree spatial resolution which means one may reduce the spatial tolerance for this target dataset down to 0.25/2 = 0.125 degrees. <br />**Tip5:**<br /> The depthTolerance parameter is set to [0, 5, 0] (line 20). This means: * &#177;0 meters vertical tolerances when matching with 'MIT9312PCR_Chisholm' (exact depth matching) * &#177;5 meters vertical tolerances when matching with 'phosphate_WOA_clim' (note that this dataset, similar to model outputs, does not have uniform depth levels) ``` #!pip install pycmap -q #uncomment to install pycmap, if necessary import pycmap api = pycmap.API(token='<YOUR_API_KEY>') api.match( sourceTable='tblAMT13_Chisholm', sourceVar='MIT9313PCR_Chisholm', targetTables=['tblAMT13_Chisholm', 'tblWOA_Climatology', 'tblChl_REP'], targetVars=['MIT9312PCR_Chisholm', 'phosphate_WOA_clim', 'chl'], dt1='2003-09-14', dt2='2003-10-13', lat1=-48, lat2=48, lon1=-52, lon2=-11, depth1=0, depth2=240, temporalTolerance=[0, 0, 1], latTolerance=[0, 0.5, 0.25], lonTolerance=[0, 0.5, 0.25], depthTolerance=[0, 5, 0] ) ``` <br /><br /> ### Example 2 The source variable in this example is particulate pseudo cobalamin ('Me_PseudoCobalamin_Particulate_pM' see lines 5-6) measured by [Ingalls lab](https://sites.google.com/view/anitra-ingalls) during the KM1315 cruise (see [dataset page](https://cmap.readthedocs.io/en/latest/catalog/datasets/cobalamines.html#cobalamins) for more details). This variable is colocalized with one target variabele, 'picoprokaryote' concentration, from [Darwin model](http://darwinproject.mit.edu/) (lines 7-8). The colocalized data, then is visualized. please review the above Example 1, since the mentioned tips apply to this example too. <br />**Tip1:**<br /> The employed Darwin model outputs in this example is a 3-day averaged dataset, and therefore &#177;2 day temporal tolerance is used (line 17). <br />**Tip2:**<br /> The employed Darwin model outputs in this example has a 0.5 degree spatial resolution in zonal and meridional directions, and so &#177;0.25 degree spatial tolerance is used (line 18-19). <br />**Tip3:**<br /> Darwin model first depth level is at 5 m (not 0), and so &#177;5 meter vertical tolerance should cover all surface measurements (line 20). ``` # !pip install pycmap -q # uncomment to install pycmap, if necessary %matplotlib inline import matplotlib.pyplot as plt import pycmap api = pycmap.API(token='<YOUR_API_KEY>') df = api.match( sourceTable='tblKM1314_Cobalmins', sourceVar='Me_PseudoCobalamin_Particulate_pM', targetTables=['tblDarwin_Phytoplankton'], targetVars=['picoprokaryote'], dt1='2013-08-11', dt2='2013-09-05', lat1=22.5, lat2=50, lon1=-159, lon2=-128, depth1=0, depth2=300, temporalTolerance=[2], latTolerance=[0.25], lonTolerance=[0.25], depthTolerance=[5] ) plt.plot(df['picoprokaryote'], df['Me_PseudoCobalamin_Particulate_pM'], '.') plt.xlabel('picoprokaryote' + api.get_unit('tblDarwin_Phytoplankton', 'picoprokaryote')) plt.ylabel('Me_PseudoCobalamin_Particulate_pM' + api.get_unit('tblKM1314_Cobalmins', 'Me_PseudoCobalamin_Particulate_pM')) plt.show() ``` <img src="figures/sql.png" alt="SQL" align="left" width="40"/> <br/> ### SQL Statement Here is how to achieve the same results using a direct SQL statement. Please refere to [Query](Query.ipynb) for more information. <code> EXEC uspMatch 'sourceTable', 'sourceVariable', 'targetTable', 'targetVariable', 'dt1', 'dt2', 'lat1', 'lat2', 'lon1', 'lon2', 'depth1', 'depth2', 'timeTolerance', 'latTolerance', 'lonTolerance', 'depthTolerance'</code> **Example:**<br/> <code>EXEC uspMatch 'tblKM1314_Cobalmins', 'Me_PseudoCobalamin_Particulate_pM', 'tblDarwin_Phytoplankton', 'picoprokaryote', '2013-08-09 00:00:00', '2013-09-07 00:00:00', '22.25', '50.25', '-159.25', '-127.75', '-5', '305', '2', '0.25', '0.25', '5'</code>
github_jupyter
# Autoencoder Test for Saddle-Free Optimizer > Copyright 2019 Dave Fernandes. All Rights Reserved. > > Licensed under the Apache License, Version 2.0 (the "License"); > you may not use this file except in compliance with the License. > You may obtain a copy of the License at > > http://www.apache.org/licenses/LICENSE-2.0 > > Unless required by applicable law or agreed to in writing, software > distributed under the License is distributed on an "AS IS" BASIS, > WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > See the License for the specific language governing permissions and > limitations under the License. ## Description This example trains an autoencoder on MNIST data using either the ADAM optimizer or the Saddle-Free (SF) method. ``` import numpy as np import tensorflow as tf import matplotlib.pyplot as plt import time from SFOptimizer import SFOptimizer from SFOptimizer import SFDamping from mnist.dataset import train from mnist.dataset import test import os os.environ['KMP_DUPLICATE_LIB_OK']='True' np.set_printoptions(suppress=True) ``` ### Model Create a layer with sigmoid activation. Weights have a sparse random initialization as per [Martens \(2010\)](http://www.cs.toronto.edu/~jmartens/docs/Deep_HessianFree.pdf). ``` var_list = [] def logistic_layer(layer_name, input_layer, hidden_units, n_random): initial_W = np.zeros((input_layer.shape[1], hidden_units)) for i in range(hidden_units): column = np.zeros((input_layer.shape[1], 1)) column[0:n_random,:] += np.random.randn(n_random, 1) np.random.shuffle(column) initial_W[:, i:i+1] = column with tf.name_scope('layer_' + layer_name): W = tf.get_variable('W_' + layer_name, initializer=tf.convert_to_tensor(initial_W, dtype=input_layer.dtype), use_resource=True) b = tf.get_variable('b_' + layer_name, [hidden_units], initializer=tf.zeros_initializer(), dtype=input_layer.dtype, use_resource=True) y = tf.sigmoid(tf.matmul(input_layer, W) + b) var_list.append(W) var_list.append(b) return W, b, y ``` Deep autoencoder network from [Hinton & Salakhutdinov \(2006\)](https://www.cs.toronto.edu/~hinton/science.pdf). This example is used as a standard test in several optimization papers. ``` def AE_model(x): n_inputs = 28*28 n_hidden1 = 1000 n_hidden2 = 500 n_hidden3 = 250 n_hidden4 = 30 with tf.name_scope('dnn'): _, _, y1 = logistic_layer('1', x, n_hidden1, 15) _, _, y2 = logistic_layer('2', y1, n_hidden2, 15) _, _, y3 = logistic_layer('3', y2, n_hidden3, 15) W4, b4, _ = logistic_layer('4', y3, n_hidden4, 15) y4 = tf.matmul(y3, W4) + b4 _, _, y5 = logistic_layer('5', y4, n_hidden3, 15) _, _, y6 = logistic_layer('6', y5, n_hidden2, 15) _, _, y7 = logistic_layer('7', y6, n_hidden1, 15) W8, b8, y_out = logistic_layer('8', y7, n_inputs, 15) y_logits = tf.matmul(y7, W8) + b8 with tf.name_scope('loss'): cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=x, logits=y_logits) loss = tf.reduce_mean(cross_entropy, name='loss') error = tf.reduce_mean(tf.reduce_sum(tf.squared_difference(x, y_out), axis=1)) return loss, error ``` ### Training Loop Saves weights to data directory. ``` def MNIST_AE_test(use_SF, start_from_previous_run): # Loop hyper-parameters if use_SF: n_epochs = 30 batch_size = 2000 n_little_steps = 5 batch_repeats = 2 * (n_little_steps + 1) print_interval = 1 else: n_epochs = 3000 batch_size = 200 batch_repeats = 1 print_interval = 100 # Set up datasets and iterator mnist_dir = os.path.join(os.getcwd(), 'mnist') train_dataset = train(mnist_dir).batch(batch_size, drop_remainder=True) # Replicate each batch batch_repeats times train_dataset = train_dataset.flat_map(lambda x, y: tf.data.Dataset.zip((tf.data.Dataset.from_tensors(x).repeat(batch_repeats), tf.data.Dataset.from_tensors(y).repeat(batch_repeats)))) train_dataset = train_dataset.repeat(1) test_dataset = test(mnist_dir).batch(100000) iter = tf.data.Iterator.from_structure(train_dataset.output_types, train_dataset.output_shapes) train_init_op = iter.make_initializer(train_dataset) test_init_op = iter.make_initializer(test_dataset) x, labels = iter.get_next() # Set up model loss, error = AE_model(x) model_filepath = os.path.join(os.getcwd(), 'data', 'ae_weights') saver = tf.train.Saver(var_list) # Construct optimizer if use_SF: # See SFOptimizer.py for options optimizer = SFOptimizer(var_list, krylov_dimension=64, damping_type=SFDamping.marquardt, dtype=x.dtype) else: optimizer = tf.train.AdamOptimizer(learning_rate=0.001) train_op = optimizer.minimize(loss) print('Initializing...') sess = tf.Session() sess.run(tf.global_variables_initializer()) if start_from_previous_run: saver.restore(sess, model_filepath) if use_SF: print('Constructing graph...') big_train_op = optimizer.minimize(loss) little_train_op = optimizer.fixed_subspace_step() update_op = optimizer.update() reset_op = optimizer.reset_lambda() history = [] t0 = time.perf_counter() print("Training...") for epoch in range(n_epochs): iteration = 0 total_error = 0.0 sess.run(train_init_op) while True: try: if use_SF: # Reset the damping parameter _ = sess.run(reset_op) # Compute Krylov subspace and take one training step initial_loss, initial_lambda, _ = sess.run([loss, optimizer.lambda_damp, big_train_op]) final_loss, error_train, rho, _ = sess.run([loss, error, optimizer.rho, update_op]) if iteration % print_interval == 0: print('-- Epoch:', epoch + 1, ' Batch:', iteration + 1, '--') print(' Loss_i:', initial_loss, 'Loss_f:', final_loss, 'rho', rho, 'lambda:', initial_lambda) # Take up to 5 more steps without recomputing the Krylov subspace for little_step in range(n_little_steps): initial_loss, initial_lambda, _ = sess.run([loss, optimizer.lambda_damp, little_train_op]) final_loss, error_new, rho, _ = sess.run([loss, error, optimizer.rho, update_op]) if error_new < error_train: error_train = error_new if iteration % print_interval == 0: print(' Loss_i:', initial_loss, 'Loss_f:', final_loss, 'rho', rho, 'lambda:', initial_lambda) else: # Take a gradient descent step _, initial_loss, error_train = sess.run([train_op, loss, error]) if iteration % print_interval == 0: print('-- Epoch:', epoch + 1, ' Batch:', iteration + 1, '--') print(' Loss:', initial_loss) history += [error_train] total_error += error_train if iteration % print_interval == 0: print(' Train error:', error_train) iteration += 1 except tf.errors.OutOfRangeError: break error_train = total_error / iteration sess.run(test_init_op) error_test = sess.run(error) t1 = time.perf_counter() dt = t1 - t0 t0 = t1 print('\n*** Epoch:', epoch + 1, 'Train error:', error_train, ' Test error:', error_test, ' Time:', dt, 'sec\n') save_path = saver.save(sess, model_filepath) sess.close() return history, optimizer.get_name() ``` * Train with `use_SF = False` to use the ADAM method, and with `use_SF = True` to use the Saddle-Free method. * Train with `start_from_previous_run = False` to start from random initialization, and with `start_from_previous_run = True` to start from where you previously left off. ``` history, opt_name = MNIST_AE_test(use_SF = True, start_from_previous_run = False) ``` Plot the error versus training step. For reference, the previous best training error obtained by a first order method was 1.0 \([Sutskever, _et al_., 2013](http://www.cs.toronto.edu/~fritz/absps/momentum.pdf)\), and by the SF method was 0.57 \([Dauphin, _et al_., 2014](https://arxiv.org/abs/1406.2572)\). ``` plt.plot(history) plt.ylabel('MSE') plt.yscale('log') plt.xlabel('Steps') #plt.xscale('log') plt.title(opt_name + ' Optimizer') plt.show() ```
github_jupyter
``` # Importing essential libraries import numpy as np import pandas as pd # Loading the dataset df = pd.read_csv('heart.csv') ``` # **Exploring the dataset** ``` # Returns number of rows and columns of the dataset df.shape # Returns an object with all of the column headers df.columns # Returns different datatypes for each columns (float, int, string, bool, etc.) df.dtypes # Returns the first x number of rows when head(x). Without a number it returns 5 df.head() # Returns the last x number of rows when tail(x). Without a number it returns 5 df.tail() # Returns true for a column having null values, else false df.isnull().any() # Returns basic information on all columns df.info() # Returns basic statistics on numeric columns df.describe().T ``` # **Data Visualization** ``` # Importing essential libraries import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns # Plotting histogram for the entire dataset fig = plt.figure(figsize = (15,15)) ax = fig.gca() g = df.hist(ax=ax) # Visualization to check if the dataset is balanced or not g = sns.countplot(x='target', data=df) plt.xlabel('Target') plt.ylabel('Count') ``` # **Feature Engineering** ### Feature Selection ``` # Selecting correlated features using Heatmap # Get correlation of all the features of the dataset corr_matrix = df.corr() top_corr_features = corr_matrix.index # Plotting the heatmap plt.figure(figsize=(20,20)) sns.heatmap(data=df[top_corr_features].corr(), annot=True, cmap='RdYlGn') ``` # **Data Preprocessing** ## Handling categorical features After exploring the dataset, I observed that converting the categorical variables into dummy variables using 'get_dummies()'. Though we don't have any strings in our dataset it is necessary to convert ('sex', 'cp', 'fbs', 'restecg', 'exang', 'slope', 'ca', 'thal') these features. *Example: Consider the 'sex' column, it is a binary feature which has 0's and 1's as its values. Keeping it as it is would lead the algorithm to think 0 is lower value and 1 is a higher value, which should not be the case since the gender cannot be ordinal feature.* ``` dataset = pd.get_dummies(df, columns=['sex', 'cp', 'fbs', 'restecg', 'exang', 'slope', 'ca', 'thal']) ``` ## Feature Scaling ``` dataset.columns from sklearn.preprocessing import StandardScaler standScaler = StandardScaler() columns_to_scale = ['age', 'trestbps', 'chol', 'thalach', 'oldpeak'] dataset[columns_to_scale] = standScaler.fit_transform(dataset[columns_to_scale]) dataset.head() # Splitting the dataset into dependent and independent features X = dataset.drop('target', axis=1) y = dataset['target'] ``` # **Model Building** I will be experimenting with 3 algorithms: 1. KNeighbors Classifier 2. Decision Tree Classifier 3. Random Forest Classifier ## KNeighbors Classifier Model ``` # Importing essential libraries from sklearn.neighbors import KNeighborsClassifier from sklearn.model_selection import cross_val_score # Finding the best accuracy for knn algorithm using cross_val_score knn_scores = [] for i in range(1, 21): knn_classifier = KNeighborsClassifier(n_neighbors=i) cvs_scores = cross_val_score(knn_classifier, X, y, cv=10) knn_scores.append(round(cvs_scores.mean(),3)) # Plotting the results of knn_scores plt.figure(figsize=(20,15)) plt.plot([k for k in range(1, 21)], knn_scores, color = 'red') for i in range(1,21): plt.text(i, knn_scores[i-1], (i, knn_scores[i-1])) plt.xticks([i for i in range(1, 21)]) plt.xlabel('Number of Neighbors (K)') plt.ylabel('Scores') plt.title('K Neighbors Classifier scores for different K values') # Training the knn classifier model with k value as 12 knn_classifier = KNeighborsClassifier(n_neighbors=12) cvs_scores = cross_val_score(knn_classifier, X, y, cv=10) print("KNeighbours Classifier Accuracy with K=12 is: {}%".format(round(cvs_scores.mean(), 4)*100)) ``` ## Decision Tree Classifier ``` # Importing essential libraries from sklearn.tree import DecisionTreeClassifier # Finding the best accuracy for decision tree algorithm using cross_val_score decision_scores = [] for i in range(1, 11): decision_classifier = DecisionTreeClassifier(max_depth=i) cvs_scores = cross_val_score(decision_classifier, X, y, cv=10) decision_scores.append(round(cvs_scores.mean(),3)) # Plotting the results of decision_scores plt.figure(figsize=(20,15)) plt.plot([i for i in range(1, 11)], decision_scores, color = 'red') for i in range(1,11): plt.text(i, decision_scores[i-1], (i, decision_scores[i-1])) plt.xticks([i for i in range(1, 11)]) plt.xlabel('Depth of Decision Tree (N)') plt.ylabel('Scores') plt.title('Decision Tree Classifier scores for different depth values') # Training the decision tree classifier model with max_depth value as 3 decision_classifier = DecisionTreeClassifier(max_depth=3) cvs_scores = cross_val_score(decision_classifier, X, y, cv=10) print("Decision Tree Classifier Accuracy with max_depth=3 is: {}%".format(round(cvs_scores.mean(), 4)*100)) ``` ## Random Forest Classifier ``` # Importing essential libraries from sklearn.ensemble import RandomForestClassifier # Finding the best accuracy for random forest algorithm using cross_val_score forest_scores = [] for i in range(10, 101, 10): forest_classifier = RandomForestClassifier(n_estimators=i) cvs_scores = cross_val_score(forest_classifier, X, y, cv=5) forest_scores.append(round(cvs_scores.mean(),3)) # Plotting the results of forest_scores plt.figure(figsize=(20,15)) plt.plot([n for n in range(10, 101, 10)], forest_scores, color = 'red') for i in range(1,11): plt.text(i*10, forest_scores[i-1], (i*10, forest_scores[i-1])) plt.xticks([i for i in range(10, 101, 10)]) plt.xlabel('Number of Estimators (N)') plt.ylabel('Scores') plt.title('Random Forest Classifier scores for different N values') # Training the random forest classifier model with n value as 90 forest_classifier = RandomForestClassifier(n_estimators=90) cvs_scores = cross_val_score(forest_classifier, X, y, cv=5) print("Random Forest Classifier Accuracy with n_estimators=90 is: {}%".format(round(cvs_scores.mean(), 4)*100)) ```
github_jupyter
# Convolutional Neural Networks: Application Welcome to Course 4's second assignment! In this notebook, you will: - Implement helper functions that you will use when implementing a TensorFlow model - Implement a fully functioning ConvNet using TensorFlow **After this assignment you will be able to:** - Build and train a ConvNet in TensorFlow for a classification problem We assume here that you are already familiar with TensorFlow. If you are not, please refer the *TensorFlow Tutorial* of the third week of Course 2 ("*Improving deep neural networks*"). ### <font color='darkblue'> Updates to Assignment <font> #### If you were working on a previous version * The current notebook filename is version "1a". * You can find your work in the file directory as version "1". * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. #### List of Updates * `initialize_parameters`: added details about tf.get_variable, `eval`. Clarified test case. * Added explanations for the kernel (filter) stride values, max pooling, and flatten functions. * Added details about softmax cross entropy with logits. * Added instructions for creating the Adam Optimizer. * Added explanation of how to evaluate tensors (optimizer and cost). * `forward_propagation`: clarified instructions, use "F" to store "flatten" layer. * Updated print statements and 'expected output' for easier visual comparisons. * Many thanks to Kevin P. Brown (mentor for the deep learning specialization) for his suggestions on the assignments in this course! ## 1.0 - TensorFlow model In the previous assignment, you built helper functions using numpy to understand the mechanics behind convolutional neural networks. Most practical applications of deep learning today are built using programming frameworks, which have many built-in functions you can simply call. As usual, we will start by loading in the packages. ``` import math import numpy as np import h5py import matplotlib.pyplot as plt import scipy from PIL import Image from scipy import ndimage import tensorflow as tf from tensorflow.python.framework import ops from cnn_utils import * %matplotlib inline np.random.seed(1) ``` Run the next cell to load the "SIGNS" dataset you are going to use. ``` # Loading the data (signs) X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() ``` As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5. <img src="images/SIGNS.png" style="width:800px;height:300px;"> The next cell will show you an example of a labelled image in the dataset. Feel free to change the value of `index` below and re-run to see different examples. ``` # Example of a picture index = 6 plt.imshow(X_train_orig[index]) print ("y = " + str(np.squeeze(Y_train_orig[:, index]))) ``` In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it. To get started, let's examine the shapes of your data. ``` X_train = X_train_orig/255. X_test = X_test_orig/255. Y_train = convert_to_one_hot(Y_train_orig, 6).T Y_test = convert_to_one_hot(Y_test_orig, 6).T print ("number of training examples = " + str(X_train.shape[0])) print ("number of test examples = " + str(X_test.shape[0])) print ("X_train shape: " + str(X_train.shape)) print ("Y_train shape: " + str(Y_train.shape)) print ("X_test shape: " + str(X_test.shape)) print ("Y_test shape: " + str(Y_test.shape)) conv_layers = {} ``` ### 1.1 - Create placeholders TensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session. **Exercise**: Implement the function below to create placeholders for the input image X and the output Y. You should not define the number of training examples for the moment. To do so, you could use "None" as the batch size, it will give you the flexibility to choose it later. Hence X should be of dimension **[None, n_H0, n_W0, n_C0]** and Y should be of dimension **[None, n_y]**. [Hint: search for the tf.placeholder documentation"](https://www.tensorflow.org/api_docs/python/tf/placeholder). ``` # GRADED FUNCTION: create_placeholders def create_placeholders(n_H0, n_W0, n_C0, n_y): """ Creates the placeholders for the tensorflow session. Arguments: n_H0 -- scalar, height of an input image n_W0 -- scalar, width of an input image n_C0 -- scalar, number of channels of the input n_y -- scalar, number of classes Returns: X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float" Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float" """ ### START CODE HERE ### (≈2 lines) X = tf.placeholder(tf.float32, name = "X", shape = [None, n_H0, n_W0, n_C0]) Y = tf.placeholder(tf.float32, name = "Y", shape = [None, n_y]) ### END CODE HERE ### return X, Y X, Y = create_placeholders(64, 64, 3, 6) print ("X = " + str(X)) print ("Y = " + str(Y)) ``` **Expected Output** <table> <tr> <td> X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32) </td> </tr> <tr> <td> Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32) </td> </tr> </table> ### 1.2 - Initialize parameters You will initialize weights/filters $W1$ and $W2$ using `tf.contrib.layers.xavier_initializer(seed = 0)`. You don't need to worry about bias variables as you will soon see that TensorFlow functions take care of the bias. Note also that you will only initialize the weights/filters for the conv2d functions. TensorFlow initializes the layers for the fully connected part automatically. We will talk more about that later in this assignment. **Exercise:** Implement initialize_parameters(). The dimensions for each group of filters are provided below. Reminder - to initialize a parameter $W$ of shape [1,2,3,4] in Tensorflow, use: ```python W = tf.get_variable("W", [1,2,3,4], initializer = ...) ``` #### tf.get_variable() [Search for the tf.get_variable documentation](https://www.tensorflow.org/api_docs/python/tf/get_variable). Notice that the documentation says: ``` Gets an existing variable with these parameters or create a new one. ``` So we can use this function to create a tensorflow variable with the specified name, but if the variables already exist, it will get the existing variable with that same name. ``` # GRADED FUNCTION: initialize_parameters def initialize_parameters(): """ Initializes weight parameters to build a neural network with tensorflow. The shapes are: W1 : [4, 4, 3, 8] W2 : [2, 2, 8, 16] Note that we will hard code the shape values in the function to make the grading simpler. Normally, functions should take values as inputs rather than hard coding. Returns: parameters -- a dictionary of tensors containing W1, W2 """ tf.set_random_seed(1) # so that your "random" numbers match ours ### START CODE HERE ### (approx. 2 lines of code) W1 = tf.get_variable("W1", [4, 4, 3, 8], initializer=tf.contrib.layers.xavier_initializer(seed=0)) W2 = tf.get_variable("W2", [2, 2, 8, 16], initializer=tf.contrib.layers.xavier_initializer(seed=0)) ### END CODE HERE ### parameters = {"W1": W1, "W2": W2} return parameters tf.reset_default_graph() with tf.Session() as sess_test: parameters = initialize_parameters() init = tf.global_variables_initializer() sess_test.run(init) print("W1[1,1,1] = \n" + str(parameters["W1"].eval()[1,1,1])) print("W1.shape: " + str(parameters["W1"].shape)) print("\n") print("W2[1,1,1] = \n" + str(parameters["W2"].eval()[1,1,1])) print("W2.shape: " + str(parameters["W2"].shape)) ``` ** Expected Output:** ``` W1[1,1,1] = [ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394 -0.06847463 0.05245192] W1.shape: (4, 4, 3, 8) W2[1,1,1] = [-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058 -0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228 -0.22779644 -0.1601823 -0.16117483 -0.10286498] W2.shape: (2, 2, 8, 16) ``` ### 1.3 - Forward propagation In TensorFlow, there are built-in functions that implement the convolution steps for you. - **tf.nn.conv2d(X,W, strides = [1,s,s,1], padding = 'SAME'):** given an input $X$ and a group of filters $W$, this function convolves $W$'s filters on X. The third parameter ([1,s,s,1]) represents the strides for each dimension of the input (m, n_H_prev, n_W_prev, n_C_prev). Normally, you'll choose a stride of 1 for the number of examples (the first value) and for the channels (the fourth value), which is why we wrote the value as `[1,s,s,1]`. You can read the full documentation on [conv2d](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d). - **tf.nn.max_pool(A, ksize = [1,f,f,1], strides = [1,s,s,1], padding = 'SAME'):** given an input A, this function uses a window of size (f, f) and strides of size (s, s) to carry out max pooling over each window. For max pooling, we usually operate on a single example at a time and a single channel at a time. So the first and fourth value in `[1,f,f,1]` are both 1. You can read the full documentation on [max_pool](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool). - **tf.nn.relu(Z):** computes the elementwise ReLU of Z (which can be any shape). You can read the full documentation on [relu](https://www.tensorflow.org/api_docs/python/tf/nn/relu). - **tf.contrib.layers.flatten(P)**: given a tensor "P", this function takes each training (or test) example in the batch and flattens it into a 1D vector. * If a tensor P has the shape (m,h,w,c), where m is the number of examples (the batch size), it returns a flattened tensor with shape (batch_size, k), where $k=h \times w \times c$. "k" equals the product of all the dimension sizes other than the first dimension. * For example, given a tensor with dimensions [100,2,3,4], it flattens the tensor to be of shape [100, 24], where 24 = 2 * 3 * 4. You can read the full documentation on [flatten](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/flatten). - **tf.contrib.layers.fully_connected(F, num_outputs):** given the flattened input F, it returns the output computed using a fully connected layer. You can read the full documentation on [full_connected](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/fully_connected). In the last function above (`tf.contrib.layers.fully_connected`), the fully connected layer automatically initializes weights in the graph and keeps on training them as you train the model. Hence, you did not need to initialize those weights when initializing the parameters. #### Window, kernel, filter The words "window", "kernel", and "filter" are used to refer to the same thing. This is why the parameter `ksize` refers to "kernel size", and we use `(f,f)` to refer to the filter size. Both "kernel" and "filter" refer to the "window." **Exercise** Implement the `forward_propagation` function below to build the following model: `CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED`. You should use the functions above. In detail, we will use the following parameters for all the steps: - Conv2D: stride 1, padding is "SAME" - ReLU - Max pool: Use an 8 by 8 filter size and an 8 by 8 stride, padding is "SAME" - Conv2D: stride 1, padding is "SAME" - ReLU - Max pool: Use a 4 by 4 filter size and a 4 by 4 stride, padding is "SAME" - Flatten the previous output. - FULLYCONNECTED (FC) layer: Apply a fully connected layer without an non-linear activation function. Do not call the softmax here. This will result in 6 neurons in the output layer, which then get passed later to a softmax. In TensorFlow, the softmax and cost function are lumped together into a single function, which you'll call in a different function when computing the cost. ``` # GRADED FUNCTION: forward_propagation def forward_propagation(X, parameters): """ Implements the forward propagation for the model: CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED Note that for simplicity and grading purposes, we'll hard-code some values such as the stride and kernel (filter) sizes. Normally, functions should take these values as function parameters. Arguments: X -- input dataset placeholder, of shape (input size, number of examples) parameters -- python dictionary containing your parameters "W1", "W2" the shapes are given in initialize_parameters Returns: Z3 -- the output of the last LINEAR unit """ # Retrieve the parameters from the dictionary "parameters" W1 = parameters['W1'] W2 = parameters['W2'] ### START CODE HERE ### # CONV2D: stride of 1, padding 'SAME' Z1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding="SAME") # RELU A1 = tf.nn.relu(Z1) # MAXPOOL: window 8x8, stride 8, padding 'SAME' P1 = tf.nn.max_pool(A1, ksize=[1, 8, 8, 1], strides=[1, 8, 8, 1], padding="SAME") # CONV2D: filters W2, stride 1, padding 'SAME' Z2 = tf.nn.conv2d(P1, W2, strides=[1, 1, 1, 1], padding="SAME") # RELU A2 = tf.nn.relu(Z2) # MAXPOOL: window 4x4, stride 4, padding 'SAME' P2 = tf.nn.max_pool(A2, ksize=[1, 4, 4, 1] , strides=[1, 4, 4, 1], padding="SAME") # FLATTEN F = tf.contrib.layers.flatten(P2) # FULLY-CONNECTED without non-linear activation function (not not call softmax). # 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None" Z3 = tf.contrib.layers.fully_connected(F, 6, activation_fn=None) ### END CODE HERE ### return Z3 tf.reset_default_graph() with tf.Session() as sess: np.random.seed(1) X, Y = create_placeholders(64, 64, 3, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) init = tf.global_variables_initializer() sess.run(init) a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)}) print("Z3 = \n" + str(a)) ``` **Expected Output**: ``` Z3 = [[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064] [-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]] ``` ### 1.4 - Compute cost Implement the compute cost function below. Remember that the cost function helps the neural network see how much the model's predictions differ from the correct labels. By adjusting the weights of the network to reduce the cost, the neural network can improve its predictions. You might find these two functions helpful: - **tf.nn.softmax_cross_entropy_with_logits(logits = Z, labels = Y):** computes the softmax entropy loss. This function both computes the softmax activation function as well as the resulting loss. You can check the full documentation [softmax_cross_entropy_with_logits](https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits). - **tf.reduce_mean:** computes the mean of elements across dimensions of a tensor. Use this to calculate the sum of the losses over all the examples to get the overall cost. You can check the full documentation [reduce_mean](https://www.tensorflow.org/api_docs/python/tf/reduce_mean). #### Details on softmax_cross_entropy_with_logits (optional reading) * Softmax is used to format outputs so that they can be used for classification. It assigns a value between 0 and 1 for each category, where the sum of all prediction values (across all possible categories) equals 1. * Cross Entropy is compares the model's predicted classifications with the actual labels and results in a numerical value representing the "loss" of the model's predictions. * "Logits" are the result of multiplying the weights and adding the biases. Logits are passed through an activation function (such as a relu), and the result is called the "activation." * The function is named `softmax_cross_entropy_with_logits` takes logits as input (and not activations); then uses the model to predict using softmax, and then compares the predictions with the true labels using cross entropy. These are done with a single function to optimize the calculations. ** Exercise**: Compute the cost below using the function above. ``` # GRADED FUNCTION: compute_cost def compute_cost(Z3, Y): """ Computes the cost Arguments: Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (number of examples, 6) Y -- "true" labels vector placeholder, same shape as Z3 Returns: cost - Tensor of the cost function """ ### START CODE HERE ### (1 line of code) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Z3, labels=Y)) ### END CODE HERE ### return cost tf.reset_default_graph() with tf.Session() as sess: np.random.seed(1) X, Y = create_placeholders(64, 64, 3, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) cost = compute_cost(Z3, Y) init = tf.global_variables_initializer() sess.run(init) a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)}) print("cost = " + str(a)) ``` **Expected Output**: ``` cost = 2.91034 ``` ## 1.5 Model Finally you will merge the helper functions you implemented above to build a model. You will train it on the SIGNS dataset. **Exercise**: Complete the function below. The model below should: - create placeholders - initialize parameters - forward propagate - compute the cost - create an optimizer Finally you will create a session and run a for loop for num_epochs, get the mini-batches, and then for each mini-batch you will optimize the function. [Hint for initializing the variables](https://www.tensorflow.org/api_docs/python/tf/global_variables_initializer) #### Adam Optimizer You can use `tf.train.AdamOptimizer(learning_rate = ...)` to create the optimizer. The optimizer has a `minimize(loss=...)` function that you'll call to set the cost function that the optimizer will minimize. For details, check out the documentation for [Adam Optimizer](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer) #### Random mini batches If you took course 2 of the deep learning specialization, you implemented `random_mini_batches()` in the "Optimization" programming assignment. This function returns a list of mini-batches. It is already implemented in the `cnn_utils.py` file and imported here, so you can call it like this: ```Python minibatches = random_mini_batches(X, Y, mini_batch_size = 64, seed = 0) ``` (You will want to choose the correct variable names when you use it in your code). #### Evaluating the optimizer and cost Within a loop, for each mini-batch, you'll use the `tf.Session` object (named `sess`) to feed a mini-batch of inputs and labels into the neural network and evaluate the tensors for the optimizer as well as the cost. Remember that we built a graph data structure and need to feed it inputs and labels and use `sess.run()` in order to get values for the optimizer and cost. You'll use this kind of syntax: ``` output_for_var1, output_for_var2 = sess.run( fetches=[var1, var2], feed_dict={var_inputs: the_batch_of_inputs, var_labels: the_batch_of_labels} ) ``` * Notice that `sess.run` takes its first argument `fetches` as a list of objects that you want it to evaluate (in this case, we want to evaluate the optimizer and the cost). * It also takes a dictionary for the `feed_dict` parameter. * The keys are the `tf.placeholder` variables that we created in the `create_placeholders` function above. * The values are the variables holding the actual numpy arrays for each mini-batch. * The sess.run outputs a tuple of the evaluated tensors, in the same order as the list given to `fetches`. For more information on how to use sess.run, see the documentation [tf.Sesssion#run](https://www.tensorflow.org/api_docs/python/tf/Session#run) documentation. ``` # GRADED FUNCTION: model def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009, num_epochs = 100, minibatch_size = 64, print_cost = True): """ Implements a three-layer ConvNet in Tensorflow: CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED Arguments: X_train -- training set, of shape (None, 64, 64, 3) Y_train -- test set, of shape (None, n_y = 6) X_test -- training set, of shape (None, 64, 64, 3) Y_test -- test set, of shape (None, n_y = 6) learning_rate -- learning rate of the optimization num_epochs -- number of epochs of the optimization loop minibatch_size -- size of a minibatch print_cost -- True to print the cost every 100 epochs Returns: train_accuracy -- real number, accuracy on the train set (X_train) test_accuracy -- real number, testing accuracy on the test set (X_test) parameters -- parameters learnt by the model. They can then be used to predict. """ ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables tf.set_random_seed(1) # to keep results consistent (tensorflow seed) seed = 3 # to keep results consistent (numpy seed) (m, n_H0, n_W0, n_C0) = X_train.shape n_y = Y_train.shape[1] costs = [] # To keep track of the cost # Create Placeholders of the correct shape ### START CODE HERE ### (1 line) X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y) ### END CODE HERE ### # Initialize parameters ### START CODE HERE ### (1 line) parameters = initialize_parameters() ### END CODE HERE ### # Forward propagation: Build the forward propagation in the tensorflow graph ### START CODE HERE ### (1 line) Z3 = forward_propagation(X, parameters) ### END CODE HERE ### # Cost function: Add cost function to tensorflow graph ### START CODE HERE ### (1 line) cost = compute_cost(Z3, Y) ### END CODE HERE ### # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost. ### START CODE HERE ### (1 line) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) ### END CODE HERE ### # Initialize all the variables globally init = tf.global_variables_initializer() # Start the session to compute the tensorflow graph with tf.Session() as sess: # Run the initialization sess.run(init) # Do the training loop for epoch in range(num_epochs): minibatch_cost = 0. num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set seed = seed + 1 minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed) for minibatch in minibatches: # Select a minibatch (minibatch_X, minibatch_Y) = minibatch """ # IMPORTANT: The line that runs the graph on a minibatch. # Run the session to execute the optimizer and the cost. # The feedict should contain a minibatch for (X,Y). """ ### START CODE HERE ### (1 line) _ , temp_cost = sess.run(fetches=[optimizer, cost], feed_dict = {X: minibatch_X, Y: minibatch_Y}) ### END CODE HERE ### minibatch_cost += temp_cost / num_minibatches # Print the cost every epoch if print_cost == True and epoch % 5 == 0: print ("Cost after epoch %i: %f" % (epoch, minibatch_cost)) if print_cost == True and epoch % 1 == 0: costs.append(minibatch_cost) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('iterations (per tens)') plt.title("Learning rate =" + str(learning_rate)) plt.show() # Calculate the correct predictions predict_op = tf.argmax(Z3, 1) correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1)) # Calculate accuracy on the test set accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print(accuracy) train_accuracy = accuracy.eval({X: X_train, Y: Y_train}) test_accuracy = accuracy.eval({X: X_test, Y: Y_test}) print("Train Accuracy:", train_accuracy) print("Test Accuracy:", test_accuracy) return train_accuracy, test_accuracy, parameters ``` Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code! ``` _, _, parameters = model(X_train, Y_train, X_test, Y_test) ``` **Expected output**: although it may not match perfectly, your expected output should be close to ours and your cost value should decrease. <table> <tr> <td> **Cost after epoch 0 =** </td> <td> 1.917929 </td> </tr> <tr> <td> **Cost after epoch 5 =** </td> <td> 1.506757 </td> </tr> <tr> <td> **Train Accuracy =** </td> <td> 0.940741 </td> </tr> <tr> <td> **Test Accuracy =** </td> <td> 0.783333 </td> </tr> </table> Congratulations! You have finished the assignment and built a model that recognizes SIGN language with almost 80% accuracy on the test set. If you wish, feel free to play around with this dataset further. You can actually improve its accuracy by spending more time tuning the hyperparameters, or using regularization (as this model clearly has a high variance). Once again, here's a thumbs up for your work! ``` fname = "images/thumbs_up.jpg" image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(image, size=(64,64)) plt.imshow(my_image) ```
github_jupyter
``` import os import json import argparse import pprint import datetime import numpy as np import matplotlib.pyplot as plt import torch from torch.utils import data from bnaf import * from tqdm import trange from data.generate2d import sample2d, energy2d # standard imports import torch import torch.nn as nn from sklearn.datasets import make_moons # from generate2d import sample2d, energy2d # FrEIA imports import FrEIA.framework as Ff import FrEIA.modules as Fm from nflows.distributions import normal device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") BATCHSIZE = 1000 N_DIM = 2 # we define a subnet for use inside an affine coupling block # for more detailed information see the full tutorial def subnet_fc(dims_in, dims_out): return nn.Sequential(nn.Linear(dims_in, 512), nn.ReLU(), nn.Linear(512, dims_out)) # a simple chain of operations is collected by ReversibleSequential inn = Ff.SequenceINN(N_DIM) for k in range(8): inn.append(Fm.AllInOneBlock, subnet_constructor=subnet_fc, permute_soft=True) inn.to(device) optimizer = torch.optim.Adam(inn.parameters(), lr=0.001) # a very basic training loop for i in range(2000): optimizer.zero_grad() # sample data from the moons distribution data, label = make_moons(n_samples=BATCHSIZE, noise=0.05) x = torch.Tensor(data).to(device) # pass to INN and get transformed variable z and log Jacobian determinant z, log_jac_det = inn(x) # print(z.size()) # plt.figure() # plt.plot(z.detach().numpy()[:,0], z.detach().numpy()[:,1],'r.') # plt.plot(x.detach().numpy()[:,0], x.detach().numpy()[:,1],'b.') # calculate the negative log-likelihood of the model with a standard normal prior # loss = 0.5*torch.sum(z**2, 1) - log_jac_det # tt1 = 0.5*torch.sum(z**2, 1) # tt2 = torch.distributions.Normal(torch.zeros_like(z), torch.ones_like(z)).log_prob(z).sum(-1) shape = z.shape[1:] log_z = normal.StandardNormal(shape=shape).log_prob(z) # print(tt1.mean()) # print(tt2.mean()) # print(tt3.mean()) loss = log_z + log_jac_det loss = -loss.mean() / N_DIM # backpropagate and update the weights loss.backward() optimizer.step() if i % 100==0: print(i,loss) plt.figure() plt.plot(z.detach().numpy()[:,0], z.detach().numpy()[:,1],'r.') plt.plot(x.detach().numpy()[:,0], x.detach().numpy()[:,1],'b.') data, label = make_moons(n_samples=1000, noise=0.05) plt.figure() plt.plot(data[:,0], data[:,1],'.') # sample from the INN by sampling from a standard normal and transforming # it in the reverse direction nsam = 1000 zzz = torch.randn(nsam, N_DIM) samples0, _ = inn(zzz, rev=True) samples = samples0.detach().numpy() plt.figure() plt.plot(samples[:,0], samples[:,1],'.') np.log(3) np.log(5) ```
github_jupyter
# 05d - Vertex AI > Training > Training Pipelines - With Python Source Distribution **Training Jobs Overview:** Where a model gets trained is where it consumes computing resources. With Vertex AI, you have choices for configuring the computing resources available at training. This notebook is an example of an execution environment. When it was set up there were choices for machine type and accelerators (GPUs). In the 04 series of demonstrations, the model training happened directly in the notebook. The models were then imported to Vertex AI and deployed to an endpoint for online predictions. In this 05 series of demonstrations, the same model is trained using managed computing resources in Vertex AI as custom training jobs. These jobs will be demonstrated as: - Custom Job from a python file and python source distribution - Training Pipeline that trains and saves models from a python file and python source distribution - Hyperparameter Tuning Jobs from a python source distribution **This Notebook: An extension of 05b** This notebook trains the same Tensorflow Keras model from 04a by first modifying and saving the training code to a python script. Then a Python source distribution is built containing the script. While this example fits nicely in a single script, larger examples will benefit from the flexibility offered by source distributions and this job gives an example of making the shift. The source distribution is then used as an input for a Vertex AI Training Job. The client used here is `aiplatform.CustomPythonPackageTrainingJob(python_package_gcs_uri=)`. The functional difference from the `aiplatform.CustomJob` version in 05b is that this method automatically uploads the final saved model to Vertex AI > Models. Running the job this way first triggers a job in Vertex AI > Training > Training Pipeline. This Training Pipeline triggers a Custom Job in Vertex AI > Training > Custom Jobs. If The Custom Job completes successfully then the final saved model is registered in Vertex AI > Models. The training can be reviewed with Vertex AI's managed Tensorboard under Experiments > Experiments, or by clicking on the `05d...` custom job under Training > Custom Jobs and then clicking the 'Open Tensorboard' link. **Prerequisites:** - 01 - BigQuery - Table Data Source - 05 - Vertex AI > Experiments - Managed Tensorboard - Understanding: - 04a - Vertex AI > Notebooks - Models Built in Notebooks with Tensorflow - Contains a more granular review of the Tensorflow model training **Overview:** - Setup - Connect to Tensorboard instance from 05 - Create a `train.py` Python script that recreates the local training in 04a - Build a Python source distribution that contains the `train.py` script - Use Python Client google.cloud.aiplatform for Vertex AI - Custom training job with aiplatform.CustomPythonPackageTrainingJob - Run job with .run - Create Endpoint with Vertex AI with aiplatform.Endpoint.create - Deploy model to endpoint with .deploy - Online Prediction demonstrated using Vertex AI Endpoint with deployed model - Get records to score from BigQuery table - Prediction with aiplatform.Endpoint.predict - Prediction with REST - Prediction with gcloud (CLI) **Resources:** - [BigQuery Tensorflow Reader](https://www.tensorflow.org/io/tutorials/bigquery) - [Keras Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) - [Keras API](https://www.tensorflow.org/api_docs/python/tf/keras) - [Python Client For Google BigQuery](https://googleapis.dev/python/bigquery/latest/index.html) - [Tensorflow Python Client](https://www.tensorflow.org/api_docs/python/tf) - [Tensorflow I/O Python Client](https://www.tensorflow.org/io/api_docs/python/tfio/bigquery) - [Python Client for Vertex AI](https://googleapis.dev/python/aiplatform/latest/aiplatform.html) - [Create a Python source distribution](https://cloud.google.com/vertex-ai/docs/training/create-python-pre-built-container) for a Vertex AI custom training job - Containers for training (Pre-Built) - [Overview](https://cloud.google.com/vertex-ai/docs/training/create-python-pre-built-container) - [List](https://cloud.google.com/vertex-ai/docs/training/pre-built-containers) **Related Training:** - todo --- ## Vertex AI - Conceptual Flow <img src="architectures/slides/05d_arch.png"> --- ## Vertex AI - Workflow <img src="architectures/slides/05d_console.png"> --- ## Setup inputs: ``` REGION = 'us-central1' PROJECT_ID='statmike-mlops' DATANAME = 'fraud' NOTEBOOK = '05d' # Resources TRAIN_IMAGE = 'us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-7:latest' DEPLOY_IMAGE ='us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-7:latest' TRAIN_COMPUTE = 'n1-standard-4' DEPLOY_COMPUTE = 'n1-standard-4' # Model Training VAR_TARGET = 'Class' VAR_OMIT = 'transaction_id' # add more variables to the string with space delimiters EPOCHS = 10 BATCH_SIZE = 100 ``` packages: ``` from google.cloud import aiplatform from datetime import datetime from google.cloud import bigquery from google.protobuf import json_format from google.protobuf.struct_pb2 import Value import json import numpy as np ``` clients: ``` aiplatform.init(project=PROJECT_ID, location=REGION) bigquery = bigquery.Client() ``` parameters: ``` TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") BUCKET = PROJECT_ID URI = f"gs://{BUCKET}/{DATANAME}/models/{NOTEBOOK}" DIR = f"temp/{NOTEBOOK}" # Give service account roles/storage.objectAdmin permissions # Console > IMA > Select Account <projectnumber>-compute@developer.gserviceaccount.com > edit - give role SERVICE_ACCOUNT = !gcloud config list --format='value(core.account)' SERVICE_ACCOUNT = SERVICE_ACCOUNT[0] SERVICE_ACCOUNT ``` environment: ``` !rm -rf {DIR} !mkdir -p {DIR} ``` --- ## Get Tensorboard Instance Name The training job will show up as an experiment for the Tensorboard instance and have the same name as the training job ID. ``` tb = aiplatform.Tensorboard.list(filter=f'display_name={DATANAME}') tb[0].resource_name ``` --- ## Training ### Assemble Python File for Training Create the main python trainer file as `/train.py`: ``` !mkdir -p {DIR}/source/trainer %%writefile {DIR}/source/trainer/train.py # package import from tensorflow.python.framework import dtypes from tensorflow_io.bigquery import BigQueryClient import tensorflow as tf from google.cloud import bigquery import argparse import os import sys # import argument to local variables parser = argparse.ArgumentParser() # the passed param, dest: a name for the param, default: if absent fetch this param from the OS, type: type to convert to, help: description of argument parser.add_argument('--epochs', dest = 'epochs', default = 10, type = int, help = 'Number of Epochs') parser.add_argument('--batch_size', dest = 'batch_size', default = 32, type = int, help = 'Batch Size') parser.add_argument('--var_target', dest = 'var_target', type=str) parser.add_argument('--var_omit', dest = 'var_omit', type=str, nargs='*') parser.add_argument('--project_id', dest = 'project_id', type=str) parser.add_argument('--dataname', dest = 'dataname', type=str) parser.add_argument('--region', dest = 'region', type=str) parser.add_argument('--notebook', dest = 'notebook', type=str) args = parser.parse_args() # built in parameters for data source: PROJECT_ID = args.project_id DATANAME = args.dataname REGION = args.region NOTEBOOK = args.notebook # clients bigquery = bigquery.Client(project = PROJECT_ID) # get schema from bigquery source query = f"SELECT * FROM {DATANAME}.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '{DATANAME}_prepped'" schema = bigquery.query(query).to_dataframe() # get number of classes from bigquery source nclasses = bigquery.query(query = f'SELECT DISTINCT {args.var_target} FROM {DATANAME}.{DATANAME}_prepped WHERE {args.var_target} is not null').to_dataframe() nclasses = nclasses.shape[0] # prepare inputs for tensorflow training OMIT = args.var_omit + ['splits'] selected_fields = schema[~schema.column_name.isin(OMIT)].column_name.tolist() feature_columns = [] feature_layer_inputs = {} for header in selected_fields: if header != args.var_target: feature_columns.append(tf.feature_column.numeric_column(header)) feature_layer_inputs[header] = tf.keras.Input(shape=(1,),name=header) # all the columns in this data source are either float64 or int64 output_types = schema[~schema.column_name.isin(OMIT)].data_type.tolist() output_types = [dtypes.float64 if x=='FLOAT64' else dtypes.int64 for x in output_types] # remap input data to Tensorflow inputs of features and target def transTable(row_dict): target=row_dict.pop(args.var_target) target = tf.one_hot(tf.cast(target,tf.int64), nclasses) target = tf.cast(target, tf.float32) return(row_dict, target) # function to setup a bigquery reader with Tensorflow I/O def bq_reader(split): reader = BigQueryClient() training = reader.read_session( parent = f"projects/{PROJECT_ID}", project_id = PROJECT_ID, table_id = f"{DATANAME}_prepped", dataset_id = DATANAME, selected_fields = selected_fields, output_types = output_types, row_restriction = f"splits='{split}'", requested_streams = 3 ) return training train = bq_reader('TRAIN').parallel_read_rows().map(transTable).shuffle(args.batch_size*3).batch(args.batch_size) validate = bq_reader('VALIDATE').parallel_read_rows().map(transTable).batch(args.batch_size) test = bq_reader('TEST').parallel_read_rows().map(transTable).batch(args.batch_size) # define model and compile feature_layer = tf.keras.layers.DenseFeatures(feature_columns) feature_layer_outputs = feature_layer(feature_layer_inputs) layers = tf.keras.layers.BatchNormalization()(feature_layer_outputs) layers = tf.keras.layers.Dense(nclasses, activation = tf.nn.softmax)(layers) model = tf.keras.Model( inputs = [v for v in feature_layer_inputs.values()], outputs = layers ) opt = tf.keras.optimizers.SGD() #SGD or Adam loss = tf.keras.losses.CategoricalCrossentropy() model.compile( optimizer = opt, loss = loss, metrics = ['accuracy', tf.keras.metrics.AUC(curve='PR')] ) # setup tensorboard logs and train tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=os.environ['AIP_TENSORBOARD_LOG_DIR'], histogram_freq=1) history = model.fit(train, epochs = args.epochs, callbacks = [tensorboard_callback], validation_data = validate) # output the model save files model.save(os.getenv("AIP_MODEL_DIR")) ``` ### Assemble Python Source Distribution create `setup.py` file: ``` %%writefile {DIR}/source/setup.py from setuptools import setup from setuptools import find_packages REQUIRED_PACKAGES = ['tensorflow_io'] setup( name = 'trainer', version = '0.1', install_requires = REQUIRED_PACKAGES, packages = find_packages(), include_package_data = True, description='Training Package' ) ``` add `__init__.py` file to the trainer modules folder: ``` !touch {DIR}/source/trainer/__init__.py ``` Create the source distribution and copy it to the projects storage bucket: - change to the local direcory with the source folder - remove any previous distributions - tar and gzip the source folder - copy the distribution to the project folder on GCS - change back to the local project directory ``` %cd {DIR} !rm -f source.tar source.tar.gz !tar cvf source.tar source !gzip source.tar !gsutil cp source.tar.gz {URI}/{TIMESTAMP}/source.tar.gz temp = '../'*(DIR.count('/')+1) %cd {temp} ``` ### Setup Training Job ``` CMDARGS = [ "--epochs=" + str(EPOCHS), "--batch_size=" + str(BATCH_SIZE), "--var_target=" + VAR_TARGET, "--var_omit=" + VAR_OMIT, "--project_id=" + PROJECT_ID, "--dataname=" + DATANAME, "--region=" + REGION, "--notebook=" + NOTEBOOK ] customJob = aiplatform.CustomPythonPackageTrainingJob( display_name = f'{NOTEBOOK}_{DATANAME}_{TIMESTAMP}', python_package_gcs_uri = f"{URI}/{TIMESTAMP}/source.tar.gz", python_module_name = "trainer.train", container_uri = TRAIN_IMAGE, model_serving_container_image_uri = DEPLOY_IMAGE, staging_bucket = f"{URI}/{TIMESTAMP}", labels = {'notebook':f'{NOTEBOOK}'} ) ``` ### Run Training Job ``` model = customJob.run( base_output_dir = f"{URI}/{TIMESTAMP}", service_account = SERVICE_ACCOUNT, args = CMDARGS, replica_count = 1, machine_type = TRAIN_COMPUTE, accelerator_count = 0, tensorboard = tb[0].resource_name ) model.display_name ``` --- ## Serving ### Create An Endpoint ``` endpoint = aiplatform.Endpoint.create( display_name = f'{NOTEBOOK}_{DATANAME}_{TIMESTAMP}', labels = {'notebook':f'{NOTEBOOK}'} ) endpoint.display_name ``` ### Deploy Model To Endpoint ``` endpoint.deploy( model = model, deployed_model_display_name = f'{NOTEBOOK}_{DATANAME}_{TIMESTAMP}', traffic_percentage = 100, machine_type = DEPLOY_COMPUTE, min_replica_count = 1, max_replica_count = 1 ) ``` --- ## Prediction ### Prepare a record for prediction: instance and parameters lists ``` pred = bigquery.query(query = f"SELECT * FROM {DATANAME}.{DATANAME}_prepped WHERE splits='TEST' LIMIT 10").to_dataframe() pred.head(4) newob = pred[pred.columns[~pred.columns.isin(VAR_OMIT.split()+[VAR_TARGET, 'splits'])]].to_dict(orient='records')[0] #newob instances = [json_format.ParseDict(newob, Value())] parameters = json_format.ParseDict({}, Value()) ``` ### Get Predictions: Python Client ``` prediction = endpoint.predict(instances=instances, parameters=parameters) prediction prediction.predictions[0] np.argmax(prediction.predictions[0]) ``` ### Get Predictions: REST ``` with open(f'{DIR}/request.json','w') as file: file.write(json.dumps({"instances": [newob]})) !curl -X POST \ -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \ -H "Content-Type: application/json; charset=utf-8" \ -d @{DIR}/request.json \ https://{REGION}-aiplatform.googleapis.com/v1/{endpoint.resource_name}:predict ``` ### Get Predictions: gcloud (CLI) ``` !gcloud beta ai endpoints predict {endpoint.name.rsplit('/',1)[-1]} --region={REGION} --json-request={DIR}/request.json ``` --- ## Remove Resources see notebook "99 - Cleanup"
github_jupyter
``` %matplotlib inline import matplotlib.pyplot as plt import numpy as np from mpl_toolkits.axes_grid1 import ImageGrid ### DEFINE USEFUL FUNCTIONS def fk(x, Ap): "A useful function" fk = 0.2*x**5 * (1 + 5/7*(2+4*Ap)*x**2 + 5/9 * (3 + 10*Ap + 4*Ap**2)*x**4) return fk def fgam(x, rc, Lp, Ap): "Annother useful function" D = (rc/Lp)**2 * (1-0.3*(rc/Lp)**2) fgam = x**3 * (-D/3 + ((1+D)/5)*x**2 + ((Ap*D-1.3)/7)*x**4) return fgam def fchi(x, ri, Lp): "Yet annother useful function" fchi = x**3 * (-1/3*(ri/Lp)**2 + 0.2*(1+(ri/Lp)**2)*x**2 - 13/70*x**4) return fchi def fc(x, delta, Ap): "The last useful function, for now" fc = x**3 * (1 - 0.6*(delta+1)*x**2 - 3/14*(delta+1)*(2*Ap-delta)*x**4) return fc def runVenusModel(Qbmo0, QbmoNow, H0M, h, TC, core_Kppm, Ppc): ### DEFINE CONSTANTS ## Fundamental mu0 = 4e-7 * np.pi # Vacuum pearmeability (SI) G = 6.67e-11 # Gravitational constant (SI) y2s = 3.154e7 # Seconds per Earth year R = 8.3145 # Universal gas constant (J/K/mol) ## Planet rp = 6052e3 # Radius of the planet (m) ## Core ri = 0 # Radius of the inner core (m) rc = 3110e3 # Radius of the core (m) K0 = 1172e9 # Effective modulus (Pa) K1 = 3.567 # Effective derivative of effective modulus rho0 = 11776 # Central density (kg/m^3) Lp = np.sqrt(3*K0/(2*np.pi*G*rho0**2)) # Length scale (m) Ap = 0.1*(5*K1-13) # Constant in density profile P0 = 341e9 # Central pressure (Pa) Cc = 750 # Specific heat of the core (J/kg/K) bet = 0.83 # Coefficient of compositional expansion for inner core alp = 0.8 # Coefficient of compositional expansion for magnesium precipitation dTLdc = -1e4 # Compositional dependence of liquidus temperature (K) dTLdP = 9e-9 # Pressure dependence of liquidus temperature (K/Pa) c0 = 0.056 # Initial mass fraction of light elements gamm = 1.5 # Gruneisen parameter TL0 = 5124 # Liquidus temperature at the center (K) DSc = 127 # Entropy of melting (J/K/kg) TcIC = TL0*(1 - (rc/Lp)**2 - Ap*(rc/Lp)**4)**gamm; # T @ CMB when IC nucleates (K) Mc = 4/3*np.pi*rho0*Lp**3*fc(rc/Lp, 0, Ap); # Mass of the core (kg) g = 4/3*np.pi*G*rho0*rc*(1-0.6*(rc/Lp)**2 - 3/7*Ap*(rc/Lp)**4) # Gravitational acceleration near the CMB (m/s^2) HlamC = 1.76e-17 # Average decay constant (1/s) h0C = 4.1834e-14 # Heating per unit mass per ppm of K in the core (W/kg/ppm) ## Basal Magma Ocean (BMO) DSm = 300 # Entropy of melting (J/K/kg)... should be 652 from Stixrude et al. but was 300 in Labrosse et al. rhoM = 5500 # Density of the basal mantle (kg/m^3) Cm = 1000 # Specific heat of the basal mantle (J/kg/K) Dphi = 0.088 # Mass fraction change of FeO-rich component upon freezing TA = 5500 # Melt temperature of the MgO-rich component (K) TB = 3500 # Melt temperature of the FeO-rich component (K) alphT = 1.25e-5 # Coefficient of thermal expansion in the BMO (1/K) OmegaE = 2*np.pi/(24*243*3600) # Rotation rate of Venus HT = Cm/(alphT*g) # Thermal scale height for the BMO (m) sigBMO = 2e4 # Electrical conductivity of the BMO (S/M) c = 0.63 # Constant prefactor in the scaling law for B-field strength fohm = 0.9 # Fraction of available power converted into magnetic field energy HlamM = 1.38e-17 # Average decay constant (1/s) rb = rc + h # Initial radius of BMO (m) ### RUN THE MODEL ## Define timesteps NN = 4500 # Number of timesteps tend = 4.5e9 * y2s # Duration of model (s) dt = tend/(NN-1) # Duration of timestep (s) t_all = np.linspace(0,tend,NN) # Create empty arrays to store parameters rb_all = np.zeros(NN) # Radius of the BMO upper boundary (m) ri_all = np.zeros(NN) # Radius of the inner core (m) TM_all = np.zeros(NN) # Temperature of the solid mantle (K) TC_all = np.zeros(NN) # Temperature of the BMO and core (K) TLi_all = np.zeros(NN) # Temperature at the inner core boundary (K) Qsm_all = np.zeros(NN) # Secular cooling of the BMO (W) Qlm_all = np.zeros(NN) # Latent heat in the BMO (W) Qsc_all = np.zeros(NN) # Secular cooling of the core (W) Qpc_all = np.zeros(NN) # Precipitation of light elements from the core (W) Qgc_all = np.zeros(NN) # Gravitational energy from inner core growth (W) Qlc_all = np.zeros(NN) # Latent heat from inner core growth (W) Qic_all = np.zeros(NN) # Conductive cooling of the inner core (W) # Initialize radiogenic heating and the heat flow into the solid mantle Qbmo_all = np.linspace(Qbmo0,QbmoNow,NN) # Heat flow into the base of the solid mantle (W) Qrm_all = H0M*np.exp(-HlamM*t_all) # Radiogenic heat in the BMO (W) Qrc_all = Mc*h0C*core_Kppm*np.exp(-HlamC*t_all) # Radiogenic heating in the core (W) ## Begin the time loop for ii, t in enumerate(t_all): # Calculate heat flow out of the BMO Qbmo = Qbmo_all[ii] # Calculate radiogenic heating in the BMO and core Qrm = Qrm_all[ii] Qrc = Qrc_all[ii] # Calculate proportionalities for BMO Mm = (4/3)*np.pi*(rb**3-rc**3)*rhoM # Mass of the BMO (kg) BigTerm = (rb**3-rc**3)/(3*rb**2*Dphi*(TA-TB)) # Gather some constants... Qsm_til = -Mm*Cm Qlm_til = -4*np.pi*rb**2*DSm*rhoM*TC*BigTerm # Calculate proportionalities for core Qpc_til = 8/3*(np.pi**2*G*rho0**2*Lp**5*alp*Ppc * (fgam(rc/Lp,rc,Lp,Ap) - fgam(ri/Lp,rc,Lp,Ap))) if TC > TcIC: # No inner core! Qsc_til = -4/3*(np.pi*rho0*Cc*Lp**3 * fc(rc/Lp, gamm, Ap) * (1-(rc/Lp)**2 - Ap*(rc/Lp)**4)**(-gamm)) Qgc_til = 0 Qlc_til = 0 Qic_til = 0 dridt = 0 TLi = TL0 else: # Yes inner core! if ri < 2e4: ri = 2e4 # Avoid dividing by zero (even once...) Mic = Mc - 4/3*np.pi*rho0*Lp**3*(fc(rc/Lp,0,Ap) - fc(ri/Lp,0,Ap)) TLi = TL0 - K0*dTLdP*(ri/Lp)**2 + dTLdc*c0*ri**3/(Lp**3*fc(rc/Lp,0,Ap)) dTLdri = -2*K0*dTLdP*ri/Lp**2 + 3*dTLdc*c0*ri**2 / (Lp**3*fc(rc/Lp,0,Ap)) rhoi = rho0 * (1 -(ri/Lp)**2 - Ap*(ri/Lp)**4) gi = 4/3*np.pi*G*rho0*ri*(1-0.6*(ri/Lp)**2 - 3/7*Ap*(ri/Lp)**4) dTadP = gamm*TLi/K0 dridTC = -(1/(dTLdP-dTadP)) * TLi/(TC*rhoi*gi) Psc = (-4/3*np.pi*rho0*Cc*Lp**3 * (1-(ri/Lp)**2-Ap*(ri/Lp)**4)**(-gamm) * (dTLdri+2*gamm*TLi*ri/Lp**2 * (1+2*Ap*(ri/Lp)**2)/(1-(ri/Lp)**2-Ap*(ri/Lp)**4)) * (fc(rc/Lp, gamm, Ap) - fc(ri/Lp, gamm, Ap))) Pgc = (8*np.pi**2*c0*G*rho0**2*bet*ri**2*Lp**2 / fc(rc/Lp,0,Ap) * (fchi(rc/Lp,ri,Lp) - fchi(ri/Lp,ri,Lp))) Plc = 4*np.pi*ri**2*rhoi*TLi*DSc Pic = Cc*Mic*dTLdP*K0*(2*ri/Lp**2 + 3.2*ri/Lp**5) Qsc_til = Psc * dridTC Qgc_til = Pgc * dridTC Qlc_til = Plc * dridTC Qic_til = Pic * dridTC # Calculate cooling rate dTCdt = (Qbmo - Qrm - Qrc)/(Qsm_til + Qlm_til + Qsc_til + Qpc_til + Qgc_til + Qlc_til + Qic_til) drbdt = BigTerm*dTCdt if TC < TcIC: dridt = dridTC * dTCdt else: dridt = 0 # Calculate all energetic terms Qsm = Qsm_til * dTCdt Qlm = Qlm_til * dTCdt Qsc = Qsc_til * dTCdt Qpc = Qpc_til * dTCdt Qgc = Qgc_til * dTCdt Qlc = Qlc_til * dTCdt Qic = Qic_til * dTCdt # Store output rb_all[ii] = rb ri_all[ii] = ri TC_all[ii] = TC TLi_all[ii] = TLi Qbmo_all[ii] = Qbmo Qsm_all[ii] = Qsm Qlm_all[ii] = Qlm Qsc_all[ii] = Qsc Qpc_all[ii] = Qpc Qgc_all[ii] = Qgc Qlc_all[ii] = Qlc Qic_all[ii] = Qic # Advance parameters one step TC = TC + dTCdt*dt rb = rb + drbdt*dt ri = ri + dridt*dt ### POST-PROCESSING ## Dynamo in BMO? # Flow velocities h_all = rb_all - rc qsm = Qbmo_all/(4*np.pi*rb_all**2) v_mix = (h_all*qsm/(rhoM*HT))**(1/3) # Mixing length theory v_CIA = (qsm/(rhoM*HT))**(2/5) * (h_all/OmegaE)**(1/5) # CIA balance v_MAC = (qsm/(rhoM*OmegaE*HT))**(1/2) # MAC balance # Magnetic Reynolds numbers Rm_mix = mu0*sigBMO*h_all*v_mix Rm_CIA = mu0*sigBMO*h_all*v_CIA Rm_MAC = mu0*sigBMO*h_all*v_MAC # Magnetic field strength at the BMO Bm_mix = np.sqrt(2*mu0*fohm*c*rhoM*v_mix**2) Bm_CIA = np.sqrt(2*mu0*fohm*c*rhoM*v_CIA**2) Bm_MAC = np.sqrt(2*mu0*fohm*c*rhoM*v_MAC**2) # Magnetic field strength at the surface Bs_mix = 1/7*Bm_mix*(rb_all/rp)**3 Bs_CIA = 1/7*Bm_CIA*(rb_all/rp)**3 Bs_MAC = 1/7*Bm_MAC*(rb_all/rp)**3 ## Dynamo in core? kc = 40 EK_all = 16*np.pi*gamm**2*kc*Lp*(fk(rc/Lp,Ap)-fk(ri_all/Lp,Ap)) Tbot_all = np.zeros(NN) for ii, ri in enumerate(ri_all): if ri > 0: Tbot_all[ii] = TLi_all[ii] else: Tbot_all[ii] = TC_all[ii]*(1-(rc/Lp)**2 - Ap*(rc/Lp)**4)**(-gamm) TS_all = Tbot_all*((1-(ri_all/Lp)**2-Ap*(ri_all/Lp)**4)**(-gamm) * (fc(rc/Lp,gamm,Ap) - fc(ri_all/Lp,gamm,Ap)) / (fc(rc/Lp,0,Ap) - fc(ri_all/Lp,0,Ap))) Tdis = ((Tbot_all / (1 - (ri_all/Lp)**2 - Ap*(ri_all/Lp)**4)**gamm) * ((fc(rc/Lp,0,Ap) - fc(ri_all/Lp,0,Ap)) / (fc(rc/Lp,-gamm,Ap) - fc(ri_all/Lp,-gamm,Ap)))) Plc_all = (Tdis*(TLi_all-TC_all)/(TLi_all*TC_all))*Qlc_all Pic_all = (Tdis*(TLi_all-TC_all)/(TLi_all*TC_all))*Qic_all Pgc_all = (Tdis/TC_all)*Qgc_all Qcc_all = Qsc_all + Qpc_all + Qgc_all + Qlc_all + Qic_all Prc_all = ((Tdis-TC_all)/TC_all)*Qrc_all Psc_all = (Tdis*(TS_all-TC_all)/(TS_all*TC_all))*Qsc_all Ppc_all = (Tdis/TC_all)*Qpc_all Pk_all = Tdis*EK_all P_inner = Plc_all + Pic_all + Pgc_all P_outer = Prc_all + Psc_all + Ppc_all - Pk_all Vc = 4/3*np.pi*(rc-ri_all)**3 # Volume of the core (m^3) rho_av = Mc/Vc # Average density of the core (kg/m^3) D_all = rc-ri_all # Thickness of the outer core (m) phi_outer = rc*g/2 # Gravitational potential at the CMB (m^2/s^2) phi_inner = ri_all**2/rc * g/2 # Gravitational potential at the inner core boundary (m^2/s^2) phi_mean = 0.3*g/rc*((rc**5-ri_all**5)/(rc**3-ri_all**3)) # Average grav. potential in the outer core (m^2/s^2) P_total = P_inner + P_outer TDM_all = np.zeros(NN) for ii, P in enumerate(P_total): if P > 0: F_inner = P_inner[ii]/(phi_mean[ii] - phi_inner[ii]) # Black magic scaling... F_outer = P_outer[ii]/(phi_outer - phi_mean[ii]) f_rat = F_inner/(F_outer+F_inner) powB = (P_inner[ii] + P_outer[ii])/(Vc[ii]*rho_av[ii]*OmegaE**3*D_all[ii]**2) b_dip = 7.3*(1-ri_all[ii]/rc)*(1+f_rat) B_rms = powB**0.34*np.sqrt(rho_av[ii]*mu0)*OmegaE*D_all[ii] TDM = np.maximum(4*np.pi*rc**3/(np.sqrt(2)*mu0) * B_rms/b_dip, 0) TDM_all[ii] = TDM else: TDM_all[ii] = 0 if TDM_all[NN-1] > 0: magicConstant = 7.94e22/TDM_all[NN-1] else: magicConstant = 1 TDM_all = magicConstant * TDM_all Bs_core = mu0*TDM_all/(4*np.pi*rp**3) return ( t_all, h_all, ri_all, TC_all, TS_all, TLi_all, Tdis, Tbot_all, Qbmo_all, Qsm_all, Qlm_all, Qrm_all, Qcc_all, Qsc_all, Qrc_all, Qpc_all, Qgc_all, Qlc_all, Qic_all, TDM_all, magicConstant, Psc_all, Prc_all, Ppc_all, Pgc_all, Plc_all, Pic_all, Pk_all, P_total, Bs_core, v_mix, v_CIA, v_MAC, Rm_mix, Rm_CIA, Rm_MAC, Bs_mix, Bs_CIA, Bs_MAC, ) def postprocessVenus(Qbmo0, QbmoNow, H0M, h, TC, core_Kppm, Ppc): (t_all, h_all, ri_all, TC_all, TS_all, TLi_all, Tdis, Tbot_all, Qbmo_all, Qsm_all, Qlm_all, Qrm_all, Qcc_all, Qsc_all, Qrc_all, Qpc_all, Qgc_all, Qlc_all, Qic_all, TDM_all, magicConstant, Psc_all, Prc_all, Ppc_all, Pgc_all, Plc_all, Pic_all, Pk_all, P_total, Bs_core, v_mix, v_CIA, v_MAC, Rm_mix, Rm_CIA, Rm_MAC, Bs_mix, Bs_CIA, Bs_MAC, ) = runVenusModel(Qbmo0, QbmoNow, H0M, h, TC, core_Kppm, Ppc) if np.argmin(Rm_mix>40) > 0: i_mix = np.argmin(Rm_mix>40) Bs_mix_now = 0 t_mix = t_all[i_mix] else: Bs_mix_now = Bs_mix[-1] t_mix = t_all[-1] if np.argmin(Rm_CIA>40) > 0: i_CIA = np.argmin(Rm_CIA>40) Bs_CIA_now = 0 t_CIA = t_all[i_CIA] else: Bs_CIA_now = Bs_CIA[-1] t_CIA = t_all[-1] if np.argmin(Rm_MAC>40) > 0: i_MAC = np.argmin(Rm_MAC>40) Bs_MAC_now = 0 t_MAC = t_all[i_MAC] else: Bs_MAC_now = Bs_MAC[-1] t_MAC = t_all[-1] h_now = h_all[-1] Qcc_now = Qcc_all[-1] Bs_core_now = Bs_core[-1] y2s = 3.154e7 return (h_now/1e3, Qcc_now/1e12, 1e6*Bs_core_now, 1e6*Bs_mix_now, 1e6*Bs_CIA_now, 1e6*Bs_MAC_now, t_mix/(y2s*1e9), t_CIA/(y2s*1e9), t_MAC/(y2s*1e9)) # "Constants" QbmoNow = 7e12 # Present-day value of Qbmo (W) H0M = 20e12 # Initial radiogenic heating (W) core_Kppm = 50 # Amount of K in the core (ppm) Ppc = 5e-6 # Precipitation rate of light elements (1/K) # Reference values Qbmo0_ref = 27e12 h0_ref = 750e3 TC0_ref = 4900 (h_now_ref, Qcc_now_ref, Bs_core_now_ref, Bs_mix_now_ref, Bs_CIA_now_ref, Bs_MAC_now_ref, t_mix_ref, t_CIA_ref, t_MAC_ref) = postprocessVenus(Qbmo0_ref, QbmoNow, H0M, h0_ref, TC0_ref, core_Kppm, Ppc) # Sensitivity test N_T0s = 50 N_Q0s = 51 N_h0s = 52 Qbmo0s = 1e12*np.linspace(20,40,N_Q0s) # Initial value of Qbmo (W) h0s = 1e3*np.linspace(300,1500,N_h0s) TC0s = np.linspace(4400,6100,N_T0s) h_h0 = np.zeros((N_h0s,N_Q0s)) h_TC = np.zeros((N_T0s,N_Q0s)) td_h0 = np.zeros((N_h0s,N_Q0s)) td_TC = np.zeros((N_T0s,N_Q0s)) for ii, Qbmo0 in enumerate(Qbmo0s): for jj, h0 in enumerate(h0s): (h_now, Qcc_now, Bs_core_now, Bs_mix_now, Bs_CIA_now, Bs_MAC_now, t_mix, t_CIA, t_MAC) = postprocessVenus(Qbmo0, QbmoNow, H0M, h0, TC0_ref, core_Kppm, Ppc) h_h0[jj,ii] = h_now td_h0[jj,ii] = t_CIA for kk, TC0 in enumerate(TC0s): (h_now, Qcc_now, Bs_core_now, Bs_mix_now, Bs_CIA_now, Bs_MAC_now, t_mix, t_CIA, t_MAC) = postprocessVenus(Qbmo0, QbmoNow, H0M, h0_ref, TC0, core_Kppm, Ppc) h_TC[kk,ii] = h_now td_TC[kk,ii] = t_CIA fig, axs = plt.subplots(2,1,figsize=(9,9)) fn = 'Arial' fs = 18 lw = 3 ax1 = plt.subplot(211) CP = plt.contourf(Qbmo0s/1e12, h0s/1e3, h_h0, 50, vmin=0, vmax=1300) plt.contour(Qbmo0s/1e12, h0s/1e3, td_h0, levels = [4.4], colors = ['w'], linewidths = [lw], linestyles = 'dashed') plt.ylabel('Initial BMO thickness (km)',fontname=fn,fontsize=fs) plt.xticks(np.linspace(20,40,5),fontname=fn, fontsize=fs) plt.yticks(fontname=fn, fontsize=fs) plt.scatter(Qbmo0_ref/1e12,h0_ref/1e3,s=300,c='white',marker='*') ax1.spines["top"].set_visible(False) ax1.spines["right"].set_visible(False) plt.minorticks_on() for c in CP.collections: c.set_edgecolor("face") plt.text(0.95,0.93,'c',color='white',fontname=fn,fontsize=fs,fontweight='bold',transform=ax1.transAxes) ax2 = plt.subplot(212) CP2 = plt.contourf(Qbmo0s/1e12, TC0s, h_TC, 50, vmin=0, vmax=1300) plt.contour(Qbmo0s/1e12, TC0s, td_TC, levels = [4.4], colors = ['w'], linewidths = [lw], linestyles = 'dashed') plt.xlabel('Initial heat flow out of the BMO (TW)',fontname=fn,fontsize=fs) plt.ylabel('Initial BMO temperature (K)',fontname=fn,fontsize=fs) plt.xticks(np.linspace(20,40,5),fontname=fn, fontsize=fs) plt.yticks(np.linspace(4500,6000,4),fontname=fn, fontsize=fs) plt.scatter(Qbmo0_ref/1e12,TC0_ref,s=300,c='white',marker='*') ax2.spines["top"].set_visible(False) ax2.spines["right"].set_visible(False) plt.minorticks_on() fig.subplots_adjust(right=0.8) cbar_ax = fig.add_axes([0.83, 0.25, 0.04, 0.5]) cbar = fig.colorbar(CP, cax=cbar_ax, ticks=np.linspace(0,1200,7)) cbar.ax.set_ylabel('Present-day thickness of the BMO (km)',fontname=fn,fontsize=fs) cbar.ax.tick_params(labelsize=fs) for c in CP2.collections: c.set_edgecolor("face") plt.text(0.95,0.93,'d',color='white',fontname=fn,fontsize=fs,fontweight='bold',transform=ax2.transAxes) plt.savefig('sensitiveVenus2.pdf') plt.show() fig, axs = plt.subplots(2,1,figsize=(9,9)) fn = 'Arial' fs = 18 lw = 3 ax1 = plt.subplot(211) CP = plt.contourf(Qbmo0s/1e12, h0s/1e3, td_h0, 50, vmin=0.34, vmax=4.5, cmap='magma_r') plt.contour(Qbmo0s/1e12, h0s/1e3, td_h0, levels = [4.4], colors = ['w'], linewidths = [lw], linestyles = 'dashed') plt.ylabel('Initial BMO thickness (km)',fontname=fn,fontsize=fs) plt.xticks(np.linspace(20,40,5),fontname=fn, fontsize=fs) plt.yticks(fontname=fn, fontsize=fs) plt.scatter(Qbmo0_ref/1e12,h0_ref/1e3,s=300,c='white',marker='*') ax1.spines["top"].set_visible(False) ax1.spines["right"].set_visible(False) plt.minorticks_on() for c in CP.collections: c.set_edgecolor("face") plt.text(0.95,0.93,'a',color='white',fontname=fn,fontsize=fs,fontweight='bold',transform=ax1.transAxes) ax2 = plt.subplot(212) CP2 = plt.contourf(Qbmo0s/1e12, TC0s, td_TC, 50, vmin=0.34, vmax=4.5, cmap='magma_r') plt.contour(Qbmo0s/1e12, TC0s, td_TC, levels = [4.4], colors = ['w'], linewidths = [lw], linestyles = 'dashed') plt.xlabel('Initial heat flow out of the BMO (TW)',fontname=fn,fontsize=fs) plt.ylabel('Initial BMO temperature (K)',fontname=fn,fontsize=fs) plt.xticks(np.linspace(20,40,5),fontname=fn, fontsize=fs) plt.yticks(np.linspace(4500,6000,4),fontname=fn, fontsize=fs) plt.scatter(Qbmo0_ref/1e12,TC0_ref,s=300,c='white',marker='*') ax2.spines["top"].set_visible(False) ax2.spines["right"].set_visible(False) plt.minorticks_on() fig.subplots_adjust(right=0.8) cbar_ax = fig.add_axes([0.83, 0.25, 0.04, 0.5]) cbar = fig.colorbar(CP, cax=cbar_ax, ticks=np.linspace(0,4.5,10)) cbar.ax.set_ylabel('Lifetime of the BMO dynamo (Gyr)',fontname=fn,fontsize=fs) cbar.ax.tick_params(labelsize=fs) for c in CP2.collections: c.set_edgecolor("face") plt.text(0.95,0.93,'b',color='white',fontname=fn,fontsize=fs,fontweight='bold',transform=ax2.transAxes) plt.savefig('sensitiveVenus1.pdf') plt.show() ```
github_jupyter
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_04_1_feature_encode.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # T81-558: Applications of Deep Neural Networks **Module 4: Training for Tabular Data** * Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx) * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). # Module 4 Material * **Part 4.1: Encoding a Feature Vector for Keras Deep Learning** [[Video]](https://www.youtube.com/watch?v=Vxz-gfs9nMQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_1_feature_encode.ipynb) * Part 4.2: Keras Multiclass Classification for Deep Neural Networks with ROC and AUC [[Video]](https://www.youtube.com/watch?v=-f3bg9dLMks&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_2_multi_class.ipynb) * Part 4.3: Keras Regression for Deep Neural Networks with RMSE [[Video]](https://www.youtube.com/watch?v=wNhBUC6X5-E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_3_regression.ipynb) * Part 4.4: Backpropagation, Nesterov Momentum, and ADAM Neural Network Training [[Video]](https://www.youtube.com/watch?v=VbDg8aBgpck&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_4_backprop.ipynb) * Part 4.5: Neural Network RMSE and Log Loss Error Calculation from Scratch [[Video]](https://www.youtube.com/watch?v=wmQX1t2PHJc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_5_rmse_logloss.ipynb) # Google CoLab Instructions The following code ensures that Google CoLab is running the correct version of TensorFlow. ``` try: %tensorflow_version 2.x COLAB = True print("Note: using Google CoLab") except: print("Note: not using Google CoLab") COLAB = False ``` # Part 4.1: Encoding a Feature Vector for Keras Deep Learning Neural networks can accept many types of data. We will begin with tabular data, where there are well defined rows and columns. This is the sort of data you would typically see in Microsoft Excel. An example of tabular data is shown below. Neural networks require numeric input. This numeric form is called a feature vector. Each row of training data typically becomes one vector. The individual input neurons each receive one feature (or column) from this vector. In this section, we will see how to encode the following tabular data into a feature vector. ``` import pandas as pd pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) #df = pd.read_csv("https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv", na_values=['NA','?']) df = pd.read_csv("D://ArtificialIntelligence//jh-simple-dataset.csv", na_values=['NA','?']) #, engine="python") pd.set_option('display.max_columns', 14) pd.set_option('display.max_rows', 5) display(df) ``` The following observations can be made from the above data: * The target column is the column that you seek to predict. There are several candidates here. However, we will initially use product. This field specifies what product someone bought. * There is an ID column. This column should not be fed into the neural network as it contains no information useful for prediction. * Many of these fields are numeric and might not require any further processing. * The income column does have some missing values. * There are categorical values: job, area, and product. To begin with, we will convert the job code into dummy variables. ``` pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) dummies = pd.get_dummies(df['job'],prefix="job") print(dummies.shape) pd.set_option('display.max_columns', 9) pd.set_option('display.max_rows', 10) display(dummies) ``` Because there are 33 different job codes, there are 33 dummy variables. We also specified a prefix, because the job codes (such as "ax") are not that meaningful by themselves. Something such as "job_ax" also tells us the origin of this field. Next, we must merge these dummies back into the main data frame. We also drop the original "job" field, as it is now represented by the dummies. ``` pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) df = pd.concat([df,dummies],axis=1) df.drop('job', axis=1, inplace=True) pd.set_option('display.max_columns', 9) pd.set_option('display.max_rows', 10) display(df) ``` We also introduce dummy variables for the area column. ``` pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1) df.drop('area', axis=1, inplace=True) pd.set_option('display.max_columns', 9) pd.set_option('display.max_rows', 10) display(df) ``` The last remaining transformation is to fill in missing income values. ``` med = df['income'].median() df['income'] = df['income'].fillna(med) ``` There are more advanced ways of filling in missing values, but they require more analysis. The idea would be to see if another field might give a hint as to what the income were. For example, it might be beneficial to calculate a median income for each of the areas or job categories. This is something to keep in mind for the class Kaggle competition. At this point, the Pandas dataframe is ready to be converted to Numpy for neural network training. We need to know a list of the columns that will make up *x* (the predictors or inputs) and *y* (the target). The complete list of columns is: ``` print(list(df.columns)) ``` This includes both the target and predictors. We need a list with the target removed. We also remove **id** because it is not useful for prediction. ``` x_columns = df.columns.drop('product').drop('id') print(list(x_columns)) ``` ### Generate X and Y for a Classification Neural Network We can now generate *x* and *y*. Note, this is how we generate y for a classification problem. Regression would not use dummies and would simply encode the numeric value of the target. ``` # Convert to numpy - Classification x_columns = df.columns.drop('product').drop('id') x = df[x_columns].values dummies = pd.get_dummies(df['product']) # Classification products = dummies.columns y = dummies.values ``` We can display the *x* and *y* matrices. ``` print(x) print(y) ``` The x and y values are now ready for a neural network. Make sure that you construct the neural network for a classification problem. Specifically, * Classification neural networks have an output neuron count equal to the number of classes. * Classification neural networks should use **categorical_crossentropy** and a **softmax** activation function on the output layer. ### Generate X and Y for a Regression Neural Network For a regression neural network, the *x* values are generated the same. However, *y* does not use dummies. Make sure to replace **income** with your actual target. ``` y = df['income'].values ``` # Module 4 Assignment You can find the first assignment here: [assignment 4](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class1.ipynb)
github_jupyter
# JAMSTEC DB スキーム確認ツール for APEX(APF11) ``` import os import pandas as pd import re import termcolor import Levenshtein # レーベンシュタイン距離ライブラリにある、ジャロ・ウインクラー距離を計算するのに使う # jaro_dist = Levenshtein.jaro_winkler(srt1 , str2) apf11_excel = pd.read_excel('Apex_apf11.xlsx' , sheet_name=None) # sheet_name=Noneで全てのシート読み込む ``` ### ジャロ・ウィンクラー距離法の関数 2つの文字列を移動したり修正したりした回数等で計算する。1で一致とみなされる。 Winkler, W. E. (1990). "String Comparator Metrics and Enhanced Decision Rules in the Fellegi-Sunter Model of Record Linkage". Proceedings of the Section on Survey Research Methods. American Statistical Association: 354–359. ``` def jaro_dist(str1,str2): return Levenshtein.jaro_winkler(str1,str2) print(apf11_excel['scheme'].columns) ``` ### エクセルファイルの列を数えて抜き出す ``` msgcol = apf11_excel['scheme'].iloc[:,1] a26msg = apf11_excel['scheme'].iloc[:,7] # 7列目がA26のmsg , 8列目はlog 以降同様に数える x1a27system = apf11_excel['scheme'].iloc[:,10] x2x3a28a29system = apf11_excel['scheme'].iloc[:,12] a30a31x4system = apf11_excel['scheme'].iloc[:,14] a33a34a35system = apf11_excel['scheme'].iloc[:,16] scheme = pd.concat([msgcol,a26msg,x1a27system,x2x3a28a29system,a30a31x4system,a33a34a35system], axis=1)\ .rename(columns={'Unnamed: 1':'field_name', 'Unnamed: 7':'a26msg' , 'Unnamed: 10':'x1a27system' , 'Unnamed: 12':'x2x3a28a29system' , 'Unnamed: 14':'a30a31x4system' , 'Unnamed: 16':'a33a34a35system'}) techcol = apf11_excel['tech'].iloc[:,1] a26msg = apf11_excel['tech'].iloc[:,7] a26log = apf11_excel['tech'].iloc[:,8] x1a27science = apf11_excel['tech'].iloc[:,9] x1a27vitals = apf11_excel['tech'].iloc[:,10] x1a27systemtech = apf11_excel['tech'].iloc[:,11] x2x3a28a29a30a31a33a34science = apf11_excel['tech'].iloc[:,12] x2x3a28a29a30a31a33a34vitals = apf11_excel['tech'].iloc[:,13] x2x3a28a29a30a31a33a34systemtech = apf11_excel['tech'].iloc[:,14] a28science = apf11_excel['tech'].iloc[:,15] a28vitals = apf11_excel['tech'].iloc[:,16] a28system = apf11_excel['tech'].iloc[:,17] tech = pd.concat([techcol,a26msg,a26log,x1a27science,x1a27vitals,x1a27systemtech,x2x3a28a29a30a31a33a34science,x2x3a28a29a30a31a33a34vitals,x2x3a28a29a30a31a33a34systemtech,a28science,a28vitals,a28system], axis=1)\ .rename(columns={'Unnamed: 1':'field_name'\ , 'Unnamed: 7':'a26msg' , 'Unnamed: 8':'a26log'\ , 'Unnamed: 9':'x1a27sicence' , 'Unnamed: 10':'x1a27vitals' , 'Unnamed: 11':'x1a27system'\ , 'Unnamed: 12':'x2x3a28a29a30a31a33a34science' , 'Unnamed: 13':'x2x3a28a29a30a31a33a34vitals' , 'Unnamed: 14':'x2x3a28a29a30a31a33a34system'\ , 'Unnamed: 15':'a28science' , 'Unnamed: 16':'a28vitals' , 'Unnamed: 17':'a28system'}) ``` ### 生データ読み込み ``` with open('rawdata/A34/f8900.001.20200310T174432.system_log.txt','r') as sys: sysline = sys.readlines() with open('rawdata/A34/f8900.001.20200310T174432.vitals_log.csv','r') as vit: vitline = vit.readlines() with open('rawdata/A34/f8900.001.merge.science_log.csv','r') as sci: sciline = sci.readlines() ``` ### system_log読み込み 各行末をパラメータ名と判断する 後ろからみて|があったらそこまでをパラメータと判断する ``` regex = r'|$' pt = re.compile(regex) reg2 = r'^[A-Z]' pt2 = re.compile(reg2) sysdata = {} i = 0 for line in sysline: if(pt.match(line)): param = str(line.split('|')[-1:]) # 前と後ろの邪魔な文字を削除 para = param[2:] para2 = para[:-4] # 数字は削除する,先頭のみ取得 para3 = str(para2.split(' ')[0]) if(pt2.match(para3)): sysdata[i] = para3 i += 1 #print(sysdata) sysdf = pd.DataFrame(sysdata.values() , columns={'field_name'}) #print(sysdf) ``` ### science_log 読み込み CTD_P/CTD_PTS/CTD_CPと数値があるのみなので比較する必要なし ### vitals_log 読み込み vitals_coreがカンマ区切りなのでそのまま読み取る。 カラム数が重要(5項目めがBatteryVoltage等と決まる ###### Excel の内容判断は文章化されていて困難なのでテーブルとして以下を予め設定する(202010最新のA34が基準) |Column|DB table name| | ---- | ---- | |3|AirBladderPressure| |5|voltage_load<br>QuiescentVolts<br>Sbe41cpVolts<br>BuoyancyPumpVolts<br>BuoyancyPumpVolts2<br>AirPumpVolts| |7|Humidity| |8|LeakDetectVolts| |9|Vacuum| |11|CoulmbCounterAmphrs| ``` regex = r'^VITALS' # VITALS_COREの行は取得する pattern = re.compile(regex) vitdata = [] for line in vitline: if(pattern.match(line)): vitdata.append(line.split(',')) print(line.split(',')) # 以下のような感じで数値が入ってたら項目ありと判断するだけ。 print((vitdata[0][4])) print(scheme['a33a34a35system']) print(sysdf['field_name'][37:]) # 結果保存用のDFを準備 resultdf = pd.DataFrame(columns=['index','msg','xls','score']) for line in sysdf['field_name']: query = scheme['a33a34a35system'].str.startswith(line , na=False) #print(query.values) if (query[query == True].first_valid_index()): print( termcolor.colored(line + ' field exists.','blue')) else: print( termcolor.colored(line + ' is not found.' , 'red')) for index,item in scheme.iterrows(): score = jaro_dist(line , str(item['a33a34a35system']) ) # 引数を入れ替えると結果が多少変わる。 #print(line + ' is probably ' + str(item['a33a34a35system']) + ' ( ' + str(round(score,2)*100) + '%)' ) record = pd.Series([index , line , item['a33a34a35system'] , score] , index=resultdf.columns) resultdf = resultdf.append(record , ignore_index=True) print(resultdf) ``` ### ソートして上位3件を表示 ``` items = len(scheme.dropna()) # Index数、この数分ジャロ・ウインクラー距離を計算したら次のマッチしなかった語句になる kazu = int( len(resultdf) / items ) # クエリーのリストになくnot foundで表示した数 for count in range(kazu): res = resultdf[items*count : items*(count+1)] msgrank = res.sort_values('score',ascending=False)[:3] # Score降順にソートして上から3つを表示 if((re.match('^.',msgrank.iat[0,1])) is not None ): # print(ranking.iat[0,3]) # score が0と1の時は抜く(アルゴリズム?で100%が結構出てる。 disp_rank = str(msgrank.iat[0,1]) + ' is probably ' + str(msgrank.iat[0,2]) + ' ( ' + str(round(msgrank.iat[0,3] , 2) * 100 ) + '% )' + '\n' \ + ' or ' + str(msgrank.iat[1,2]) + ' ( ' + str(round(msgrank.iat[1,3] , 2) * 100 ) + '% )' + '\n' \ + ' or ' + str(msgrank.iat[2,2]) + ' ( ' + str(round(msgrank.iat[2,3] , 2) * 100 ) + '% )' + '\n' print( disp_rank ) ```
github_jupyter
# 虚谷物联之复合数据绘图 ## 1. 范例说明 复合数据,即彼此关联的数据,放在一个消息主题(topicID)中。在物联网应用中,尤其是物联网数据采集中,很多数据是相互关联的。如果分散在不同的topicID中,给观察和研究都带来不便。如科学实验中,会使用多个同类传感器进行比较;校园气象信息项目中,温湿度数据、光照数据等彼此关联。在虚谷物联项目中,我们称这类数据为复合数据。 与复合数据对应的,就是常规的单一数据了。因为SIoT的Web管理页面中本来就能呈现这种单一数据,用掌控板或者Mind+来绘制图表,都比较容易。该案例演示的就是如果在一个图表中,将多个数据同时绘制出来。 1)数据类型:2个数据,用“,”分开。 2)涉及资源:siot服务器,siot库、matplotlib库。 3)文档写作:谢作如 4)参考网站:https://github.com/vvlink/SIoT 5)其他说明:本代码范例可以移植到其他平台。因为虚谷号已经默认安装了siot库,也预装了siot服务器,使用虚谷号的用户可以省略这一步。 ## 2. 代码编写 ### 2.1 传感器数据采集 数据采集端指利用掌控板、Arduino或者虚谷号同时采集多个传感器的数据,发送到SIoT服务器。 TopicID名称为:xzr/100 数据格式:多种传感器的数据,用英文的逗号“,”分隔,如“22.1,35.0”。 传感器数据采集的方案很多,仅硬件就有很多种,代码略。具体请参考:https://github.com/vvlink/siot ### 2.2 同类数据呈现 复合数据中,有些是同类的,比如多个温度传感器,多个湿度传感器。同类的数据可以在统一坐标轴中绘制图表,比较简单。 **第一步:导入库** siot库是对mqtt库对二次封装,让代码更加简洁。 ``` import siot ``` **第二步:配置并连接SIOT服务器** 虚谷号可以用“127.0.0.1”表示本机,用户名和密码统一使用“scope”,topicid自己定义,这里用的是“xzr/100”,表示项目名称为“xzr”,设备名称为“100”。 ``` SERVER = "127.0.0.1" #MQTT服务器IP CLIENT_ID = "" #在SIoT上,CLIENT_ID可以留空 IOT_pubTopic = 'xzr/100' #“topic”为“项目名称/设备名称” IOT_UserName ='scope' #用户名 IOT_PassWord ='scope' #密码 # 连接服务器 siot.init(CLIENT_ID, SERVER, user=IOT_UserName, password=IOT_PassWord) siot.connect() ``` **第三步:编写绘图函数** 因为在jupyter上运行,为了能够动态显示图表,特意加了“display.clear_output(wait=True)”,如果直接运行.py文件,请删除如下几句: %matplotlib inline from IPython import display display.clear_output(wait=True) ``` import matplotlib.pyplot as plt %matplotlib inline from IPython import display x,p1,p2=[],[],[] i=0 w=20 #设置数据的长度 def draw(v1,v2): global x,i,p1,p2 i=i+1 x.append(i) p1.append(v1) p2.append(v2) # 保持数据长度,避免图表越来越小 if len(x)>w: x.pop(0) p1.pop(0) p2.pop(0) fig = plt.figure() plt.plot(x,p1,color="red",linewidth=1) plt.plot(x,p2,color="blue",linewidth=1) display.clear_output(wait=True) plt.show() ``` **第四步:订阅消息** “siot.subscribe(IOT_pubTopic, sub_cb)”中,“sub_cb”是回调函数名称。当“siot”对象收到一次消息,就会执行一次回调函数。在回调函数中调用绘图函数。 需要注意的是,回调函数中如果存在代码错误,Python是不会输出的。这对代码调试带来了一定的难度。 ``` def sub_cb(client, userdata, msg): print("\nTopic:" + str(msg.topic) + " Message:" + str(msg.payload)) # msg.payload是bytes类型,要转换 s=msg.payload.decode() ss=s.split(',') draw(ss[0],ss[1]) siot.subscribe(IOT_pubTopic, sub_cb) siot.loop() ``` 接下来,我们就能够看到动态刷新的数据了。 **注:重新运行程序的时候,需先选择上方“服务”->再选择“重启 & 清空输出”。** ### 2.3 不同类别数据呈现 不同类别的数据,不能放在一个坐标轴中,比如温度的30和湿度的30,不是一个单位的量。这里采用了多条坐标轴的形式来解决,让一个图表有两条Y轴,分别作为两个数据的刻度。 **注意:请先选择上方“服务”->再选择“重启 & 清空输出”,否则将会收到多次的数据。** ``` import matplotlib.pyplot as plt %matplotlib inline from IPython import display x,p1,p2=[],[],[] i=0 w=20 #设置数据的长度 def draw(v1,v2): global x,i,p1,p2 i=i+1 x.append(i) p1.append(v1) p2.append(v2) # 保持数据长度,避免图表越来越小 if len(x)>w: x.pop(0) p1.pop(0) p2.pop(0) fig,ax1 = plt.subplots() ax2 = ax1.twinx() ax1.set_xlabel('X data') ax1.set_ylabel('data Y1',color="red") ax2.set_ylabel('data Y2',color="blue") #如果知道数值的区间,可以先设置好刻度 ax1.set_yticks(range(0,50,10)) ax2.set_yticks(range(0,100,20)) ax1.plot(x,p1,'g-') ax2.plot(x,p2,'b-') display.clear_output(wait=True) plt.show() def sub_cb(client, userdata, msg): print("\nTopic:" + str(msg.topic) + " Message:" + str(msg.payload)) # msg.payload是bytes类型,要转换 s=msg.payload.decode() ss=s.split(',') draw(ss[0],ss[1]) siot.subscribe(IOT_pubTopic, sub_cb) siot.loop() ```
github_jupyter
<center> <img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" /> </center> # Sets in Python Estimated time needed: **20** minutes ## Objectives After completing this lab you will be able to: - Work with sets in Python, including operations and logic operations. <h2>Table of Contents</h2> <div class="alert alert-block alert-info" style="margin-top: 20px"> <ul> <li> <a href="#set">Sets</a> <ul> <li><a href="content">Set Content</a></li> <li><a href="op">Set Operations</a></li> <li><a href="logic">Sets Logic Operations</a></li> </ul> </li> <li> <a href="#quiz">Quiz on Sets</a> </li> </ul> </div> <hr> <h2 id="set">Sets</h2> <h3 id="content">Set Content</h3> A set is a unique collection of objects in Python. You can denote a set with a curly bracket <b>{}</b>. Python will automatically remove duplicate items: ``` # Create a set set1 = {"pop", "rock", "soul", "hard rock", "rock", "R&B", "rock", "disco"} set1 ``` The process of mapping is illustrated in the figure: <img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork/labs/Module%202/images/SetsUnique.png" width="1100" /> You can also create a set from a list as follows: ``` # Convert list to set album_list = [ "Michael Jackson", "Thriller", 1982, "00:42:19", \ "Pop, Rock, R&B", 46.0, 65, "30-Nov-82", None, 10.0] album_set = set(album_list) album_set ``` Now let us create a set of genres: ``` # Convert list to set music_genres = set(["pop", "pop", "rock", "folk rock", "hard rock", "soul", \ "progressive rock", "soft rock", "R&B", "disco"]) music_genres ``` <h3 id="op">Set Operations</h3> Let us go over set operations, as these can be used to change the set. Consider the set <b>A</b>: ``` # Sample set A = set(["Thriller", "Back in Black", "AC/DC"]) A ``` We can add an element to a set using the <code>add()</code> method: ``` # Add element to set A.add("NSYNC") A ``` If we add the same element twice, nothing will happen as there can be no duplicates in a set: ``` # Try to add duplicate element to the set A.add("NSYNC") A ``` We can remove an item from a set using the <code>remove</code> method: ``` # Remove the element from set A.remove("NSYNC") A ``` We can verify if an element is in the set using the <code>in</code> command: ``` # Verify if the element is in the set "AC/DC" in A ``` <h3 id="logic">Sets Logic Operations</h3> Remember that with sets you can check the difference between sets, as well as the symmetric difference, intersection, and union: Consider the following two sets: ``` # Sample Sets album_set1 = set(["Thriller", 'AC/DC', 'Back in Black']) album_set2 = set([ "AC/DC", "Back in Black", "The Dark Side of the Moon"]) ``` <img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork/labs/Module%202/images/SetsSamples.png" width="650" /> ``` # Print two sets album_set1, album_set2 ``` As both sets contain <b>AC/DC</b> and <b>Back in Black</b> we represent these common elements with the intersection of two circles. <img src = "https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork/labs/Module%202/images/SetsLogic.png" width = "650" /> You can find the intersect of two sets as follow using <code>&</code>: ``` # Find the intersections intersection = album_set1 & album_set2 intersection ``` You can find all the elements that are only contained in <code>album_set1</code> using the <code>difference</code> method: ``` # Find the difference in set1 but not set2 album_set1.difference(album_set2) ``` You only need to consider elements in <code>album_set1</code>; all the elements in <code>album_set2</code>, including the intersection, are not included. <img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork/labs/Module%202/images/SetsLeft.png" width="650" /> The elements in <code>album_set2</code> but not in <code>album_set1</code> is given by: ``` album_set2.difference(album_set1) ``` <img src = "https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork/labs/Module%202/images/SetsRight.png" width="650" /> You can also find the intersection of <code>album_list1</code> and <code>album_list2</code>, using the <code>intersection</code> method: ``` # Use intersection method to find the intersection of album_list1 and album_list2 album_set1.intersection(album_set2) ``` This corresponds to the intersection of the two circles: <img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork/labs/Module%202/images/SetsIntersect.png" width="650" /> The union corresponds to all the elements in both sets, which is represented by coloring both circles: <img src = "https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork/labs/Module%202/images/SetsUnion.png" width="650" /> The union is given by: ``` # Find the union of two sets album_set1.union(album_set2) ``` And you can check if a set is a superset or subset of another set, respectively, like this: ``` # Check if superset set(album_set1).issuperset(album_set2) # Check if subset set(album_set2).issubset(album_set1) ``` Here is an example where <code>issubset()</code> and <code>issuperset()</code> return true: ``` # Check if subset set({"Back in Black", "AC/DC"}).issubset(album_set1) # Check if superset album_set1.issuperset({"Back in Black", "AC/DC"}) ``` <hr> <h2 id="quiz">Quiz on Sets</h2> Convert the list <code>['rap','house','electronic music', 'rap']</code> to a set: ``` # Write your code below and press Shift+Enter to execute set(['rap','house','electronic music', 'rap']) ``` <details><summary>Click here for the solution</summary> ```python set(['rap','house','electronic music','rap']) ``` </details> <hr> Consider the list <code>A = [1, 2, 2, 1]</code> and set <code>B = set([1, 2, 2, 1])</code>, does <code>sum(A) = sum(B)</code> ``` # Write your code below and press Shift+Enter to execute A = [1, 2, 2, 1] B = set([1, 2, 2, 1]) print("the sum of A is:", sum(A)) print("the sum of B is:", sum(B)) if sum(A) == sum(B): print("the sum of A and the sum of B is equal") else: print("the sum of A and the sum of B is different") ``` <details><summary>Click here for the solution</summary> ```python A = [1, 2, 2, 1] B = set([1, 2, 2, 1]) print("the sum of A is:", sum(A)) print("the sum of B is:", sum(B)) ``` </details> <hr> Create a new set <code>album_set3</code> that is the union of <code>album_set1</code> and <code>album_set2</code>: ``` # Write your code below and press Shift+Enter to execute album_set1 = set(["Thriller", 'AC/DC', 'Back in Black']) album_set2 = set([ "AC/DC", "Back in Black", "The Dark Side of the Moon"]) album_set3 = album_set1.union(album_set2) album_set3 ``` <details><summary>Click here for the solution</summary> ```python album_set3 = album_set1.union(album_set2) album_set3 ``` </details> <hr> Find out if <code>album_set1</code> is a subset of <code>album_set3</code>: ``` # Write your code below and press Shift+Enter to execute ``` <details><summary>Click here for the solution</summary> ```python album_set1.issubset(album_set3) ``` </details> <hr> <h2>The last exercise!</h2> <p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/" target="_blank">this article</a> to learn how to share your work. <hr> ## Author <a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a> ## Other contributors <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a> ## Change Log | Date (YYYY-MM-DD) | Version | Changed By | Change Description | | ----------------- | ------- | ---------- | ---------------------------------- | | 2020-08-26 | 2.0 | Lavanya | Moved lab to course repo in GitLab | | | | | | | | | | | ## <h3 align="center"> © IBM Corporation 2020. All rights reserved. <h3/>
github_jupyter
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/deploy-to-cloud/model-register-and-deploy.png) # Register model and deploy as webservice in ACI Following this notebook, you will: - Learn how to register a model in your Azure Machine Learning Workspace. - Deploy your model as a web service in an Azure Container Instance. ## Prerequisites If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration notebook](../../../configuration.ipynb) to install the Azure Machine Learning Python SDK and create a workspace. ``` import azureml.core # Check core SDK version number. print('SDK version:', azureml.core.VERSION) ``` ## Initialize workspace Create a [Workspace](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace%28class%29?view=azure-ml-py) object from your persisted configuration. ``` from azureml.core import Workspace ws = Workspace.from_config() print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n') ``` ## Create trained model For this example, we will train a small model on scikit-learn's [diabetes dataset](https://scikit-learn.org/stable/datasets/index.html#diabetes-dataset). ``` import joblib from sklearn.datasets import load_diabetes from sklearn.linear_model import Ridge dataset_x, dataset_y = load_diabetes(return_X_y=True) model = Ridge().fit(dataset_x, dataset_y) joblib.dump(model, 'sklearn_regression_model.pkl') ``` ## Register input and output datasets Here, you will register the data used to create the model in your workspace. ``` import numpy as np from azureml.core import Dataset np.savetxt('features.csv', dataset_x, delimiter=',') np.savetxt('labels.csv', dataset_y, delimiter=',') datastore = ws.get_default_datastore() datastore.upload_files(files=['./features.csv', './labels.csv'], target_path='sklearn_regression/', overwrite=True) input_dataset = Dataset.Tabular.from_delimited_files(path=[(datastore, 'sklearn_regression/features.csv')]) output_dataset = Dataset.Tabular.from_delimited_files(path=[(datastore, 'sklearn_regression/labels.csv')]) ``` ## Register model Register a file or folder as a model by calling [Model.register()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py#register-workspace--model-path--model-name--tags-none--properties-none--description-none--datasets-none--model-framework-none--model-framework-version-none--child-paths-none-). In addition to the content of the model file itself, your registered model will also store model metadata -- model description, tags, and framework information -- that will be useful when managing and deploying models in your workspace. Using tags, for instance, you can categorize your models and apply filters when listing models in your workspace. Also, marking this model with the scikit-learn framework will simplify deploying it as a web service, as we'll see later. ``` import sklearn from azureml.core import Model from azureml.core.resource_configuration import ResourceConfiguration model = Model.register(workspace=ws, model_name='my-sklearn-model', # Name of the registered model in your workspace. model_path='./sklearn_regression_model.pkl', # Local file to upload and register as a model. model_framework=Model.Framework.SCIKITLEARN, # Framework used to create the model. model_framework_version=sklearn.__version__, # Version of scikit-learn used to create the model. sample_input_dataset=input_dataset, sample_output_dataset=output_dataset, resource_configuration=ResourceConfiguration(cpu=1, memory_in_gb=0.5), description='Ridge regression model to predict diabetes progression.', tags={'area': 'diabetes', 'type': 'regression'}) print('Name:', model.name) print('Version:', model.version) ``` ## Deploy model Deploy your model as a web service using [Model.deploy()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py#deploy-workspace--name--models--inference-config--deployment-config-none--deployment-target-none-). Web services take one or more models, load them in an environment, and run them on one of several supported deployment targets. For more information on all your options when deploying models, see the [next steps](#Next-steps) section at the end of this notebook. For this example, we will deploy your scikit-learn model to an Azure Container Instance (ACI). ### Use a default environment (for supported models) The Azure Machine Learning service provides a default environment for supported model frameworks, including scikit-learn, based on the metadata you provided when registering your model. This is the easiest way to deploy your model. Even when you deploy your model to ACI with a default environment you can still customize the deploy configuration (i.e. the number of cores and amount of memory made available for the deployment) using the [AciWebservice.deploy_configuration()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.webservice.aci.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none--). Look at the "Use a custom environment" section of this notebook for more information on deploy configuration. **Note**: This step can take several minutes. ``` service_name = 'my-sklearn-service' service = Model.deploy(ws, service_name, [model], overwrite=True) service.wait_for_deployment(show_output=True) ``` After your model is deployed, perform a call to the web service using [service.run()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice%28class%29?view=azure-ml-py#run-input-). ``` import json input_payload = json.dumps({ 'data': dataset_x[0:2].tolist(), 'method': 'predict' # If you have a classification model, you can get probabilities by changing this to 'predict_proba'. }) output = service.run(input_payload) print(output) ``` When you are finished testing your service, clean up the deployment with [service.delete()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice%28class%29?view=azure-ml-py#delete--). ``` service.delete() ``` ### Use a custom environment If you want more control over how your model is run, if it uses another framework, or if it has special runtime requirements, you can instead specify your own environment and scoring method. Custom environments can be used for any model you want to deploy. Specify the model's runtime environment by creating an [Environment](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.environment%28class%29?view=azure-ml-py) object and providing the [CondaDependencies](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.conda_dependencies.condadependencies?view=azure-ml-py) needed by your model. ``` from azureml.core import Environment from azureml.core.conda_dependencies import CondaDependencies environment = Environment('my-sklearn-environment') environment.python.conda_dependencies = CondaDependencies.create(pip_packages=[ 'azureml-defaults', 'inference-schema[numpy-support]', 'joblib', 'numpy', 'scikit-learn=={}'.format(sklearn.__version__) ]) ``` When using a custom environment, you must also provide Python code for initializing and running your model. An example script is included with this notebook. ``` with open('score.py') as f: print(f.read()) ``` Deploy your model in the custom environment by providing an [InferenceConfig](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.inferenceconfig?view=azure-ml-py) object to [Model.deploy()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py#deploy-workspace--name--models--inference-config--deployment-config-none--deployment-target-none-). In this case we are also using the [AciWebservice.deploy_configuration()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.webservice.aci.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none--) method to generate a custom deploy configuration. **Note**: This step can take several minutes. ``` from azureml.core.model import InferenceConfig from azureml.core.webservice import AciWebservice service_name = 'my-custom-env-service' inference_config = InferenceConfig(entry_script='score.py', environment=environment) aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1) service = Model.deploy(workspace=ws, name=service_name, models=[model], inference_config=inference_config, deployment_config=aci_config, overwrite=True) service.wait_for_deployment(show_output=True) ``` After your model is deployed, make a call to the web service using [service.run()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice%28class%29?view=azure-ml-py#run-input-). ``` input_payload = json.dumps({ 'data': dataset_x[0:2].tolist() }) output = service.run(input_payload) print(output) ``` When you are finished testing your service, clean up the deployment with [service.delete()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice%28class%29?view=azure-ml-py#delete--). ``` service.delete() ``` ### Model Profiling Profile your model to understand how much CPU and memory the service, created as a result of its deployment, will need. Profiling returns information such as CPU usage, memory usage, and response latency. It also provides a CPU and memory recommendation based on the resource usage. You can profile your model (or more precisely the service built based on your model) on any CPU and/or memory combination where 0.1 <= CPU <= 3.5 and 0.1GB <= memory <= 15GB. If you do not provide a CPU and/or memory requirement, we will test it on the default configuration of 3.5 CPU and 15GB memory. In order to profile your model you will need: - a registered model - an entry script - an inference configuration - a single column tabular dataset, where each row contains a string representing sample request data sent to the service. Please, note that profiling is a long running operation and can take up to 25 minutes depending on the size of the dataset. At this point we only support profiling of services that expect their request data to be a string, for example: string serialized json, text, string serialized image, etc. The content of each row of the dataset (string) will be put into the body of the HTTP request and sent to the service encapsulating the model for scoring. Below is an example of how you can construct an input dataset to profile a service which expects its incoming requests to contain serialized json. In this case we created a dataset based one hundred instances of the same request data. In real world scenarios however, we suggest that you use larger datasets with various inputs, especially if your model resource usage/behavior is input dependent. You may want to register datasets using the register() method to your workspace so they can be shared with others, reused and referred to by name in your script. You can try get the dataset first to see if it's already registered. ``` from azureml.core import Datastore from azureml.core.dataset import Dataset from azureml.data import dataset_type_definitions dataset_name='diabetes_sample_request_data' dataset_registered = False try: sample_request_data = Dataset.get_by_name(workspace = ws, name = dataset_name) dataset_registered = True except: print("The dataset {} is not registered in workspace yet.".format(dataset_name)) if not dataset_registered: # create a string that can be utf-8 encoded and # put in the body of the request serialized_input_json = json.dumps({ 'data': [ [ 0.03807591, 0.05068012, 0.06169621, 0.02187235, -0.0442235, -0.03482076, -0.04340085, -0.00259226, 0.01990842, -0.01764613] ] }) dataset_content = [] for i in range(100): dataset_content.append(serialized_input_json) dataset_content = '\n'.join(dataset_content) file_name = "{}.txt".format(dataset_name) f = open(file_name, 'w') f.write(dataset_content) f.close() # upload the txt file created above to the Datastore and create a dataset from it data_store = Datastore.get_default(ws) data_store.upload_files(['./' + file_name], target_path='sample_request_data') datastore_path = [(data_store, 'sample_request_data' +'/' + file_name)] sample_request_data = Dataset.Tabular.from_delimited_files( datastore_path, separator='\n', infer_column_types=True, header=dataset_type_definitions.PromoteHeadersBehavior.NO_HEADERS) sample_request_data = sample_request_data.register(workspace=ws, name=dataset_name, create_new_version=True) ``` Now that we have an input dataset we are ready to go ahead with profiling. In this case we are testing the previously introduced sklearn regression model on 1 CPU and 0.5 GB memory. The memory usage and recommendation presented in the result is measured in Gigabytes. The CPU usage and recommendation is measured in CPU cores. ``` from datetime import datetime environment = Environment('my-sklearn-environment') environment.python.conda_dependencies = CondaDependencies.create(pip_packages=[ 'azureml-defaults', 'inference-schema[numpy-support]', 'joblib', 'numpy', 'scikit-learn=={}'.format(sklearn.__version__) ]) inference_config = InferenceConfig(entry_script='score.py', environment=environment) # if cpu and memory_in_gb parameters are not provided # the model will be profiled on default configuration of # 3.5CPU and 15GB memory profile = Model.profile(ws, 'rgrsn-%s' % datetime.now().strftime('%m%d%Y-%H%M%S'), [model], inference_config, input_dataset=sample_request_data, cpu=1.0, memory_in_gb=0.5) # profiling is a long running operation and may take up to 25 min profile.wait_for_completion(True) details = profile.get_details() ``` ### Model packaging If you want to build a Docker image that encapsulates your model and its dependencies, you can use the model packaging option. The output image will be pushed to your workspace's ACR. You must include an Environment object in your inference configuration to use `Model.package()`. ```python package = Model.package(ws, [model], inference_config) package.wait_for_creation(show_output=True) # Or show_output=False to hide the Docker build logs. package.pull() ``` Instead of a fully-built image, you can also generate a Dockerfile and download all the assets needed to build an image on top of your Environment. ```python package = Model.package(ws, [model], inference_config, generate_dockerfile=True) package.wait_for_creation(show_output=True) package.save("./local_context_dir") ``` ## Next steps - To run a production-ready web service, see the [notebook on deployment to Azure Kubernetes Service](../production-deploy-to-aks/production-deploy-to-aks.ipynb). - To run a local web service, see the [notebook on deployment to a local Docker container](../deploy-to-local/register-model-deploy-local.ipynb). - For more information on datasets, see the [notebook on training with datasets](../../work-with-data/datasets-tutorial/train-with-datasets/train-with-datasets.ipynb). - For more information on environments, see the [notebook on using environments](../../training/using-environments/using-environments.ipynb). - For information on all the available deployment targets, see [&ldquo;How and where to deploy models&rdquo;](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#choose-a-compute-target).
github_jupyter
``` %matplotlib inline import os import numpy as np import pandas as pd from matplotlib.patches import Rectangle as rect import matplotlib.pyplot as plt ``` ## Model background Here is an example based on the model of Freyberg, 1988. The synthetic model is a 2-dimensional MODFLOW model with 1 layer, 40 rows, and 20 columns. The model has 2 stress periods: an initial steady-state stress period used for calibration, and a 5-year transient stress period. The calibration period uses the recharge and well flux of Freyberg, 1988; the last stress period use 25% less recharge and 25% more pumping. The inverse problem has 761 parameters: hydraulic conductivity of each active model cell, calibration and forecast period recharge multipliers, storage and specific yield, calibration and forecast well flux for each of the six wells, and river bed conductance for each 40 cells with river-type boundary conditions. The inverse problem has 12 head obseravtions, measured at the end of the steady-state calibration period. The forecasts of interest include the sw-gw exchange flux during both stress periods (observations named ```sw_gw_0``` and ``sw_gw_1``), and the water level in well cell 6 located in at row 28 column 5 at the end of the stress periods (observations named ```or28c05_0``` and ```or28c05_1```). The forecasts are included in the Jacobian matrix as zero-weight observations. The model files, pest control file and previously-calculated jacobian matrix are in the `freyberg/` folder Freyberg, David L. "AN EXERCISE IN GROUND‐WATER MODEL CALIBRATION AND PREDICTION." Groundwater 26.3 (1988): 350-360. ``` import flopy # load the model model_ws = os.path.join("Freyberg","extra_crispy") ml = flopy.modflow.Modflow.load("freyberg.nam",model_ws=model_ws) # Because this model is old -- it predates flopy's modelgrid implementation. # And because modelgrid has been implemented without backward compatability # the modelgrid object is not constructed properly. # - We will use some sneaky pyemu to get things to how they should be import pyemu sr = pyemu.helpers.SpatialReference.from_namfile( os.path.join(model_ws, ml.namefile), delc=ml.dis.delc, delr=ml.dis.delr ) ml.modelgrid.set_coord_info( xoff=sr.xll, yoff=sr.yll, angrot=sr.rotation, proj4=sr.proj4_str, merge_coord_info=True, ) # plot some model attributes fig = plt.figure(figsize=(10,10)) ax = plt.subplot(111,aspect="equal") ml.upw.hk.plot(axes=ax,colorbar="K m/d",alpha=0.3) ml.wel.plot(axes=ax) # flopy possibly now only plots BCs in black ml.riv.plot(axes=ax) # plot obs locations obs = pd.read_csv(os.path.join("Freyberg","misc","obs_rowcol.dat"),delim_whitespace=True) obs_x = [ml.modelgrid.xcellcenters[r-1,c-1] for r,c in obs.loc[:,["row","col"]].values] obs_y = [ml.modelgrid.ycellcenters[r-1,c-1] for r,c in obs.loc[:,["row","col"]].values] ax.scatter(obs_x,obs_y,marker='.',label="obs") #plot names on the pumping well locations wel_data = ml.wel.stress_period_data[0] wel_x = ml.modelgrid.xcellcenters[wel_data["i"],wel_data["j"]] wel_y = ml.modelgrid.ycellcenters[wel_data["i"],wel_data["j"]] for i,(x,y) in enumerate(zip(wel_x,wel_y)): ax.text(x,y,"{0} ".format(i+1),ha="right",va="center", font=dict(size=15), color='r') ax.set_ylabel("y") ax.set_xlabel("x") ax.add_patch(rect((0,0),0,0,label="well",ec="none",fc="r")) ax.add_patch(rect((0,0),0,0,label="river",ec="none",fc="g")) ax.legend(bbox_to_anchor=(1.5,1.0),frameon=False) plt.savefig("domain.pdf") ``` The plot shows the Freyberg (1988) model domain. The colorflood is the hydraulic conductivity ($\frac{m}{d}$). Red and green cells coorespond to well-type and river-type boundary conditions. Blue dots indicate the locations of water levels used for calibration. ## Using `pyemu` ``` import pyemu pst = pyemu.Pst(os.path.join("Freyberg","freyberg.pst")) ``` ## Drawing from the prior Now we need a prior-realized ``ParameterEnsemble``, which stores a ``pandas.DataFrame`` under the hood. ### ```draw``` The ``ParameterEnsemble`` class has several ``draw`` type methods to generate stochastic values from (multivariate) (log) gaussian, uniform and triangular distributions. Much of what we do is predicated on the gaussian distribution, so let's use that here. The gaussian draw accepts a `cov` arg which can be a `pyemu.Cov` instance. If this isn't passed, then the draw method constructs a diagonal covariance matrix from the parameter bounds (assuming a certain number of standard deviations represented by the distance between the bounds - the `sigma_range` argument) ``` pe = pyemu.ParameterEnsemble.from_gaussian_draw(pst=pst,num_reals=200) ``` ``draw`` also accepts a ``num_reals`` argument to specify the number of draws to make: ``` pe.head() ``` Note that these ``draw`` methods use initial parameter values in the control file (the `Pst.parameter_data.parval`1 attribute) the $\boldsymbol{\mu}$ (mean) prior parameter vector. To change that, we need to update the parameter values in the control file: ``` pst.parrep(pst.filename.replace(".pst",".par")) pst.parameter_data.parval1 pe = pyemu.ParameterEnsemble.from_gaussian_draw(pst=pst,num_reals=200) pe.head() ``` ## plotting Since ```ParameterEnsemble``` stores a ```pandas.DataFrame```, it has all the cool methods and attributes we all love. Let's compare the results of drawing from a uniform vs a gaussian distribution. The actual dataframe is stored under the private attribute `ParameterEnsemble._df`: ``` pe = pyemu.ParameterEnsemble.from_uniform_draw(pst=pst,num_reals=1000) ax = plt.subplot(111) pe._df.loc[:,"rch_1"].plot(kind="hist",bins=20,ax=ax,alpha=0.5) pe = pyemu.ParameterEnsemble.from_gaussian_draw(pst=pst,num_reals=1000) pe._df.loc[:,"rch_1"].plot(kind="hist",bins=20,ax=ax,alpha=0.5) ``` The gaussian histo go beyond the parameter bound - bad times. Luckily, `ParameterEnsemble` includes an `enforce` method to apply parameter bounds: ``` pe = pyemu.ParameterEnsemble.from_uniform_draw(pst=pst,num_reals=1000) ax = plt.subplot(111) pe._df.loc[:,"rch_1"].plot(kind="hist",bins=20,ax=ax,alpha=0.5) pe = pyemu.ParameterEnsemble.from_gaussian_draw(pst=pst,num_reals=1000) pe.enforce(how="reset") pe._df.loc[:,"rch_1"].plot(kind="hist",bins=20,ax=ax,alpha=0.5) ``` ## bayes linear monte carlo we can use the bayes linear posterior parameter covariance matrix (aka Schur compliment) to "precondition" the realizations using linear algebra so that they hopefully yield a lower phi. The trick is we just need to pass this posterior covariance matrix to the draw method. Note this covariance matrix is the second moment of the posterior (under the FOSM assumptions) and the final parameter values is the first moment (which we `parrep`'ed into the `Pst` earlier) ``` # get the list of forecast names from the pest++ argument in the pest control file jco = os.path.join("Freyberg","freyberg.jcb") sc = pyemu.Schur(jco=jco) pe_post = pyemu.ParameterEnsemble.from_gaussian_draw(pst=pst,cov=sc.posterior_parameter, num_reals=1000) pe_post.enforce() ax = plt.subplot(111) pe._df.loc[:,"rch_1"].plot(kind="hist",bins=20,ax=ax,alpha=0.5) pe_post._df.loc[:,"rch_1"].plot(kind="hist",bins=20,ax=ax,alpha=0.5) ``` Now we just need to run this preconditioned ensemble to validate the FOSM assumptions (that the realizations do yield an acceptably low phi and that the relation between parameters and forecasts is linear)
github_jupyter
``` import MySQLdb import datetime import pandas as pd import numpy as np import collections import matplotlib.pyplot as plt import matplotlib.dates as mdates import seaborn as sns sns.set_context('poster') sns.set_style('darkgrid') %matplotlib inline import matplotlib zhfont = matplotlib.font_manager.FontProperties(fname='/home/********/wqy-microhei.ttc', size=22) sql_params = {'host': "********", 'db' : "********", 'user': "********", 'passwd': "********", 'charset': 'utf8'} # Open database connection connection = MySQLdb.connect (**sql_params) ``` # Primary Stats Rates.R ``` td_str = '20170808' td = datetime.datetime.strptime(td_str, '%Y%m%d') print td sql_query = "".join(["SELECT c.*, d.S_INFO_WINDCODE, d.TRADE_DT, d.B_ANAL_YIELD_CNBD FROM( ", "SELECT ifnull(a.S_INFO_FORMERWINDCODE, a.S_INFO_WINDCODE) AS ticker, a.S_INFO_WINDCODE, a.S_INFO_NAME, ", "a.B_INFO_TERM_YEAR_, a.B_INFO_ISSUERCODE, a.B_INFO_CARRYDATE, a.B_ISSUE_FIRSTISSUE, ", "a.B_ISSUE_AMOUNTPLAN, a.IS_FAILURE, a.B_INFO_LISTDATE, ", "a.B_INFO_MATURITYDATE, a.B_INFO_COUPONRATE, b.S_IPO_OVRSUBRATIO, ", "b.B_TENDER_OBJECT, b.B_TENDRST_HIGHTEST, b.B_TENDRST_LOWEST, ", "b.B_TENDRST_FINALPRICE, ifnull(b.B_TENDRST_REFERYIELD, b.B_TENDRST_BIDRATE) AS yield, ", "b.B_TENDRST_OUGHTTENDER, b.B_TENDRST_INVESTORTENDERED, b.B_TENDRST_WINNERBIDDER, ", "b.B_TENDRST_EFFECTTENDER, b.B_TENDRST_WINNINGBIDDER ", "FROM CBONDDESCRIPTION a LEFT JOIN CBONDTENDERRESULT b ON a.S_INFO_WINDCODE = b.S_INFO_WINDCODE ", "WHERE a.B_INFO_ISSUERTYPE IN ('财政部', '政策性银行') ", "AND a.B_INFO_COUPON <> '505003000' ", "AND a.B_INFO_INTERESTTYPE = '501002000'", "AND a.B_INFO_SPECIALBONDTYPE IS NULL ", "AND a.B_INFO_LISTDATE IS NOT NULL ", "AND a.B_ISSUE_FIRSTISSUE > '20130101' ", "AND a.S_INFO_EXCHMARKET = 'NIB' ", # "AND a.S_INFO_WINDCODE LIKE '140016%' ", ") c LEFT JOIN CBONDANALYSISCNBD d ", "ON c.ticker = d.S_INFO_WINDCODE ", "AND date_add(str_to_date(c.B_ISSUE_FIRSTISSUE, '%Y%m%d'), INTERVAL 3 MONTH) > str_to_date(d.TRADE_DT, '%Y%m%d') ", "AND date_sub(str_to_date(c.B_ISSUE_FIRSTISSUE, '%Y%m%d'), INTERVAL 1 MONTH) <= str_to_date(d.TRADE_DT, '%Y%m%d') ", "AND d.B_ANAL_CREDIBILITY = '推荐' "]) data = pd.read_sql(sql_query, connection) my_columns = ["px.ticker","id.ticker","name","term","issuer","dt.carry", "dt.issue","amt.issue","failure","dt.list","dt.mature", "cpn","over.sub.ratio","td.type","td.high","td.low","td.px","td.yld","bidder.max","bidder.in","bidder.won", "bids.effect","bids.win","ref.ticker","ref.dt","ref.yld"] data.columns = my_columns print data.shape print data.dtypes data.head() def cleanRatesData(df): df.loc[:, 'reopen'] = (df.loc[:, 'px.ticker'] == df.loc[:, 'id.ticker']).astype(int) for col in ['dt.issue', 'dt.list', 'dt.mature', 'ref.dt']: df.loc[:, col] = pd.to_datetime(df.loc[:, col], format="%Y%m%d") # seperate bond info and bond price bond_info_columns = [col for col in list(df.columns) if col not in ['ref.ticker', 'ref.dt', 'ref.yld'] ] bond_info = df.loc[:, bond_info_columns].copy() bond_info = bond_info.drop_duplicates() # regroup bond price by term and issuer, rather than by ticker bond_px_columns = ['issuer', 'term', 'px.ticker', 'ref.dt', 'ref.yld'] bond_px = df.loc[:, bond_px_columns].copy() bond_px.columns = ['issuer', 'term', 'ref.ticker', 'ref.dt', 'ref.yld'] mask_null = np.logical_or(df.loc[:, 'ref.dt'].isnull(), df.loc[:, 'ref.yld'].isnull()) mask_null = np.logical_not(mask_null) bond_px = bond_px.loc[mask_null, :] # get the price history bond_px = bond_px.drop_duplicates() #bond_px = bond_px.sort_index(axis=0, by='ref.ticker') grouped = bond_px.groupby(by=['issuer', 'term', 'ref.dt'], as_index=False) # select only the most recent issue for each issuer / term / date bond_px = grouped.head(n=1) #TODO print "Sort top n by ref.ticker???" return df, bond_info, bond_px data_x, bond_info, bond_px = cleanRatesData(data) # rejoin bond info with bond price, by term and issuer data_join = pd.merge(bond_info, bond_px, how='outer', left_on=['issuer', 'term'], right_on=['issuer', 'term'])# join yld history by issuer and term data_join = data_join.loc[data_join.loc[:, 'dt.issue'] > data_join.loc[:, 'ref.dt'], :] # limit range to 30 days before issue grouped = data_join.groupby(by='id.ticker', as_index=False) data_join = grouped.head(1) data_join.loc[:, 'dt.diff'] = data_join.loc[:, 'dt.issue'] - data_join.loc[:, 'ref.dt'] data_join = data_join.sort_index(axis=0, by='id.ticker') print data_join.shape data_join.head() def get_bond_upcoming(larger, smaller): s1 = set(larger.loc[:, 'id.ticker'].drop_duplicates()) s2 = set(smaller.loc[:, 'id.ticker'].drop_duplicates()) return list(s1.difference(s2)) bond_upcoming = get_bond_upcoming(data_x, data_join) mask = bond_info.loc[:, 'id.ticker'].apply(lambda x: x in bond_upcoming) data_x = pd.concat([bond_info.loc[mask, :], data_join], axis=0) print data_x.shape def firstDayOfWeek(dt): t = dt - datetime.timedelta(dt.weekday()) return datetime.date(t.year, t.month, t.day) def firstDayOfMonth(dt): t = dt.replace(day=1) return datetime.date(t.year, t.month, t.day) def map_term_to_sector(t): if t <= 3: res = "Short" else: if t <= 7: res = "Mid" elif t == 10: res = "Ten" else: res = "Long" return res RATES_ISSUER_DICT = {"2000850": u"国债", "2002700": u"国开", "0MT64BBFB6": u"口行", "04M5F620A3": u"农发"} def process_data_x(df): df = df.copy() df.loc[:, 'dur.impact'] = df.loc[:, 'amt.issue'] * df.loc[:, 'term'] / 10. df.loc[:, 'term.remain'] = (df.loc[:, 'dt.mature'] - df.loc[:, 'dt.list']) / np.timedelta64(1,'D') /365.25 # remaining term in years df.loc[:, 'through'] = (df.loc[:, 'ref.yld'] - df.loc[:, 'td.yld']) * 100. df.loc[:, 'dt.month'] = df.loc[:, 'dt.issue'].apply(firstDayOfMonth) df.loc[:, 'dt.week'] = df.loc[:, 'dt.issue'].apply(firstDayOfWeek) df.loc[:, 'sector'] = df.loc[:, 'term'].apply(map_term_to_sector) df.loc[:, 'issuer.name'] = df.loc[:, 'issuer'].apply(lambda key: RATES_ISSUER_DICT[key]) return df data_x = process_data_x(data_x) print data_x.shape data_x.head() data_plot_columns = ["dt.issue", "dt.week", "dt.month", "term", "sector", "issuer.name", "amt.issue", "dur.impact"] data_plot = data_x.loc[:, data_plot_columns].copy() print data_plot.shape data_plot.head() data_plot_melt = pd.melt(data_plot, id_vars=["dt.issue", "dt.week", "dt.month", "term", "sector", "issuer.name"],) print data_plot_melt.shape data_plot_melt.head() #ISSUER_RANK = [u"国债", u"国开", u"口行", u"农发"] ISSUER_RANK = collections.OrderedDict([(u"国债", 1), (u"国开", 2), (u"口行", 3), (u"农发", 4)]) def get_data_new(df): df = df.copy() mask = df.loc[:, 'dt.issue'] >= td cols = ["dt.issue", "issuer.name", "term", "amt.issue", "id.ticker"] df = df.loc[mask, cols] df.loc[:, 'weekday'] = df.loc[:, 'dt.issue'].apply(lambda dt: dt.weekday() + 1) #df.loc[:, 'issuer.name'] = pd.Categorical(df.loc[:, 'issuer.name'], ISSUER_RANK) df.loc[:, 'rank'] = df.loc[:, 'issuer.name'].apply(lambda key: ISSUER_RANK.get(key)) df = df.sort(columns=['dt.issue', 'rank', 'term']) df = df.drop('rank', axis=1) return df data_new = get_data_new(data_x) print data_new.shape data_new.head() RATES_RECENT_DAYS_RANGE = 7 def get_data_recent(df): df = df.copy() mask = np.logical_and(df.loc[:, 'dt.issue'] < td, df.loc[:, 'dt.issue'] >= td-datetime.timedelta(RATES_RECENT_DAYS_RANGE)) cols = ["dt.issue", "issuer.name", "term", "amt.issue", "id.ticker"] df = df.loc[mask, cols] df.loc[:, 'weekday'] = df.loc[:, 'dt.issue'].apply(lambda dt: dt.weekday() + 1) #df.loc[:, 'issuer.name'] = pd.Categorical(df.loc[:, 'issuer.name'], ISSUER_RANK) df.loc[:, 'rank'] = df.loc[:, 'issuer.name'].apply(lambda key: ISSUER_RANK.get(key)) df = df.sort(columns=['dt.issue', 'rank', 'term']) df = df.drop('rank', axis=1) return df data_recent = get_data_recent(data_x) print data_recent.shape data_recent.head() TERM_RANK = ["Short", "Mid", "Ten", "Long"] ``` ### Total Issuance- monthly ``` grouped = data_plot.groupby(by=['dt.month', 'sector'], as_index=False) grouped.agg({'amt.issue': np.sum, 'dur.impact': np.sum}) SECTOR_COLOR_MAP.keys() SECTOR_COLOR_MAP = collections.OrderedDict([('Long', 'violet'), ('Ten', 'c'), ('Mid', 'mediumseagreen'), ('Short', 'salmon')]) def get_factors(df, yname, idxname, factorname): idx = np.sort(df.loc[:, idxname].drop_duplicates()) df.index = pd.DatetimeIndex(df.loc[:, idxname]) res_df = pd.DataFrame(index=idx, columns=SECTOR_COLOR_MAP.keys()) for factor in SECTOR_COLOR_MAP.keys(): mask = df.loc[:, factorname] == factor source = df.loc[mask, :] res_df.loc[source.index, factor] = source.loc[:, yname].values res_df = res_df.fillna(0) return res_df def plot_total_issuance(df): df = df.copy() df.index = df.loc[:, 'dt.month'] grouped = df.groupby(by=['dt.month', 'sector'], as_index=False) monthly_sum = grouped.agg({'amt.issue': np.sum, 'dur.impact': np.sum}) mask = np.logical_and(monthly_sum.loc[:, 'dt.month'] >= datetime.date(2013, 1, 1), monthly_sum.loc[:, 'dt.month'] < datetime.date(td.year, td.month, td.day) + datetime.timedelta(14)) monthly_sum = monthly_sum.loc[mask, :] df_issue_plot = get_factors(monthly_sum, 'amt.issue', 'dt.month', 'sector') df_impact_plot = get_factors(monthly_sum, 'dur.impact', 'dt.month', 'sector') fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(30, 22), dpi=300) common_styles = {'edgecolor': '', 'width': 24.0} last_values = np.zeros(df_issue_plot.shape[0], dtype=float) for sector, color in SECTOR_COLOR_MAP.iteritems(): ax1.bar(df_issue_plot.index, df_issue_plot.loc[:, sector], bottom=last_values, color=color, label=sector, **common_styles) last_values += df_issue_plot.loc[:, sector].values ax1.legend(loc='upper right') last_values = np.zeros(df_impact_plot.shape[0], dtype=float) for sector, color in SECTOR_COLOR_MAP.iteritems(): ax2.bar(df_impact_plot.index, df_impact_plot.loc[:, sector], bottom=last_values, color=color, label=sector, **common_styles) last_values += df_impact_plot.loc[:, sector].values ax2.xaxis.set_ticks(pd.date_range(start=df_impact_plot.index[0], end=df_impact_plot.index[-1], freq='3M')) ax2.xaxis.set_major_formatter(mdates.DateFormatter('%Y%m')) ax2.set_xlabel(u"月份", fontproperties=zhfont) ax1.set_ylabel(u"值", fontproperties=zhfont) plt.suptitle(u"总发行 - 每月", fontproperties=zhfont) ax1.set_title(u"发行量", fontproperties=zhfont) ax2.set_title(u"久期影响", fontproperties=zhfont) # ax2.set_xlabel("dt.month") # ax1.set_ylabel("value") # plt.suptitle("Total Issuance - monthly", fontproperties=zhfont) # ax1.set_title("amt.issue") # ax2.set_title("dur.impact") return fig fig = plot_total_issuance(data_plot) fig.savefig('RATES_1.png', bbox_inches='tight') ``` ### Total Issuance - weekly ``` def plot_total_issuance_week(df): df = df.copy() df.index = df.loc[:, 'dt.week'] grouped = df.groupby(by=['dt.week', 'sector'], as_index=False) weekly_sum = grouped.agg({'amt.issue': np.sum, 'dur.impact': np.sum}) mask = np.logical_and(weekly_sum.loc[:, 'dt.week'] >= datetime.date(td.year, td.month, td.day) - datetime.timedelta(420), weekly_sum.loc[:, 'dt.week'] < datetime.date(td.year, td.month, td.day) + datetime.timedelta(14)) weekly_sum = weekly_sum.loc[mask, :] df_issue_plot = get_factors(weekly_sum, 'amt.issue', 'dt.week', 'sector') df_impact_plot = get_factors(weekly_sum, 'dur.impact', 'dt.week', 'sector') fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(30, 22), dpi=300) common_styles = {'edgecolor': '', 'width': 6.0} last_values = np.zeros(df_issue_plot.shape[0], dtype=float) for sector, color in SECTOR_COLOR_MAP.iteritems(): ax1.bar(df_issue_plot.index, df_issue_plot.loc[:, sector], bottom=last_values, color=color, label=sector, **common_styles) last_values += df_issue_plot.loc[:, sector].values ax1.set_ylabel(u"值", fontproperties=zhfont) last_values = np.zeros(df_impact_plot.shape[0], dtype=float) for sector, color in SECTOR_COLOR_MAP.iteritems(): ax2.bar(df_impact_plot.index, df_impact_plot.loc[:, sector], bottom=last_values, color=color, label=sector, **common_styles) last_values += df_impact_plot.loc[:, sector].values ax2.set_ylabel(u"值", fontproperties=zhfont) ax2.xaxis.set_ticks(pd.date_range(start=df_impact_plot.index[0], end=df_impact_plot.index[-1], freq='3W')) ax2.xaxis.set_major_formatter(mdates.DateFormatter('%Y%m')) ax1.legend(loc='upper right') ax2.set_xlabel(u"星期", fontproperties=zhfont) plt.suptitle(u"总发行 - 每周", fontproperties=zhfont) ax1.set_title(u"发行量", fontproperties=zhfont) ax2.set_title(u"久期影响", fontproperties=zhfont) # ax2.set_xlabel("dt.week") # ax1.set_ylabel("value") # plt.suptitle("Total Issuance - weekly", fontproperties=zhfont) # ax1.set_title("amt.issue") # ax2.set_title("dur.impact") return fig fig = plot_total_issuance_week(data_plot) fig.savefig('RATES_2.png', bbox_inches='tight') ``` ### Total Issuance Amt - weekly by Issuer ``` data_plot.head() def plot_total_issue_amt_week_issuer(df): df = df.copy() df.index = df.loc[:, 'dt.week'] grouped = df.groupby(by=['dt.week', 'sector', 'issuer.name'], as_index=False) weekly_sum = grouped.agg({'amt.issue': np.sum, 'dur.impact': np.sum}) mask = np.logical_and(weekly_sum.loc[:, 'dt.week'] >= datetime.date(td.year, td.month, td.day) - datetime.timedelta(420), weekly_sum.loc[:, 'dt.week'] < datetime.date(td.year, td.month, td.day) + datetime.timedelta(14)) weekly_sum = weekly_sum.loc[mask, :] df_issuers = {} for issuer in ISSUER_RANK.keys(): df_issuer = weekly_sum.loc[weekly_sum.loc[:, 'issuer.name'] == issuer, :] df_issuers[issuer] = get_factors(df_issuer, 'amt.issue', 'dt.week', 'sector') fig, axes = plt.subplots(len(ISSUER_RANK.keys()), 1, sharex=True, figsize=(30, 22), dpi=300) common_styles = {'edgecolor': '', 'width': 6.0} i = 3 for issuer, df_plot in df_issuers.iteritems(): last_values = np.zeros(df_plot.shape[0], dtype=float) for sector, color in SECTOR_COLOR_MAP.iteritems(): axes[i].bar(df_plot.index, df_plot.loc[:, sector], bottom=last_values, color=color, label=sector, **common_styles) last_values += df_plot.loc[:, sector].values axes[i].set_ylabel(u"值", fontproperties=zhfont) axes[i].set_title(issuer, fontproperties=zhfont) i -= 1 axes[-1].xaxis.set_ticks(pd.date_range(start=df_plot.index[0], end=df_plot.index[-1], freq='3W')) axes[-1].xaxis.set_major_formatter(mdates.DateFormatter('%m%d')) axes[0].legend(loc='upper right') #axes[-1].set_xlabel("dt.week") axes[-1].set_xlabel(u"星期", fontproperties=zhfont) plt.suptitle(u"总发行 - 每周", fontproperties=zhfont) #plt.suptitle(u"Total Issuance - weekly", fontproperties=zhfont) return fig fig = plot_total_issue_amt_week_issuer(data_plot) fig.savefig('RATES_3.png', bbox_inches='tight') ``` ### Total Issuance Duration - weekly by Issuer ``` def plot_total_dur_impact_week_issuer(df): df = df.copy() df.index = df.loc[:, 'dt.week'] grouped = df.groupby(by=['dt.week', 'sector', 'issuer.name'], as_index=False) weekly_sum = grouped.agg({'amt.issue': np.sum, 'dur.impact': np.sum}) mask = np.logical_and(weekly_sum.loc[:, 'dt.week'] >= datetime.date(td.year, td.month, td.day) - datetime.timedelta(420), weekly_sum.loc[:, 'dt.week'] < datetime.date(td.year, td.month, td.day) + datetime.timedelta(14)) weekly_sum = weekly_sum.loc[mask, :] df_issuers = {} for issuer in ISSUER_RANK.keys(): df_issuer = weekly_sum.loc[weekly_sum.loc[:, 'issuer.name'] == issuer, :] df_issuers[issuer] = get_factors(df_issuer, 'dur.impact', 'dt.week', 'sector') fig, axes = plt.subplots(len(ISSUER_RANK.keys()), 1, sharex=True, figsize=(30, 22), dpi=300) common_styles = {'edgecolor': '', 'width': 6.0} i = 3 for issuer, df_plot in df_issuers.iteritems(): last_values = np.zeros(df_plot.shape[0], dtype=float) for sector, color in SECTOR_COLOR_MAP.iteritems(): axes[i].bar(df_plot.index, df_plot.loc[:, sector], bottom=last_values, color=color, label=sector, **common_styles) last_values += df_plot.loc[:, sector].values axes[i].set_ylabel(u"值", fontproperties=zhfont) axes[i].set_title(issuer, fontproperties=zhfont) i -= 1 axes[-1].xaxis.set_ticks(pd.date_range(start=df_plot.index[0], end=df_plot.index[-1], freq='3W')) axes[-1].xaxis.set_major_formatter(mdates.DateFormatter('%m%d')) axes[0].legend(loc='upper right') #axes[-1].set_xlabel("dt.week") axes[-1].set_xlabel(u"星期", fontproperties=zhfont) #plt.suptitle("Total Issuance - weekly", fontproperties=zhfont) plt.suptitle(u"总发行 - 每周", fontproperties=zhfont) return fig fig = plot_total_dur_impact_week_issuer(data_plot) fig.savefig('RATES_4.png', bbox_inches='tight') ``` ### Bid-Cover ratio by Sector ``` grouped = data_x.groupby(by='sector') grouped.get_group('Long') def bp_set_box(elem, box_color='red'): elem.set_color('white') elem.set_edgecolor(box_color) elem.set_linewidth(1) def bp_set_median(elem, box_color='red'): elem.set_color(box_color) elem.set_linewidth(4) def bp_set_whisker(elem, box_color='red'): elem.set_linestyle('-') elem.set_color(box_color) elem.set_linewidth(1) def bp_set_cap(elem): global box_color elem.set_alpha(0.0) def plot_bc_ratio_sector(df, yname, title="", ynamecn=u""): df = df.copy() df.index = df.loc[:, 'dt.week'] mask = df.loc[:, 'dt.week'] >= (datetime.date(td.year, td.month, td.day) - datetime.timedelta(240)) df = df.loc[mask, :] ylim = df.loc[:, yname].min() * 0.98, df.loc[:, yname].max() * 1.02 grouped = df.groupby(by='sector', as_index=False) sector_colors = ['salmon', 'mediumseagreen', 'dodgerblue'] sector_selected = ['Short', 'Mid', 'Ten'] fig, axes = plt.subplots(len(sector_selected), 1, sharex=True, figsize=(30, 22), dpi=300) common_styles = {'sym': '+', 'vert': True, 'patch_artist': True} for i, sector in enumerate(sector_selected): df_plot = grouped.get_group(sector) df_plot = df_plot.loc[:, ['dt.week', yname]] grouped2 = df_plot.groupby(by='dt.week', as_index=False) box_data = [] for week, idx in grouped2.groups.iteritems(): box_data.append(df_plot.loc[idx, yname].values) bp_res = axes[i].boxplot(box_data, **common_styles) box_color = sector_colors[i] map(lambda x: bp_set_box(x, box_color), bp_res['boxes']) map(lambda x: bp_set_median(x, box_color), bp_res['medians']) map(lambda x: bp_set_whisker(x, box_color), bp_res['whiskers']) map(bp_set_cap, bp_res['caps']) plt.setp(bp_res['fliers'], color='red', marker='+') axes[i].set_ylim(ylim) axes[i].set_title(u"期限: " + sector, fontproperties=zhfont)#TODOzhch期限 axes[i].set_ylabel(ynamecn, fontproperties=zhfont)#TODOzhcn投标倍数 weeks = map(lambda dt: dt.strftime("%Y%m%d"), grouped2.groups.keys()) axes[-1].set_xticks(range(1, 34), ['lll']*33) axes[-1].set_xlabel(u"星期", fontproperties=zhfont) plt.suptitle(title, fontproperties=zhfont)#TODOzhcn return fig fig = plot_bc_ratio_sector(data_x, 'over.sub.ratio', u"投标倍数 - 按期限分类", u"投标倍数")#"Bid Cover Ratio by Sector") fig.savefig('RATES_5.png', bbox_inches='tight') ``` ### Bid-Cover ratio by Issuer ``` def plot_bc_ratio_issuer(df, yname, title="", ynamecn=u""): df = df.copy() df.index = df.loc[:, 'dt.week'] mask = df.loc[:, 'dt.week'] >= (datetime.date(td.year, td.month, td.day) - datetime.timedelta(240)) df = df.loc[mask, :] ylim = df.loc[:, yname].min() * 0.98, df.loc[:, yname].max() * 1.02 grouped = df.groupby(by='issuer.name', as_index=False) sector_colors = ['salmon', 'mediumseagreen', 'dodgerblue', 'violet'] sector_selected = ISSUER_RANK.keys() fig, axes = plt.subplots(len(sector_selected), 1, sharex=True, figsize=(30, 22), dpi=300) common_styles = {'sym': '+', 'vert': True, 'patch_artist': True} for i, sector in enumerate(ISSUER_RANK.keys()): #print sector df_plot = grouped.get_group(sector) df_plot = df_plot.loc[:, ['dt.week', yname]] grouped2 = df_plot.groupby(by='dt.week', as_index=False) box_data = [] for week, idx in grouped2.groups.iteritems(): box_data.append(df_plot.loc[idx, yname].values) bp_res = axes[i].boxplot(box_data, **common_styles) box_color = sector_colors[i] map(lambda x: bp_set_box(x, box_color), bp_res['boxes']) map(lambda x: bp_set_median(x, box_color), bp_res['medians']) map(lambda x: bp_set_whisker(x, box_color), bp_res['whiskers']) map(bp_set_cap, bp_res['caps']) plt.setp(bp_res['fliers'], color='red', marker='+') axes[i].set_ylim(ylim ) axes[i].set_title(u"发行主体: " + sector, fontproperties=zhfont) axes[i].set_ylabel(ynamecn, fontproperties=zhfont)#TODOzhcn weeks = map(lambda dt: dt.strftime("%Y%m%d"), grouped2.groups.keys()) axes[-1].set_xticks(range(1, 34), ['lll']*33) axes[-1].set_xlabel(u"星期", fontproperties=zhfont) plt.suptitle(title, fontproperties=zhfont)#TODOzhcn return fig fig = plot_bc_ratio_issuer(data_x, 'over.sub.ratio', u"投标倍数 - 按发行主体分类", u"投标倍数")#"Bid Cover Ratio by Issuer") fig.savefig('RATES_6.png', bbox_inches='tight') ``` ### Through-tail by Sector ``` fig = plot_bc_ratio_sector(data_x, 'through', u"招标利率差值 - 按期限分类", u"差值")#TODOzhcn招标利率差值(实际- 市场) fig.savefig('RATES_7.png', bbox_inches='tight') ``` ### Through-tail by Issuer ``` fig = plot_bc_ratio_issuer(data_x, 'through', u"招标利率差值 - 按发行主体分类", u"差值")#"Through/Tail by Issuer")#TODOzhcn fig.savefig('RATES_8.png', bbox_inches='tight') from reportlab.pdfgen import canvas import reportlab.lib.pagesizes as rlps mypagesize = rlps.A4[1], rlps.A4[0] def calc_center(pagesize, fig): width, height = pagesize fw, fh = fig.get_size_inches() * 72. scale_ratio = width / fw * 0.95 paint_w, paint_h = fw * scale_ratio, fh * scale_ratio x = (width - paint_w) / 2. return x, paint_w, paint_h def draw_one_img_one_page(canvas, img_name, pagesize): fig_x, fig_width, fig_height = calc_center(pagesize, fig) canvas.drawImage(img_name, fig_x, 20, width=fig_width, height=fig_height) canvas.showPage() def generate_PDF(fname='ex'): c = canvas.Canvas(fname+'.pdf', pagesize=mypagesize) from reportlab.pdfbase import pdfmetrics from reportlab.pdfbase.ttfonts import TTFont pdfmetrics.registerFont(TTFont('wqy', '/home/********/wqy-microhei.ttc')) c.setFont("wqy", 50) c.drawCentredString(mypagesize[0] / 2, mypagesize[1] * 4. / 5., u"利率债一级市场-大数据汇总") fig_width, fig_height = 756./3, 672./3 c.drawImage("logo.png", (mypagesize[0] - fig_width) / 2., mypagesize[1] / 5, width=fig_width, height=fig_height) c.showPage() for i in range(1, 9): draw_one_img_one_page(c, "RATES_{}.png".format(i), mypagesize) c.save() return generate_PDF("RATES_Stats_"+td_str) ```
github_jupyter
``` import numpy as np import os import tensorflow as tf from tensorflow import keras import pandas as pd from sklearn import preprocessing import matplotlib.pyplot as plt ds = pd.read_csv("dataset/players.csv") ds = ds.reset_index(drop=True) le = preprocessing.LabelEncoder() le.fit(ds['player']) ds['player_trans'] = le.transform(ds['player']) n = int(len(ds)) players = ds.player.nunique() print("Number Of Players : ",players) print("Number Of Images : ",n) print("\n\nDistribution Per Player") ds['player'].value_counts().plot.bar() ds = ds[:-6] test = ds[-6:] ``` # Now let's make the distribution equal ``` for index,row in ds.iterrows(): if len(ds[ds['player']==row['player']])>20: ds.drop(ds[ds['image']==row['image']].index , inplace=True) print("Distribution Per Player") ds['player'].value_counts().plot.bar() def data_augment(image, label): p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32) p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32) p_pixel_1 = tf.random.uniform([], 0, 1.0, dtype=tf.float32) p_pixel_2 = tf.random.uniform([], 0, 1.0, dtype=tf.float32) p_pixel_3 = tf.random.uniform([], 0, 1.0, dtype=tf.float32) p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32) image = tf.image.random_flip_left_right(image) image = tf.image.random_flip_up_down(image) if p_spatial > .75:image = tf.image.transpose(image) if p_rotate > .75:image = tf.image.rot90(image, k=3) elif p_rotate > .5:image = tf.image.rot90(image, k=2) elif p_rotate > .25:image = tf.image.rot90(image, k=1) if p_pixel_1 >= .4:image = tf.image.random_saturation(image, lower=.7, upper=1.3) if p_pixel_2 >= .4:image = tf.image.random_contrast(image, lower=.8, upper=1.2) if p_pixel_3 >= .4:image = tf.image.random_brightness(image, max_delta=.1) if p_crop > .7: if p_crop > .9:image = tf.image.central_crop(image, central_fraction=.7) elif p_crop > .8:image = tf.image.central_crop(image, central_fraction=.8) else:image = tf.image.central_crop(image, central_fraction=.9) elif p_crop > .4: crop_size = tf.random.uniform([], int(224*.8),224, dtype=tf.int32) image = tf.image.random_crop(image, size=[crop_size, crop_size, 3]) image = tf.image.resize(image, [224,224]) return image,label def load_img(image,player,player_transf): path = "dataset/images/"+player+"/"+image img = tf.io.decode_jpeg(tf.io.read_file(path),channels=3) img = tf.cast(img, tf.float32) img = tf.image.resize(img, [224,224]) img = keras.applications.mobilenet_v2.preprocess_input(img) return img,player_transf dataset = tf.data.Dataset.from_tensor_slices((ds.image.values,ds.player.values,ds.player_trans.values)) train_ds = dataset.take(int(0.8*n)) val_ds = dataset.skip(int(0.8*n)) AUTOTUNE = tf.data.AUTOTUNE train_ds = train_ds.map(load_img,num_parallel_calls=AUTOTUNE) train_ds = train_ds.repeat(40).map(data_augment,num_parallel_calls=AUTOTUNE) train_ds = train_ds.batch(32).prefetch(buffer_size=AUTOTUNE) val_ds = val_ds.map(load_img,num_parallel_calls=AUTOTUNE).batch(32).prefetch(buffer_size=AUTOTUNE) ``` # Define Model ``` base_model = keras.applications.MobileNetV2(weights="imagenet",include_top=False) avg = keras.layers.GlobalAveragePooling2D()(base_model.output) output = keras.layers.Dense(players, activation="softmax")(avg) model = keras.Model(inputs=base_model.input, outputs=output) for layer in base_model.layers: layer.trainable = False checkpoint_path = "./checkpoints/cp.ckpt" checkpoint_dir = os.path.dirname(checkpoint_path) cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,save_weights_only=True,verbose=1) ``` # Training ``` model.compile(loss='sparse_categorical_crossentropy', optimizer='adam',metrics=["accuracy"]) history = model.fit(train_ds, epochs=20, validation_data=val_ds,verbose=1,callbacks=[cp_callback]) pd.DataFrame(history.history)[['accuracy']].plot() plt.title("Accuracy") plt.show() ``` # Test Cases ``` i=0 def load_test_img(image,player,player_transf): path = "dataset/images/"+player+"/"+image img = tf.io.decode_jpeg(tf.io.read_file(path),channels=3) img = tf.cast(img, tf.float32) img = tf.image.resize(img, [224,224]) img = keras.applications.mobilenet_v2.preprocess_input(img) return img test_ds = tf.data.Dataset.from_tensor_slices((test.image.values,test.player.values,test.player_trans.values)) test_ds = test_ds.map(load_test_img).batch(6) prediction = model.predict(test_ds) for index,row in test.iterrows(): img = tf.io.decode_jpeg(tf.io.read_file("dataset/images/"+row['player']+"/"+row['image']),channels=3) imgplot = plt.imshow(img.numpy().astype("uint8"),aspect='auto') real = str(list(le.classes_)[row['player_trans']]) top_k_values, top_k_indices = tf.nn.top_k(prediction[i], k=3) top_k_names = [] for k in range(3): top_k_names+=[str(list(le.classes_)[top_k_indices[k]])] plt.title("Real Value: "+str(real)+"\nTop 3 Predicted Values: "+str(top_k_names)) plt.show() i+=1 ```
github_jupyter
``` import json import numpy as np import collections import copy from os import listdir from os.path import isfile, join import findspark findspark.init() from pyspark import SparkContext import pyspark conf = pyspark.SparkConf().setAll([('spark.executor.memory', '8g'), ('spark.executor.cores', '2'),('spark.executor.instances','7'), ('spark.driver.memory','32g'), ('spark.driver.maxResultSize','10g')]) sc = SparkContext(conf=conf) from pyspark.sql import functions as F from pyspark.sql.types import ArrayType, FloatType, StringType from pyspark.sql.types import Row from pyspark.sql import SparkSession spark = SparkSession(sc) def convert_ndarray_back(x): x['entityCell'] = np.array(x['entityCell']) return x data_dir = "../../data/" train_tables = sc.textFile(data_dir+"train_tables.jsonl").map(lambda x:convert_ndarray_back(json.loads(x.strip()))) def get_core_entity_caption_label(x): core_entities = set() for i,j in zip(*x['entityCell'].nonzero()): if j==0 and j in x['entityColumn']: core_entities.add(x['tableData'][i][j]['surfaceLinks'][0]['target']['id']) return list(core_entities), x["_id"], x['tableCaption'], x["processed_tableHeaders"][0] from operator import add table_rdd = train_tables.map(get_core_entity_caption_label) entity_rdd = table_rdd.flatMap(lambda x:[(z,x[1],x[2],x[3]) for z in x[0]]) from pyspark.ml.feature import Tokenizer, StopWordsRemover table_df = spark.createDataFrame(table_rdd,["entities","table_id","caption","header"]) caption_tokenizer = Tokenizer(inputCol="caption", outputCol="caption_term") header_tokenizer = Tokenizer(inputCol="header", outputCol="header_term") list_stopwords = StopWordsRemover.loadDefaultStopWords("english") caption_remover = StopWordsRemover(inputCol="caption_term", outputCol="caption_term_cleaned") header_remover = StopWordsRemover(inputCol="header_term", outputCol="header_term_cleaned") list_stopwords table_df_tokenizered = header_remover.transform(\ header_tokenizer.transform(\ caption_remover.transform(\ caption_tokenizer.transform(table_df)))).select("entities","table_id","caption_term_cleaned","header_term_cleaned","header") table_df_tokenizered.show() caption_term_freq = table_df_tokenizered.select("caption_term_cleaned").rdd \ .flatMap(lambda x:[(z,1) for z in x["caption_term_cleaned"]])\ .reduceByKey(add).collect() header_term_freq = table_df_tokenizered.select("header_term_cleaned").rdd \ .flatMap(lambda x:[(z,1) for z in x["header_term_cleaned"]])\ .reduceByKey(add).collect() header_freq = table_df_tokenizered.select("header").rdd \ .map(lambda x:(x["header"],1))\ .reduceByKey(add).collect() len(header_freq) entity_df = table_df_tokenizered.select(F.explode("entities").alias("entity"), "table_id","caption_term_cleaned","header_term_cleaned","header") entity_caption_term_freq = entity_df.select("entity", "caption_term_cleaned").rdd \ .flatMap(lambda x:[((x["entity"],z),1) for z in x["caption_term_cleaned"]])\ .reduceByKey(add)\ .map(lambda x:(x[0][0], [(x[0][1],x[1])]))\ .reduceByKey(add).collect() entity_header_term_freq = entity_df.select("entity", "header_term_cleaned").rdd \ .flatMap(lambda x:[((x["entity"],z),1) for z in x["header_term_cleaned"]])\ .reduceByKey(add)\ .map(lambda x:(x[0][0], [(x[0][1],x[1])]))\ .reduceByKey(add).collect() entity_header_freq = entity_df.select("entity", "header").rdd \ .map(lambda x:((x["entity"],x["header"]),1))\ .reduceByKey(add)\ .map(lambda x:(x[0][0], [(x[0][1],x[1])]))\ .reduceByKey(add).collect() entity_tables = entity_df.select("entity","table_id")\ .groupBy("entity").agg(F.collect_list("table_id").alias("tables"))\ .rdd.map(lambda x:(x['entity'],x['tables'])).collect() import pickle with open("../../data/entity_tables.pkl","wb") as f: pickle.dump(entity_tables, f) for e in entity_header_freq: entity_header_freq[e] = [sum([count for _,count in entity_header_freq[e]]),dict(entity_header_freq[e])] with open("../../data/entity_header_freq.pkl","wb") as f: pickle.dump(entity_header_freq, f) entity_header_term_freq = dict(entity_header_term_freq) for e in entity_header_term_freq: entity_header_term_freq[e] = [sum([count for _,count in entity_header_term_freq[e]]),dict(entity_header_term_freq[e])] with open("../../data/entity_header_term_freq.pkl","wb") as f: pickle.dump(entity_header_term_freq, f) entity_caption_term_freq = dict(entity_caption_term_freq) for e in entity_caption_term_freq: entity_caption_term_freq[e] = [sum([count for _,count in entity_caption_term_freq[e]]),dict(entity_caption_term_freq[e])] with open("../../data/entity_caption_term_freq.pkl","wb") as f: pickle.dump(entity_caption_term_freq, f) caption_term_freq = dict(caption_term_freq) with open("../../data/caption_term_freq.pkl","wb") as f: pickle.dump([sum([count for _,count in caption_term_freq.items()]),caption_term_freq], f) header_term_freq = dict(header_term_freq) with open("../../data/header_term_freq.pkl","wb") as f: pickle.dump([sum([count for _,count in header_term_freq.items()]),header_term_freq], f) header_freq = dict(header_freq) with open("../../data/header_freq.pkl","wb") as f: pickle.dump([sum([count for _,count in header_freq.items()]),header_freq], f) for e in entity_tables: if len(entity_tables[e]) != sum([count for _,count in entity_header_freq[e]]): print(e, len(entity_tables[e]), sum([count for _,count in entity_header_freq[e]])) break caption_term_freq[0] entity_header_freq[1677] entity_rdd.filter(lambda x:x[0]==5839439).take(10) from metric import * with open("../../data/dev_result.pkl","rb") as f: dev_result = pickle.load(f) def load_entity_vocab(data_dir, ignore_bad_title=True, min_ent_count=1): entity_vocab = {} bad_title = 0 few_entity = 0 with open(os.path.join(data_dir, 'entity_vocab.txt'), 'r', encoding="utf-8") as f: for line in f: _, entity_id, entity_title, entity_mid, count = line.strip().split('\t') if ignore_bad_title and entity_title == '': bad_title += 1 elif int(count) < min_ent_count: few_entity += 1 else: entity_vocab[len(entity_vocab)] = { 'wiki_id': int(entity_id), 'wiki_title': entity_title, 'mid': entity_mid, 'count': int(count) } print('total number of entity: %d\nremove because of empty title: %d\nremove because count<%d: %d'%(len(entity_vocab),bad_title,min_ent_count,few_entity)) return entity_vocab entity_vocab = load_entity_vocab("../../data", True, 2) train_all_entities = set([x['wiki_id'] for _,x in entity_vocab.items()]) dev_final = {} for id,result in dev_result.items(): _, target_entities, pneural, pall, pee, pce, ple, cand_e, cand_c = result target_entities = set(target_entities) cand_e = set([e for e in cand_e if e in train_all_entities]) cand_c = set([e for e in cand_c if e in train_all_entities]) cand_all = set([e for e in cand_c|cand_e if e in train_all_entities]) recall_e = len(cand_e&target_entities)/len(target_entities) recall_c = len(cand_c&target_entities)/len(target_entities) recall_all = len(cand_all&target_entities)/len(target_entities) ranked_neural = sorted(pneural.items(),key=lambda z:z[1]+30*pee[z[0]],reverse=True) ranked_neural = [1 if z[0] in target_entities else 0 for z in ranked_neural if z[0] in train_all_entities] ap_neural = average_precision(ranked_neural) ranked_all = sorted(pall.items(),key=lambda z:100*pee[z[0]]+1*pce[z[0]]+0.5*ple[z[0]],reverse=True) ranked_all = [1 if z[0] in target_entities else 0 for z in ranked_all if z[0] in train_all_entities] ap_all = average_precision(ranked_all) # ranked_e = sorted(pee.items(),key=lambda z:z[1],reverse=True) # ranked_e = [1 if z[0] in target_entities else 0 for z in ranked_e if z[0] in train_all_entities] # assert len(ranked_e) == len(ranked_neural) # ap_e = average_precision(ranked_e) # ranked_c = sorted(pce.items(),key=lambda z:z[1],reverse=True) # ap_c = average_precision([1 if z[0] in target_entities else 0 for z in ranked_c if z[0] in train_all_entities]) # ranked_l = sorted(ple.items(),key=lambda z:z[1],reverse=True) # ap_l = average_precision([1 if z[0] in target_entities else 0 for z in ranked_l if z[0] in train_all_entities]) dev_final[id] = [recall_all,recall_e,recall_c,ap_neural,ap_all,ap_e,ap_c,ap_l] for i in range(8): print(np.mean([z[i] for _,z in dev_final.items()])) dev_result['13591903-1'][2] len([1 for z in dev_final if z[4]>=z[5]]) [(i,z[3],z[4],z[5]) for i, z in dev_final.items() if z[4]>=z[5]] ```
github_jupyter
# Parse Job Directories --- Meant to be run within one of the computer clusters on which jobs are run (Nersc, Sherlock, Slac). Will `os.walk` through `jobs_root_dir` and cobble together all the job directories and then upload the data to Dropbox. This script is meant primarily to get simple job information, for more detailed info run the `parse_job_data.ipynb` notebook. # Import Modules ``` import os print(os.getcwd()) import sys import time; ti = time.time() import shutil import pickle from pathlib import Path import numpy as np import pandas as pd pd.options.mode.chained_assignment = None # default='warn' # ######################################################### from misc_modules.pandas_methods import reorder_df_columns # ######################################################### from local_methods import ( is_attempt_dir, is_rev_dir, get_job_paths_info, ) from methods import isnotebook isnotebook_i = isnotebook() if isnotebook_i: from tqdm.notebook import tqdm verbose = True else: from tqdm import tqdm verbose = False compenv = os.environ.get("COMPENV", "wsl") if compenv == "wsl": jobs_root_dir = os.path.join( os.environ["PROJ_irox_oer_gdrive"], "dft_workflow") elif compenv == "nersc" or compenv == "sherlock" or compenv == "slac": jobs_root_dir = os.path.join( os.environ["PROJ_irox_oer"], "dft_workflow") ``` # Gathering prelim info, get all base job dirs ``` def get_path_rel_to_proj(full_path): """ """ #| - get_path_rel_to_proj subdir = full_path PROJ_dir = os.environ["PROJ_irox_oer"] search_term = PROJ_dir.split("/")[-1] ind_tmp = subdir.find(search_term) if ind_tmp == -1: search_term = "PROJ_irox_oer" ind_tmp = subdir.find(search_term) path_rel_to_proj = subdir[ind_tmp:] path_rel_to_proj = "/".join(path_rel_to_proj.split("/")[1:]) return(path_rel_to_proj) #__| ``` ``` if verbose: print( "Scanning for job dirs from the following dir:", "\n", jobs_root_dir, sep="") ``` ### Initial scan of root dir ``` data_dict_list = [] for subdir, dirs, files in os.walk(jobs_root_dir): data_dict_i = dict() data_dict_i["path_full"] = subdir last_dir = jobs_root_dir.split("/")[-1] path_i = os.path.join(last_dir, subdir[len(jobs_root_dir) + 1:]) # path_i = subdir[len(jobs_root_dir) + 1:] if "dft_jobs" not in subdir: continue if ".old" in subdir: continue if path_i == "": continue # # TEMP # # print("TEMP") # # frag_i = "slac/mwmg9p7s6o/11-20" # # frag_i = "slac/mwmg9p7s6o/11-20/bare/active_site__26/01_attempt" # frag_i = "run_dos_bader" # if frag_i not in subdir: # # break # continue # print(1 * "Got through | ") # print(subdir) # if verbose: # print(path_i) path_rel_to_proj = get_path_rel_to_proj(subdir) out_dict = get_job_paths_info(path_i) # Only add job directory if it's been submitted my_file = Path(os.path.join(subdir, ".SUBMITTED")) submitted = False if my_file.is_file(): submitted = True # ##################################################### data_dict_i.update(out_dict) data_dict_i["path_rel_to_proj"] = path_rel_to_proj data_dict_i["submitted"] = submitted # ##################################################### data_dict_list.append(data_dict_i) # ##################################################### if len(data_dict_list) == 0: df_cols = [ "path_full", "path_rel_to_proj", "path_job_root", "path_job_root_w_att_rev", "att_num", "rev_num", "is_rev_dir", "is_attempt_dir", "path_job_root_w_att", "gdrive_path", "submitted", ] df = pd.DataFrame(columns=df_cols) else: df = pd.DataFrame(data_dict_list) df = df[~df.path_job_root_w_att_rev.isna()] df = df.drop_duplicates(subset=["path_job_root_w_att_rev", ], keep="first") df = df.reset_index(drop=True) assert df.index.is_unique, "Index must be unique here" ``` ### Get facet and bulk from path ``` def get_facet_bulk_id(row_i): """ """ new_column_values_dict = { "bulk_id": None, "facet": None, } # ##################################################### path_job_root = row_i.path_job_root # ##################################################### # print(path_job_root) # ##################################################### # ##################################################### # Check if the job is a *O calc (different than other adsorbates) if "run_o_covered" in path_job_root or "run_o_covered" in jobs_root_dir: path_split = path_job_root.split("/") ads_i = "o" if "active_site__" in path_job_root: facet_i = path_split[-2] bulk_id_i = path_split[-3] active_site_parsed = False for i in path_split: if "active_site__" in i: active_site_path_seg = i.split("_") active_site_i = active_site_path_seg[-1] active_site_i = int(active_site_i) active_site_parsed = True if not active_site_parsed: print("PROBLEM | Couldn't parse active site for following dir:") print(path_job_root) else: # path_split = path_job_root.split("/") facet_i = path_split[-1] bulk_id_i = path_split[-2] # active_site_i = None active_site_i = np.nan # ##################################################### # ##################################################### elif "run_bare_oh_covered" in path_job_root or "run_bare_oh_covered" in jobs_root_dir: path_split = path_job_root.split("/") if "/bare/" in path_job_root: ads_i = "bare" elif "/oh/" in path_job_root: ads_i = "oh" else: print("Couldn't parse the adsorbate from here") ads_i = None active_site_parsed = False for i in path_split: if "active_site__" in i: active_site_path_seg = i.split("_") active_site_i = active_site_path_seg[-1] active_site_i = int(active_site_i) active_site_parsed = True if not active_site_parsed: print("PROBLEM | Couldn't parse active site for following dir:") print(path_job_root) facet_i = path_split[-3] bulk_id_i = path_split[-4] # ads_i = "bare" # Check that the parsed facet makes sense char_list_new = [] for char_i in facet_i: if char_i != "-": char_list_new.append(char_i) facet_new_i = "".join(char_list_new) # all_facet_chars_are_numeric = all([i.isnumeric() for i in facet_i]) # all_facet_chars_are_numeric = all([i.isnumeric() for i in facet_i]) all_facet_chars_are_numeric = all([i.isnumeric() for i in facet_new_i]) mess_i = "All characters of parsed facet must be numeric" assert all_facet_chars_are_numeric, mess_i # ##################################################### # ##################################################### elif "run_oh_covered" in path_job_root or "run_oh_covered" in jobs_root_dir: path_split = path_job_root.split("/") if "/bare/" in path_job_root: ads_i = "bare" elif "/oh/" in path_job_root: ads_i = "oh" else: print("Couldn't parse the adsorbate from here") ads_i = None active_site_parsed = False for i in path_split: if "active_site__" in i: active_site_path_seg = i.split("_") active_site_i = active_site_path_seg[-1] active_site_i = int(active_site_i) active_site_parsed = True if not active_site_parsed: print("PROBLEM | Couldn't parse active site for following dir:") print(path_job_root) facet_i = path_split[-3] bulk_id_i = path_split[-4] # Check that the parsed facet makes sense char_list_new = [] for char_i in facet_i: if char_i != "-": char_list_new.append(char_i) facet_new_i = "".join(char_list_new) # all_facet_chars_are_numeric = all([i.isnumeric() for i in facet_i]) all_facet_chars_are_numeric = all([i.isnumeric() for i in facet_new_i]) mess_i = "All characters of parsed facet must be numeric" assert all_facet_chars_are_numeric, mess_i # ##################################################### # ##################################################### else: print("Couldn't figure out what to do here") print(path_job_root) facet_i = None bulk_id_i = None ads_i = None pass # ##################################################### new_column_values_dict["facet"] = facet_i new_column_values_dict["bulk_id"] = bulk_id_i new_column_values_dict["ads"] = ads_i new_column_values_dict["active_site"] = active_site_i # ##################################################### for key, value in new_column_values_dict.items(): row_i[key] = value return(row_i) df = df.apply( get_facet_bulk_id, axis=1) df.att_num = df.att_num.astype(int) df.rev_num = df.rev_num.astype(int) # df["compenv"] = compenv ``` ### Get job type ``` def get_job_type(row_i): """ """ new_column_values_dict = { "job_type": None, } # ##################################################### path_job_root = row_i.path_job_root # ##################################################### # print(path_job_root) if "run_dos_bader" in path_job_root: job_type_i = "dos_bader" elif "dft_workflow/run_slabs" in path_job_root: job_type_i = "oer_adsorbate" # ##################################################### new_column_values_dict["job_type"] = job_type_i # ##################################################### for key, value in new_column_values_dict.items(): row_i[key] = value # ##################################################### return(row_i) # ##################################################### df = df.apply( get_job_type, axis=1) ``` # Reorder columns ``` new_col_order = [ "job_type", "compenv", "bulk_id", "facet", "ads", "submitted", "att_num", "rev_num", "num_revs", "is_rev_dir", "is_attempt_dir", "path_job_root", "path_job_root_w_att_rev", ] df = reorder_df_columns(new_col_order, df) ``` # Saving data and uploading to Dropbox ``` # Pickling data ########################################### directory = os.path.join( os.environ["PROJ_irox_oer"], "dft_workflow/job_processing", "out_data") if not os.path.exists(directory): os.makedirs(directory) file_name_i = "df_jobs_base_" + compenv + ".pickle" file_path_i = os.path.join(directory, file_name_i) with open(file_path_i, "wb") as fle: pickle.dump(df, fle) # ######################################################### db_path = os.path.join( "01_norskov/00_git_repos/PROJ_IrOx_OER", "dft_workflow/job_processing/out_data" , file_name_i) rclone_remote = os.environ.get("rclone_dropbox", "raul_dropbox") bash_comm = "rclone copyto " + file_path_i + " " + rclone_remote + ":" + db_path if verbose: print("bash_comm:", bash_comm) if compenv != "wsl": os.system(bash_comm) # ######################################################### print(20 * "# # ") print("All done!") print("Run time:", np.round((time.time() - ti) / 60, 3), "min") print("parse_job_dirs.ipynb") print(20 * "# # ") # ######################################################### ``` ``` # DEPRECATED | Moved to fix_gdrive_conflicts.ipynb ### Removing paths that have the GDrive duplicate syntax in them ' (1)' # for ind_i, row_i in df.iterrows(): # path_full_i = row_i.path_full # if " (" in path_full_i: # print( # path_full_i, # sep="") # # ################################################# # found_wrong_level = False # path_level_list = [] # for i in path_full_i.split("/"): # if not found_wrong_level: # path_level_list.append(i) # if " (" in i: # found_wrong_level = True # path_upto_error = "/".join(path_level_list) # my_file = Path(path_full_i) # if my_file.is_dir(): # size_i = os.path.getsize(path_full_i) # else: # continue # # If it's a small file size then it probably just has the init files and we're good to delete the dir # # Seems that all files are 512 bytes in size (I think it's bytes) # if size_i < 550: # my_file = Path(path_upto_error) # if my_file.is_dir(): # print("Removing dir:", path_upto_error) # # shutil.rmtree(path_upto_error) # else: # print(100 * "Issue | ") # print(path_full_i) # print(path_full_i) # print(path_full_i) # print(path_full_i) # print(path_full_i) # print("") # Removing files with ' (' in name (GDrive duplicates) # for subdir, dirs, files in os.walk(jobs_root_dir): # for file_i in files: # if " (" in file_i: # file_path_i = os.path.join(subdir, file_i) # print( # "Removing:", # file_path_i) # # os.remove(file_path_i) # # os.path.join(subdir, file_i) # assert False # df[df.job_type == "dos_bader"] # assert False ```
github_jupyter
``` from pygradu import shortest_path from pygradu import gridify import shapely.geometry import pandas as pd import numpy as np import importlib importlib.reload(gridify) grid_2500m = gridify.area_to_grid(side_length=2500) grid_5km = gridify.area_to_grid(side_length=5000) MODELS_DIR = 'data/models/' DATASET_DIR = 'data/datasets/' graph_df_adj_2500m = pd.read_csv(MODELS_DIR + 'complete_graph_adjacent_2500m.csv')[['original', 'connected', 'cost']] graph_df_ship_2500m = pd.read_csv(MODELS_DIR + 'complete_graph_ship_model_2500m.csv')[['original', 'connected', 'cost']] graph_df_adj_5km = pd.read_csv(MODELS_DIR + 'complete_graph_adjacent_5km.csv')[['original', 'connected', 'cost']] graph_df_ship_5km = pd.read_csv(MODELS_DIR + 'complete_graph_ship_model_5km.csv')[['original', 'connected', 'cost']] graph_df_adj_5km.head() importlib.reload(shortest_path) %time graph_adj_2500m = shortest_path.df_to_graph(graph_df_adj_2500m) %time graph_ship_2500m = shortest_path.df_to_graph(graph_df_ship_2500m) %time graph_adj_5km = shortest_path.df_to_graph(graph_df_adj_5km) %time graph_ship_5km = shortest_path.df_to_graph(graph_df_ship_5km) # Convert speed model to dict speed_model_2500m = pd.read_csv(MODELS_DIR + 'speed_model_2500m.csv', index_col=0).to_dict() speed_model_2500m = {int(k):v for k,v in speed_model_2500m.items()} speed_model_5km = pd.read_csv(MODELS_DIR + 'speed_model_5km.csv', index_col=0).to_dict() speed_model_5km = {int(k):v for k,v in speed_model_5km.items()} speed_model_5km[354] # Load test set test_voyages = pd.read_csv(DATASET_DIR + 'validation_set_summer.csv', index_col=0, parse_dates = ['timestamp', 'ata', 'atd']) test_voyages.head() # Load shallow water shallow_graph_2500m = set(pd.read_csv(MODELS_DIR + 'shallow_water_model_2500m.csv', index_col=0).original.values) shallow_graph_5km = set(pd.read_csv(MODELS_DIR + 'shallow_water_model_5km.csv', index_col=0).original.values) # Take first observation of every voyage test_voyages['course'] = -1 voyages = test_voyages.groupby('voyage') first_rows = [] for voyage, observations in voyages: course = shortest_path.angleFromCoordinatesInDeg([observations.iloc[0].lat, observations.iloc[0].lon], [observations.iloc[1].lat, observations.iloc[1].lon]) row = observations.iloc[1] row.course = course first_rows.append(row) validation_set = pd.DataFrame(data=first_rows, columns=test_voyages.columns) validation_set.head() # Predicting routes using the adjacent model 5km start_time = None importlib.reload(shortest_path) graph_adj_5km.use_turn_penalty = False graph_adj_5km.use_shallow_penalty = True graph_adj_5km.use_dirways = False %time routes_and_areas = shortest_path.predict_routes(validation_set, grid_5km, graph_adj_5km, speed_model_5km, None, shallow_graph_5km) routes_adjacent_5km = pd.DataFrame(data=routes_and_areas[0], columns=['lat', 'lon', 'node', 'speed', 'mmsi', 'voyage', 'start_time', 'number']) areas_adjacent_5km = pd.DataFrame(data=routes_and_areas[1], columns=['lat', 'lon', 'voyage','g', 'h', 'f']) importlib.reload(shortest_path) %time routes_timestamp_5km_adj = shortest_path.calculate_timestamps(routes_adjacent_5km) routes_timestamp_5km_adj.head() importlib.reload(shortest_path) %time results_5km_adj = shortest_path.test_accuracy(grid_5km, routes_timestamp_5km_adj, test_voyages) results_5km_adj.head() results_5km_adj.describe() # Load dirways dirways = pd.read_csv(DATASET_DIR + 'dirways_all_2018_2019.csv', parse_dates = ['publishtime', 'deletetime', 'createtime']) # Load test set test_voyages_winter = pd.read_csv(DATASET_DIR + 'validation_set_winter.csv', index_col=0, parse_dates = ['timestamp', 'ata', 'atd']) # Take first observation of every voyage test_voyages_winter['course'] = -1 voyages = test_voyages_winter.groupby('voyage') first_rows = [] for voyage, observations in voyages: course = shortest_path.angleFromCoordinatesInDeg([observations.iloc[0].lat, observations.iloc[0].lon], [observations.iloc[1].lat, observations.iloc[1].lon]) row = observations.iloc[1] row.course = course first_rows.append(row) validation_set_winter = pd.DataFrame(data=first_rows, columns=test_voyages_winter.columns) validation_set_winter.head() # Predicting winter routes using the adjacent model 5km start_time = None importlib.reload(shortest_path) graph_adj_5km.use_turn_penalty = False graph_adj_5km.use_shallow_penalty = True graph_adj_5km.use_dirways = True %time routes_and_areas_winter = shortest_path.predict_routes(validation_set_winter, grid_5km, graph_adj_5km, speed_model_5km, dirways, shallow_graph_5km) routes_adjacent_5km_winter = pd.DataFrame(data=routes_and_areas_winter[0], columns=['lat', 'lon', 'node', 'speed', 'mmsi', 'voyage', 'start_time', 'number']) areas_adjacent_5km_winter = pd.DataFrame(data=routes_and_areas_winter[1], columns=['lat', 'lon', 'voyage','g', 'h', 'f']) importlib.reload(shortest_path) %time routes_timestamp_5km_adj_winter = shortest_path.calculate_timestamps(routes_adjacent_5km_winter) routes_timestamp_5km_adj_winter.head() importlib.reload(shortest_path) %time results_5km_adj_winter = shortest_path.test_accuracy(grid_5km, routes_timestamp_5km_adj_winter, test_voyages_winter) results_5km_adj_winter.head() results_5km_adj_winter.describe() ```
github_jupyter
## Deep-Learning Particle Filter The goal of this project is to explore the idea of using machine-learning to learn the physical behavior of complexe systems and use these models as the "physical-update" in a particle filter to do state estimation. In order to develop this new technique, a numerical experiment is devised: the first attempt involves simulating a double-pendulum and using the afore-mentioned technique of deep-learning particle filtering in order to estimate the position of the entire pendulum (in this case the angle $\beta$) depending on the measurement of the angle of the first link. ![Double Pendulum](img/DoublePendulum.png "Double Pendulum") ## Steps for the Project: 1. Set up a simulator for a double pendulum 2. Plot alpha and beta for a set of initial conditions 3. Train an LSTM network on the variables $\alpha$, $\dot{\alpha}$ and see to what extent $\beta$ can be predicted. If this does not work, one could also train the network on $\alpha$, $\dot{\alpha}$, and $\dot{\beta}$ and see how that works. ### First Step: a Double Pendulum Simulator: Here we work with existing code to give a head-start, in particular we use the code from [this link](https://github.com/chris-greening/double-pendula). From this work, we plot the angle values. The following cell is copied directly. Afterwards, the code is adapted to our particular use. ``` #Author: Chris Greening #Date: 7/15/19 #Purpose: Another crack at the double pendulum to convert it to OOP #to support multiple pendula import numpy as np import pandas as pd from pandas import Series, DataFrame from numpy import sin, cos from scipy.integrate import odeint import matplotlib.pyplot as plt import matplotlib.animation as animation from IPython.display import HTML # Added by OCornes for Jupyter compatibility fig = plt.figure() class System: def __init__(self): self.double_pendulums = [] def add_double_pendulum(self, double_pendulum): pass class DoublePendulum: def __init__( self, L1=1, L2=1, m1=1, m2=1, g=-9.81, y0=[90, 0, -10, 0], tmax = 180, dt = .05, color = "g" ): self.tmax = tmax self.dt = dt self.t = np.arange(0, self.tmax+self.dt, self.dt) self.color = color self.g = g self.pendulum1 = Pendulum(L1, m1) self.pendulum2 = Pendulum(L2, m2) # Initial conditions: theta1, dtheta1/dt, theta2, dtheta2/dt. self.y0 = np.array(np.radians(y0)) # Do the numerical integration of the equations of motion self.y = odeint(self.derivative, self.y0, self.t, args=(self.pendulum1.L, self.pendulum2.L, self.pendulum1.m, self.pendulum2.m)) self.pendulum1.calculate_path(self.y[:, 0]) self.pendulum2.calculate_path(self.y[:, 2], self.pendulum1.x, self.pendulum1.y) self.w = self.y[:, 1] self.fig = fig self.ax_range = self.pendulum1.L + self.pendulum2.L self.ax = self.fig.add_subplot(111, autoscale_on=False, xlim=(-self.ax_range, self.ax_range), ylim=(-self.ax_range, self.ax_range)) self.ax.set_aspect('equal') self.ax.grid() self.pendulum1.set_axes(self.ax) self.pendulum2.set_axes(self.ax) self.line, = self.ax.plot([], [], 'o-', lw=2,color=self.color) self.time_template = 'time = %.1fs' self.time_text = self.ax.text(0.05, 0.9, '', transform=self.ax.transAxes) def derivative(self, y, t, L1, L2, m1, m2): """Return the first derivatives of y = theta1, z1, theta2, z2.""" theta1, z1, theta2, z2 = y c, s = np.cos(theta1-theta2), np.sin(theta1-theta2) theta1dot = z1 z1dot = (m2*self.g*np.sin(theta2)*c - m2*s*(L1*z1**2*c + L2*z2**2) - (m1+m2)*self.g*np.sin(theta1)) / L1 / (m1 + m2*s**2) theta2dot = z2 z2dot = ((m1+m2)*(L1*z1**2*s - self.g*np.sin(theta2) + self.g*np.sin(theta1)*c) + m2*L2*z2**2*s*c) / L2 / (m1 + m2*s**2) return theta1dot, z1dot, theta2dot, z2dot def init(self): self.line.set_data([], []) self.time_text.set_text('') return self.line, self.time_text class Pendulum: def __init__(self, L, m): self.L = L self.m = m def set_axes(self, ax): self.ax = ax # defines line that tracks where pendulum's have gone self.p, = self.ax.plot([], [], color='r-') self.w = self.ax.plot([], []) def calculate_path(self, y, x0=0, y0=0): self.x = self.L*np.sin(y) + x0 self.y = self.L*np.cos(y) + y0 def animate(i): arr = pendula #pendulum2, pendulum3, pendulum4, pendulum5, pendulum6] return_arr = [] for double_pendulum in arr: thisx = [0, double_pendulum.pendulum1.x[i], double_pendulum.pendulum2.x[i]] thisy = [0, double_pendulum.pendulum1.y[i], double_pendulum.pendulum2.y[i]] double_pendulum.line.set_data(thisx, thisy) double_pendulum.time_text.set_text(double_pendulum.time_template % (i*double_pendulum.dt)) return_arr.append(double_pendulum.line) return_arr.append(double_pendulum.time_text) return_arr.append(double_pendulum.pendulum1.p) return_arr.append(double_pendulum.pendulum2.p) return return_arr def random_hex(): hex_chars = "0123456789ABCDEF" hex_string = "#" for i in range(6): index = np.random.randint(0, len(hex_chars)) hex_string += hex_chars[index] return hex_string L1 = 5 L2 = 5 pendula = [] initial_dtheta = 0 initial_theta = 90 dtheta = .5 #creates pendula for _ in range(10): pendula.append(DoublePendulum(L1=L1,L2=L2,y0=[initial_theta-initial_dtheta, 0,-10,0], tmax=15, color=random_hex())) initial_dtheta += dtheta # plt.plot(pendula[0].x2, pendula[0].y2, color=pendula[0].color) ani = animation.FuncAnimation(fig, animate, np.arange(1, len(pendula[0].y)), interval=25, blit=True, init_func=pendula[0].init) ani.save('line.gif', dpi=80, writer='imagemagick') # HTML(ani.to_html5_video()) # plt.show() # removed by OCornes for Jupyter adaptation ``` ## Notes on observability: Before we can attempt to apply deep-learning on the double pendulum usefull, it is important to **determine if the system of our experiement is observable**. If it is not observable, i.e. the information necessary for estimating the variable of interest is not available in any way, then the experiment needs to be redesigned. Determining the observability of a non-linear system like the double pendulum requires techniques which use Lie derivatives. These techniques are complexe, and a proper execution of these is presently difficult, so instead we chose to **linearise the system around a chosen operating point (or a series of points), and build the experiment based on that.** The proper method to derive observability will be done subsequently, if the results on the deep learning experiment turn out promising. ### Observability for linear systems: reminder of the basic steps: (From Wikipedia) For time-invariant linear systems in the state space representation, there are convenient tests to check whether a system is observable. Consider a SISO system with $n$ state variables (see state space for details about MIMO systems) given by $\dot{x} = Ax + Bu$ $y = Cx + Du$ With $\vec{x}$ being the state vector, $\vec{y}$ the output vector and $u$ the input vector. If the rank of the observability matrix of the system is $n$, then the system is observable. The observability matrix for a linear system is: $O = \begin{bmatrix} C \\ CA \\ CA^2 \\ ... \\ CA^{n-1}\end{bmatrix}$ ### Linearising the double pendulum: Physical model of the double pendulum: $\vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} \theta_1 \\ \dot{\theta_1} \\ \theta_2 \\ \dot{\theta_2} \end{bmatrix}$ $\vec{\dot{x}} = \begin{bmatrix} x_2 \\ \frac{1}{L_1 (m_1 + m_2\sin(x_1-x_3)^2)} \left(m_2g\sin(x_3)\cos(x_1 - x_3) - m_2\sin(x_1 - x_3)\left( L_1 x_2^ 2 \cos(x_1-x_3) + L_2 x_4^2\right) - (m_1 + m_2)g\sin(x_1) \right)\\ x_4 \\ \frac{1}{L_2 (m_1 + m_2\sin(x_1-x_3)^2)} \left((m_1 + m_2)L_1 x_2^2\sin(x_1-x_3) - g\sin(x_3) + g\sin(x_1)\cos(x_1-x_3) + m_2 L_2 x_4^2 \sin(x_1-x_3)\cos(x_1- x_3) \right) \end{bmatrix}$ $\vec{y} = \begin{bmatrix} x_1 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0\\0 & 0 & 1 & 0\end{bmatrix} \vec{x}$ ### Jacobian of the system: We use Wolfram Alpha in order to derive the derivatives of the dynamics equations. The Jacobian has the following for in this case: $A = \frac{\partial \vec{\dot{x}}}{\vec{x}} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ a_{21} & a_{22} & a_{23} & a_{24}\\ 0 & 0 & 0 & 1 \\ a_{41} & a_{42} & a_{43} & a_{44}\\ \end{bmatrix} $ $a_{22} = \frac{-2m_2\sin(x_1-x_3)\cos(x_1-x_3)}{m_1 + m_2\sin(x_1-x_3)^2}x_2$ $a_{24} = \frac{L2}{L1}\frac{-2m_2\sin(x_1-x_3)}{m_1 + m_2\sin(x_1-x_3)^2}x_4$ $a_{42} = \frac{L1}{L2}\frac{2(m_1 + m_2)\sin(x_1-x_3)}{m_1 + m_2\sin(x_1-x_3)^2}x_2$ $a_{44} = \frac{2m_2\sin(x_1-x_3)\cos(x_1-x_3)}{m_1 + m_2\sin(x_1-x_3)^2}x_4$ ## Notes on machine learning for dynamical systems: ``` #================================================================== # Attempt for numerical differentiation of the system using python: #================================================================== import numpy as np g = 9.81 # From the double pendulum python script: #def derivative(y, t, L1, L2, m1, m2): def derivative(y, L1, L2, m1, m2): """Return the first derivatives of y = theta1, z1, theta2, z2.""" theta1, z1, theta2, z2 = y c, s = np.cos(theta1-theta2), np.sin(theta1-theta2) theta1dot = z1 z1dot = (m2*g*np.sin(theta2)*c - m2*s*(L1*z1**2*c + L2*z2**2) - (m1+m2)*g*np.sin(theta1)) / L1 / (m1 + m2*s**2) theta2dot = z2 z2dot = ((m1+m2)*(L1*z1**2*s - g*np.sin(theta2) + g*np.sin(theta1)*c) + m2*L2*z2**2*s*c) / L2 / (m1 + m2*s**2) return theta1dot, z1dot, theta2dot, z2dot # A = [[df1/dx1, df1/dx2, ...df1/dxN],[df2/dx1, df2/dx2, ...] ...[dfN/dx1, dfN/dx2, ... dfN/dxN]] # Testing derivative techniques for a single dfi/dxj: y = [0, 0, 0, 0] eps = 1e-6; y_ = y.copy() y_[1] += eps L1 = 1; L2 = 1; m1 = 1; m2 = 1; theta1dot, z1dot, theta2dot, z2dot = derivative(y, L1, L2, m1, m2) theta1dot_, z1dot_, theta2dot_, z2dot_ = derivative(y_, L1, L2, m1, m2) print('Vector:') print('theta1dot: '+ str(theta1dot_-theta1dot)) print('z1dot_: '+ str(z1dot_-z1dot)) print('theta2dot_: '+ str(theta2dot_-theta2dot)) print('z2dot_: '+ str(z2dot_-z2dot)) print(y_) print(y) # Numerical derivation function: def d_double_pendulum(y, L1, L2, m1, m2, eps): jacobian = np.zeros((4,4)) for i in range(4): y1, y2 = y.copy(), y.copy() y2[i] += eps/2 y1[i] -= eps/2 theta1dot2, z1dot2, theta2dot2, z2dot2 = derivative(y2, L1, L2, m1, m2) theta1dot1, z1dot1, theta2dot1, z2dot1 = derivative(y1, L1, L2, m1, m2) dtheta1dot = (theta1dot2-theta1dot1)/eps dz1dot = (z1dot2-z1dot1)/eps dtheta2dot = (theta2dot2-theta2dot1)/eps dz2dot = (z2dot2-z2dot1)/eps jacobian[:,i] = [dtheta1dot, dz1dot, dtheta2dot, dz2dot] return jacobian ''' def printJacobian(jacobian): for i in range(len(jacobian)): for j in range(len(jacobian)): print(round(jacobian[i][j],2),end=" ") print("\n")''' def printJacobian(jacobian): for i in range(len(jacobian)): print('{:^10}{:^10}{:^10}{:^10}'.format(round(jacobian[i][0],2), round(jacobian[i][1],2), round(jacobian[i][2],2), round(jacobian[i][3],2))) jacobian = d_double_pendulum(y, L1, L2, m1, m2, eps) printJacobian(jacobian) print(jacobian) rank = np.linalg.matrix_rank(jacobian) A = np.array(jacobian) A2= np.dot(A,A) A3= np.dot(A,A2) #A2= A*A #A3= A*A2 C = np.array([[1,0,0,0],[0,0,1,0]]) CA = np.dot(C,A) CA2= np.dot(C,A2) CA3= np.dot(C,A3) O = np.vstack((C,CA)) O = np.vstack((O,CA2)) O = np.vstack((O,CA3)) print(C) print(O) print(np.linalg.matrix_rank(O)) # Bro, kuck mal den Rank an: print(np.linalg.matrix_rank(O)) ``` ## Simulating a noisy double pendulum The goal of this section is to produce raw data from the simulation that can be fed into the neural net for learning. First, the data is extracted for a 10 second simulation for a single initial condition. We plot this on x-t plots, and possibly in the state space $x_1, x_2, x_3$ as a 3D trajectory. From there, we generate 10 second simulations for a variety of initial conditions and save it in csv format for the input to the neural nets. ``` # Pendulum simulator: import numpy as np import pandas as pd from pandas import Series, DataFrame from numpy import sin, cos from scipy.integrate import odeint class DoublePendulum: def __init__( self, L1=1, L2=1, m1=1, m2=1, g=-9.81, y0=[90, 0, -10, 0], tmax = 180, dt = .05, color = "g" ): self.tmax = tmax self.dt = dt self.t = np.arange(0, self.tmax+self.dt, self.dt) self.color = color self.g = g self.pendulum1 = Pendulum(L1, m1) self.pendulum2 = Pendulum(L2, m2) # Initial conditions: theta1, dtheta1/dt, theta2, dtheta2/dt. self.y0 = np.array(np.radians(y0)) # Do the numerical integration of the equations of motion self.y = odeint(self.derivative, self.y0, self.t, args=(self.pendulum1.L, self.pendulum2.L, self.pendulum1.m, self.pendulum2.m)) self.pendulum1.calculate_path(self.y[:, 0]) self.pendulum2.calculate_path(self.y[:, 2], self.pendulum1.x, self.pendulum1.y) self.w = self.y[:, 1] self.fig = fig self.ax_range = self.pendulum1.L + self.pendulum2.L self.ax = self.fig.add_subplot(111, autoscale_on=False, xlim=(-self.ax_range, self.ax_range), ylim=(-self.ax_range, self.ax_range)) self.ax.set_aspect('equal') self.ax.grid() self.pendulum1.set_axes(self.ax) self.pendulum2.set_axes(self.ax) self.line, = self.ax.plot([], [], 'o-', lw=2,color=self.color) self.time_template = 'time = %.1fs' self.time_text = self.ax.text(0.05, 0.9, '', transform=self.ax.transAxes) def derivative(self, y, t, L1, L2, m1, m2): """Return the first derivatives of y = theta1, z1, theta2, z2.""" theta1, z1, theta2, z2 = y c, s = np.cos(theta1-theta2), np.sin(theta1-theta2) theta1dot = z1 z1dot = (m2*self.g*np.sin(theta2)*c - m2*s*(L1*z1**2*c + L2*z2**2) - (m1+m2)*self.g*np.sin(theta1)) / L1 / (m1 + m2*s**2) theta2dot = z2 z2dot = ((m1+m2)*(L1*z1**2*s - self.g*np.sin(theta2) + self.g*np.sin(theta1)*c) + m2*L2*z2**2*s*c) / L2 / (m1 + m2*s**2) return theta1dot, z1dot, theta2dot, z2dot def init(self): self.line.set_data([], []) self.time_text.set_text('') return self.line, self.time_text class Pendulum: def __init__(self, L, m): self.L = L self.m = m def set_axes(self, ax): self.ax = ax # defines line that tracks where pendulum's have gone self.p, = self.ax.plot([], [], color='r-') self.w = self.ax.plot([], []) def calculate_path(self, y, x0=0, y0=0): self.x = self.L*np.sin(y) + x0 self.y = self.L*np.cos(y) + y0 def animate(i): arr = pendula #pendulum2, pendulum3, pendulum4, pendulum5, pendulum6] return_arr = [] for double_pendulum in arr: thisx = [0, double_pendulum.pendulum1.x[i], double_pendulum.pendulum2.x[i]] thisy = [0, double_pendulum.pendulum1.y[i], double_pendulum.pendulum2.y[i]] double_pendulum.line.set_data(thisx, thisy) double_pendulum.time_text.set_text(double_pendulum.time_template % (i*double_pendulum.dt)) return_arr.append(double_pendulum.line) return_arr.append(double_pendulum.time_text) return_arr.append(double_pendulum.pendulum1.p) return_arr.append(double_pendulum.pendulum2.p) return return_arr L1 = 5 L2 = 5 pendula = [] initial_dtheta = 0 initial_theta = 90 dtheta = .5 dp = DoublePendulum(L1=L1,L2=L2,y0=[initial_theta-initial_dtheta, 0,-10,0], tmax=15, color=random_hex()) t = dp.t # plot lines plt.plot(theta1, y, label = "$\theta_1$") #plt.plot(t, x, label = "line 2") plt.legend() plt.show() ``` # References: 1. [Double Pendulum Equations](http://www.maths.surrey.ac.uk/explore/michaelspages/documentation/Double.pdf) 2. [ESTIMATION AND CONTROL OF A DOUBLE-INVERTED PENDULUM](https://core.ac.uk/download/pdf/161989454.pdf) 3. [Control Theory: The Double Pendulum Inverted on a Cart](https://digitalrepository.unm.edu/cgi/viewcontent.cgi?article=1131&context=math_etds)
github_jupyter
``` %%capture # Important header information #amrwindfedir = '/projects/wind_uq/lcheung/amrwind-frontend' # official version #amrwindfedir = '/projects/hfm/lcheung/amrwind-frontend' amrwindfedir = '/projects/hfm/lcheung/amrwind-frontend/' import sys, os sys.path.insert(1, amrwindfedir) sys.path.insert(1, '../utilities') # Load the libraries import matplotlib.pyplot as plt import amrwind_frontend as amrwind import numpy as np from matplotlib import cm import SOWFAdata as sowfa # Also ignore warnings import warnings warnings.filterwarnings('ignore') # Make all plots inline %matplotlib inline # WRF Forcing data # SOWFA profile directories SOWFAdir = '../atlantic-vineyard/summer-stable/drivingData/' Tfile = SOWFAdir+'/givenSourceT' Ufile = SOWFAdir+'/givenSourceU_component' tfluxfile = SOWFAdir+'/surfaceTemperatureFluxTable' # Load the SOWFA data zT = sowfa.readsection(Tfile, 'sourceHeightsTemperature') sowfatime, sowfatemp = sowfa.readsection(Tfile, 'sourceTableTemperature', splitfirstcol=True) zMom = sowfa.readsection(Ufile, 'sourceHeightsMomentum') t1, sowfa_momu = sowfa.readsection(Ufile, 'sourceTableMomentumX', splitfirstcol=True) t2, sowfa_momv = sowfa.readsection(Ufile, 'sourceTableMomentumY', splitfirstcol=True) t3, sowfa_tflux = sowfa.readplainfile(tfluxfile, splitfirstcol=True) print("Loaded SOWFA profiles") itime=4 sowfatime[itime] ``` # Setup ## The input file ``` # Print the input file inputfile='ATLVINEYARD_test1.inp' %cat ATLVINEYARD_test1.inp ``` ## Plot the domain This is what the case looks like -- it should be a 1.5km x 1.5km case with wind from 230 degrees southwest. ``` # Start the amrwind_frontend app tutorial2 = amrwind.MyApp.init_nogui() # Load the input into the app tutorial2.loadAMRWindInput(inputfile) fig, ax = plt.subplots(figsize=(4,4), facecolor='w', dpi=150) # Set any additional items to plot tutorial2.popup_storteddata['plotdomain']['plot_sampleprobes'] = [] #['p_hub'] tutorial2.plotDomain(ax=ax) ``` # Postprocessing ``` # Set your run directory here casedir = './' # Average between 15,000 sec to 20,000 sec #avgtimes = [950, 1050] ``` ## Plot the sample planes ``` tutorial2.Samplepostpro_loadnetcdffile(casedir+'/post_processing/sampling00000.nc') levels =np.linspace(-10,0,41) fig, (ax1, ax2) = plt.subplots(1,2, figsize=(6,5), gridspec_kw={'width_ratios': [1, 0.05]}, dpi=150) im1 = tutorial2.plotSamplePlane('p_hub', 'velocityx', 300, 1, 'X','Y',ax=ax1, colorbar=False, levels=levels, cmap=cm.jet) fig.colorbar(im1[0], cax=ax2) ``` ## Plot the ABL statistics ``` tutorial2.ABLpostpro_loadnetcdffile(casedir+'/post_processing/abl_statistics00000.nc') timevec=[48, 72, 96, 108, 144] # First, let's look at the hub-height averaged statistics #tutorial2.ABLpostpro_printreport(avgt=avgtimes, avgz=plotheights) ``` ### Plot scalar statistics ``` fig, ax = plt.subplots(figsize=(4,4), facecolor='w', dpi=150) ustar=tutorial2.ABLpostpro_plotscalars(ax=ax, plotvars=['ustar']) ax.set_ylim([0, 0.5]) ``` ### Plot profile statistics ``` # Plot velocity fig, axs = plt.subplots(1,len(timevec),figsize=(2.5*len(timevec),5), facecolor='w', dpi=150, sharey=True) for it, time in enumerate(timevec): ax=axs[it] print('Time = %0.1f'%sowfatime[time]) SOWFA_Uhoriz = np.sqrt(sowfa_momu[time,:]**2 + sowfa_momv[time,:]**2) dat=tutorial2.ABLpostpro_plotprofiles(ax=ax, plotvars=['Uhoriz'], avgt=[sowfatime[time]-5, sowfatime[time]+5], doplot=False) ax.plot(SOWFA_Uhoriz, zMom, label='SOWFA WRF') ax.plot(dat['Uhoriz']['data'], dat['Uhoriz']['z'], label='AMR-Wind') ax.set_xlim([0, 25]) ax.set_xlabel('U [m/s]') ax.grid(linestyle=':', linewidth=0.5) ax.set_ylim([0,2000]) ax.set_title('Time: %0.1f hr'%(sowfatime[time]/3600.0)) axs[0].legend() axs[0].set_ylabel('z [m]') plt.suptitle('Horizontal velocity') # Plot temperature #timevec=[6,12,15] fig, axs = plt.subplots(1,len(timevec),figsize=(2.5*len(timevec),5), facecolor='w', dpi=150, sharey=True) for it, time in enumerate(timevec): ax=axs[it] print('Time = %0.1f'%sowfatime[time]) dat=tutorial2.ABLpostpro_plotprofiles(ax=ax, plotvars=['Temperature'], avgt=[sowfatime[time]-5, sowfatime[time]+5], doplot=False) ax.plot(sowfatemp[time,:], zT, label='SOWFA WRF') ax.plot(dat['T']['data'], dat['T']['z'], label='AMR-Wind') #ax.set_xlim([0, 15]) ax.set_xlabel('T [K]') ax.grid(linestyle=':', linewidth=0.5) ax.set_ylim([0,2000]) ax.set_title('Time: %0.1f hr'%(sowfatime[time]/3600.0)) axs[0].legend() axs[0].set_ylabel('z [m]') plt.suptitle('Temperature') # Plot TI #timevec=[6,12,15] fig, axs = plt.subplots(1,len(timevec),figsize=(2.5*len(timevec),5), facecolor='w', dpi=150, sharey=True) for it, time in enumerate(timevec): ax=axs[it] print('Time = %0.1f'%sowfatime[time]) dat=tutorial2.ABLpostpro_plotprofiles(ax=ax, plotvars=['TI_horiz'], avgt=[sowfatime[time]-5, sowfatime[time]+5], doplot=False) ax.plot(dat['TI_horiz']['data'], dat['TI_horiz']['z'], label='AMR-Wind') ax.set_xlim([0, 0.11]) ax.set_xlabel('TI [-]') ax.grid(linestyle=':', linewidth=0.5) ax.set_ylim([0,2000]) ax.set_title('Time: %0.1f hr'%(sowfatime[time]/3600.0)) axs[0].legend() axs[0].set_ylabel('z [m]') plt.suptitle('Horizontal TI') # Plot TKE #timevec=[6,12,15] fig, axs = plt.subplots(1,len(timevec),figsize=(2.5*len(timevec),5), facecolor='w', dpi=150, sharey=True) for it, time in enumerate(timevec): ax=axs[it] print('Time = %0.1f'%sowfatime[time]) dat=tutorial2.ABLpostpro_plotprofiles(ax=ax, plotvars=['TKE'], avgt=[sowfatime[time]-5, sowfatime[time]+5], doplot=False) ax.plot(dat['TKE']['data'], dat['TKE']['z'], label='AMR-Wind') ax.set_xlim([0, 0.2]) ax.set_xlabel('TKE [m^2/s^2]') ax.grid(linestyle=':', linewidth=0.5) ax.set_ylim([0,2000]) ax.set_title('Time: %0.1f hr'%(sowfatime[time]/3600.0)) axs[0].legend() axs[0].set_ylabel('z [m]') plt.suptitle('Horizontal TI') """ # Plot Reynolds Stresses fig, ax = plt.subplots(figsize=(6,6), facecolor='w', dpi=150) REstress=tutorial2.ABLpostpro_plotprofiles(ax=ax, plotvars=['ReStresses'], avgt=avgtimes) ax.plot(nalustressprof[:,1], nalustressprof[:,0], '--', label='Nalu-Wind uu') ax.plot(nalustressprof[:,2], nalustressprof[:,0], '--', label='Nalu-Wind uv') ax.legend() #ax.set_xlim([0, 0.2]) ax.set_xlabel('Reynolds Stresses [m^2/s^2]') ax.grid(linestyle=':', linewidth=0.5) ax.set_ylim([0,400]) #ax.set_title('Horizontal wind speed') """ ```
github_jupyter
## Example 2: Sensitivity analysis on a NetLogo model with SALib This notebook provides a more advanced example of interaction between NetLogo and a Python environment, using the SALib library (Herman & Usher, 2017; available through the pip package manager) to sample and analyze a suitable experimental design for a Sobol global sensitivity analysis. All files used in the example are available from the pyNetLogo repository at https://github.com/quaquel/pyNetLogo. ``` #Ensuring compliance of code with both python2 and python3 from __future__ import division, print_function try: from itertools import izip as zip except ImportError: # will be 3.x series pass %matplotlib inline import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import pyNetLogo #Import the sampling and analysis modules for a Sobol variance-based sensitivity analysis from SALib.sample import saltelli from SALib.analyze import sobol ``` SALib relies on a problem definition dictionary which contains the number of input parameters to sample, their names (which should here correspond to a NetLogo global variable), and the sampling bounds. Documentation for SALib can be found at https://salib.readthedocs.io/en/latest/. ``` problem = { 'num_vars': 6, 'names': ['random-seed', 'grass-regrowth-time', 'sheep-gain-from-food', 'wolf-gain-from-food', 'sheep-reproduce', 'wolf-reproduce'], 'bounds': [[1, 100000], [20., 40.], [2., 8.], [16., 32.], [2., 8.], [2., 8.]] } ``` We start by instantiating the wolf-sheep predation example model, specifying the _gui=False_ flag to run in headless mode. ``` netlogo = pyNetLogo.NetLogoLink(gui=False) netlogo.load_model('./models/Wolf Sheep Predation_v6.nlogo') ``` The SALib sampler will automatically generate an appropriate number of samples for Sobol analysis. To calculate first-order, second-order and total sensitivity indices, this gives a sample size of _n*(2p+2)_, where _p_ is the number of input parameters, and _n_ is a baseline sample size which should be large enough to stabilize the estimation of the indices. For this example, we use _n_ = 1000, for a total of 14000 experiments. For more complex analyses, parallelizing the experiments can significantly improve performance. An additional notebook in the pyNetLogo repository demonstrates the use of the ipyparallel library; parallel processing for NetLogo models is also supported by the Exploratory Modeling Workbench (Kwakkel, 2017). ``` n = 1000 param_values = saltelli.sample(problem, n, calc_second_order=True) ``` The sampler generates an input array of shape (_n*(2p+2)_, _p_) with rows for each experiment and columns for each input parameter. ``` param_values.shape ``` Assuming we are interested in the mean number of sheep and wolf agents over a timeframe of 100 ticks, we first create an empty dataframe to store the results. ``` results = pd.DataFrame(columns=['Avg. sheep', 'Avg. wolves']) ``` We then simulate the model over the 14000 experiments, reading input parameters from the param_values array generated by SALib. The repeat_report command is used to track the outcomes of interest over time. To later compare performance with the ipyparallel implementation of the analysis, we also keep track of the elapsed runtime. ``` import time t0=time.time() for run in range(param_values.shape[0]): #Set the input parameters for i, name in enumerate(problem['names']): if name == 'random-seed': #The NetLogo random seed requires a different syntax netlogo.command('random-seed {}'.format(param_values[run,i])) else: #Otherwise, assume the input parameters are global variables netlogo.command('set {0} {1}'.format(name, param_values[run,i])) netlogo.command('setup') #Run for 100 ticks and return the number of sheep and wolf agents at each time step counts = netlogo.repeat_report(['count sheep','count wolves'], 100) #For each run, save the mean value of the agent counts over time results.loc[run, 'Avg. sheep'] = counts['count sheep'].values.mean() results.loc[run, 'Avg. wolves'] = counts['count wolves'].values.mean() elapsed=time.time()-t0 #Elapsed runtime in seconds elapsed ``` The "to_csv" dataframe method provides a simple way of saving the results to disk. Pandas supports several more advanced storage options, such as serialization with msgpack, or hierarchical HDF5 storage. ``` results.to_csv('Sobol_sequential.csv') results = pd.read_csv('Sobol_sequential.csv', header=0, index_col=0) results.head(5) ``` We can then proceed with the analysis, first using a histogram to visualize output distributions for each outcome: ``` sns.set_style('white') sns.set_context('talk') fig, ax = plt.subplots(1,len(results.columns), sharey=True) for i, n in enumerate(results.columns): ax[i].hist(results[n], 20) ax[i].set_xlabel(n) ax[0].set_ylabel('Counts') fig.set_size_inches(10,4) fig.subplots_adjust(wspace=0.1) #plt.savefig('JASSS figures/SA - Output distribution.pdf', bbox_inches='tight') #plt.savefig('JASSS figures/SA - Output distribution.png', dpi=300, bbox_inches='tight') plt.show() ``` Bivariate scatter plots can be useful to visualize relationships between each input parameter and the outputs. Taking the outcome for the average sheep count as an example, we obtain the following, using the scipy library to calculate the Pearson correlation coefficient (r) for each parameter: ``` %matplotlib import scipy nrow=2 ncol=3 fig, ax = plt.subplots(nrow, ncol, sharey=True) sns.set_context('talk') y = results['Avg. sheep'] for i, a in enumerate(ax.flatten()): x = param_values[:,i] sns.regplot(x, y, ax=a, ci=None, color='k',scatter_kws={'alpha':0.2, 's':4, 'color':'gray'}) pearson = scipy.stats.pearsonr(x, y) a.annotate("r: {:6.3f}".format(pearson[0]), xy=(0.15, 0.85), xycoords='axes fraction',fontsize=13) if divmod(i,ncol)[1]>0: a.get_yaxis().set_visible(False) a.set_xlabel(problem['names'][i]) a.set_ylim([0,1.1*np.max(y)]) fig.set_size_inches(9,9,forward=True) fig.subplots_adjust(wspace=0.2, hspace=0.3) #plt.savefig('JASSS figures/SA - Scatter.pdf', bbox_inches='tight') #plt.savefig('JASSS figures/SA - Scatter.png', dpi=300, bbox_inches='tight') plt.show() ``` This indicates a positive relationship between the "sheep-gain-from-food" parameter and the mean sheep count, and negative relationships for the "wolf-gain-from-food" and "wolf-reproduce" parameters. We can then use SALib to calculate first-order (S1), second-order (S2) and total (ST) Sobol indices, to estimate each input's contribution to output variance. By default, 95% confidence intervals are estimated for each index. ``` Si = sobol.analyze(problem, results['Avg. sheep'].values, calc_second_order=True, print_to_console=False) ``` As a simple example, we first select and visualize the first-order and total indices for each input, converting the dictionary returned by SALib to a dataframe. ``` Si_filter = {k:Si[k] for k in ['ST','ST_conf','S1','S1_conf']} Si_df = pd.DataFrame(Si_filter, index=problem['names']) Si_df sns.set_style('white') fig, ax = plt.subplots(1) indices = Si_df[['S1','ST']] err = Si_df[['S1_conf','ST_conf']] indices.plot.bar(yerr=err.values.T,ax=ax) fig.set_size_inches(8,4) #plt.savefig('JASSS figures/SA - Indices.pdf', bbox_inches='tight') #plt.savefig('JASSS figures/SA - Indices.png', dpi=300, bbox_inches='tight') plt.show() ``` The "sheep-gain-from-food" parameter has the highest ST index, indicating that it contributes over 50% of output variance when accounting for interactions with other parameters. However, it can be noted that the confidence bounds are overly broad due to the small _n_ value used for sampling, so that a larger sample would be required for reliable results. For instance, the S1 index is estimated to be larger than ST for the "random-seed" parameter, which is an artifact of the small sample size. We can use a more sophisticated visualization to include the second-order interactions between inputs. ``` import itertools from math import pi def normalize(x, xmin, xmax): return (x-xmin)/(xmax-xmin) def plot_circles(ax, locs, names, max_s, stats, smax, smin, fc, ec, lw, zorder): s = np.asarray([stats[name] for name in names]) s = 0.01 + max_s * np.sqrt(normalize(s, smin, smax)) fill = True for loc, name, si in zip(locs, names, s): if fc=='w': fill=False else: ec='none' x = np.cos(loc) y = np.sin(loc) circle = plt.Circle((x,y), radius=si, ec=ec, fc=fc, transform=ax.transData._b, zorder=zorder, lw=lw, fill=True) ax.add_artist(circle) def filter(sobol_indices, names, locs, criterion, threshold): if criterion in ['ST', 'S1', 'S2']: data = sobol_indices[criterion] data = np.abs(data) data = data.flatten() # flatten in case of S2 # TODO:: remove nans filtered = ([(name, locs[i]) for i, name in enumerate(names) if data[i]>threshold]) filtered_names, filtered_locs = zip(*filtered) elif criterion in ['ST_conf', 'S1_conf', 'S2_conf']: raise NotImplementedError else: raise ValueError('unknown value for criterion') return filtered_names, filtered_locs def plot_sobol_indices(sobol_indices, criterion='ST', threshold=0.01): '''plot sobol indices on a radial plot Parameters ---------- sobol_indices : dict the return from SAlib criterion : {'ST', 'S1', 'S2', 'ST_conf', 'S1_conf', 'S2_conf'}, optional threshold : float only visualize variables with criterion larger than cutoff ''' max_linewidth_s2 = 15#25*1.8 max_s_radius = 0.3 # prepare data # use the absolute values of all the indices #sobol_indices = {key:np.abs(stats) for key, stats in sobol_indices.items()} # dataframe with ST and S1 sobol_stats = {key:sobol_indices[key] for key in ['ST', 'S1']} sobol_stats = pd.DataFrame(sobol_stats, index=problem['names']) smax = sobol_stats.max().max() smin = sobol_stats.min().min() # dataframe with s2 s2 = pd.DataFrame(sobol_indices['S2'], index=problem['names'], columns=problem['names']) s2[s2<0.0]=0. #Set negative values to 0 (artifact from small sample sizes) s2max = s2.max().max() s2min = s2.min().min() names = problem['names'] n = len(names) ticklocs = np.linspace(0, 2*pi, n+1) locs = ticklocs[0:-1] filtered_names, filtered_locs = filter(sobol_indices, names, locs, criterion, threshold) # setup figure fig = plt.figure() ax = fig.add_subplot(111, polar=True) ax.grid(False) ax.spines['polar'].set_visible(False) ax.set_xticks(ticklocs) ax.set_xticklabels(names) ax.set_yticklabels([]) ax.set_ylim(top=1.4) legend(ax) # plot ST plot_circles(ax, filtered_locs, filtered_names, max_s_radius, sobol_stats['ST'], smax, smin, 'w', 'k', 1, 9) # plot S1 plot_circles(ax, filtered_locs, filtered_names, max_s_radius, sobol_stats['S1'], smax, smin, 'k', 'k', 1, 10) # plot S2 for name1, name2 in itertools.combinations(zip(filtered_names, filtered_locs), 2): name1, loc1 = name1 name2, loc2 = name2 weight = s2.loc[name1, name2] lw = 0.5+max_linewidth_s2*normalize(weight, s2min, s2max) ax.plot([loc1, loc2], [1,1], c='darkgray', lw=lw, zorder=1) return fig from matplotlib.legend_handler import HandlerPatch class HandlerCircle(HandlerPatch): def create_artists(self, legend, orig_handle, xdescent, ydescent, width, height, fontsize, trans): center = 0.5 * width - 0.5 * xdescent, 0.5 * height - 0.5 * ydescent p = plt.Circle(xy=center, radius=orig_handle.radius) self.update_prop(p, orig_handle, legend) p.set_transform(trans) return [p] def legend(ax): some_identifiers = [plt.Circle((0,0), radius=5, color='k', fill=False, lw=1), plt.Circle((0,0), radius=5, color='k', fill=True), plt.Line2D([0,0.5], [0,0.5], lw=8, color='darkgray')] ax.legend(some_identifiers, ['ST', 'S1', 'S2'], loc=(1,0.75), borderaxespad=0.1, mode='expand', handler_map={plt.Circle: HandlerCircle()}) sns.set_style('whitegrid') fig = plot_sobol_indices(Si, criterion='ST', threshold=0.005) fig.set_size_inches(7,7) #plt.savefig('JASSS figures/Figure 8 - Interactions.pdf', bbox_inches='tight') #plt.savefig('JASSS figures/Figure 8 - Interactions.png', dpi=300, bbox_inches='tight') plt.show() ``` In this case, the sheep-gain-from-food variable has strong interactions with the wolf-gain-from-food and sheep-reproduce inputs in particular. The size of the ST and S1 circles correspond to the normalized variable importances. Finally, the kill_workspace() function shuts down the NetLogo instance. ``` netlogo.kill_workspace() ```
github_jupyter
# XSAR example open a dataset with [xsar.open_dataset](../basic_api.rst#xsar.open_dataset) ``` import xsar import os import numpy as np # get test file. You can replace with an path to other SAFE filename = xsar.get_test_file('S1A_IW_GRDH_1SDV_20170907T103020_20170907T103045_018268_01EB76_Z010.SAFE') ``` ## Open a dataset with a xsar.Sentinel1Meta object A [xsar.Sentinel1Meta](../basic_api.rst#xsar.Sentinel1Meta) object handles all attributes and methods that can't be embdeded in a `xarray.Dataset` object. It can also replace a filename in `xarray.open_dataset` ``` sar_meta = xsar.Sentinel1Meta(filename) sar_meta ``` If holoviews extension is loaded, the `<Sentinel1Meta objet>` have a nice representation. (`matplolib` is also a valid extension) ``` import holoviews as hv hv.extension('bokeh') sar_meta ``` `sar_meta` object is an [xsar.Sentinel1Meta](../basic_api.rst#xsar.Sentinel1Meta) object that can be given to `xarray.open_dataset` or [xsar.Sentinel1Dataset](../basic_api.rst#xsar.Sentinel1Dataset) , as if it was a filename: ``` sar_ds = xsar.Sentinel1Dataset(sar_meta) sar_ds ``` ## Open a dataset at lower resolution `resolution` keyword can be used to open a dataset at lower resolution. It might be: * a dict `{'atrack': 20, 'xtrack': 20}`: 20*20 pixels. so if sensor resolution is 10m, the final resolution will be 100m * a string like `'200m'`: Sensor resolution will be automatically used to compute the window size Then we can instantiate a [xsar.Sentinel1Dataset](../basic_api.rst#xsar.Sentinel1Dataset), with the given resolution. Note that the above pixel size has changed. ``` sar_ds = xsar.Sentinel1Dataset(sar_meta, resolution='200m') sar_ds ``` ## Extract a sub image of 10*10km around a lon/lat point ### Convert (lon,lat) to (atrack, xtrack) we can use [sar_meta.ll2coords](../basic_api.rst#xsar.Sentinel1Meta.ll2coords) to convert (lon,lat) to (atrack, xtrack): ``` # from a shapely object point_lonlat = sar_meta.footprint.centroid point_coords = sar_meta.ll2coords(point_lonlat.x, point_lonlat.y) point_coords ``` The result is floating, because it is the position inside the pixel. If real indexes from existing dataset is needed, you'll have to use [sar_ds.ll2coords](../basic_api.rst#xsar.Sentinel1Dataset.ll2coords) Result will be the nearest (atrack, xtrack) in the dataset ``` point_coords = sar_ds.ll2coords(point_lonlat.x, point_lonlat.y) point_coords ``` ### Extract the sub-image ``` box_size = 10000 # 10km dist = {'atrack' : int(np.round(box_size / 2 / sar_meta.pixel_atrack_m)), 'xtrack': int(np.round(box_size / 2 / sar_meta.pixel_xtrack_m))} dist ``` The xarray/dask dataset is available as a property : [sar_ds.dataset](../basic_api.rst#xsar.Sentinel1Dataset.dataset). This attribute can be set to a new values, so the attributes like pixel spacing and coverage are correctly recomputed: ``` # select 10*10 km around point_coords sar_ds.dataset = sar_ds.dataset.sel(atrack=slice(point_coords[0] - dist['atrack'], point_coords[0] + dist['atrack']), xtrack=slice(point_coords[1] - dist['xtrack'], point_coords[1] + dist['xtrack'])) sar_ds sar_ds.dataset ```
github_jupyter
# Dynamic Programming adopted from: https://www.amazon.com/Python-Algorithms-Mastering-Basic-Language/dp/148420056X Python Algorithms: Mastering Basic Algorithms in the Python Language ![Bellman](https://upload.wikimedia.org/wikipedia/en/7/7a/Richard_Ernest_Bellman.jpg) https://en.wikipedia.org/wiki/Richard_E._Bellman ## Dynamic Programming - not the programming in computer terms! The term dynamic programming (or simply DP) can be a bit confusing to newcomers. Both of the words are used in a different way than most might expect. Programming here refers to making a set of choices (as in “linear programming”) and thus has more in common with the way the term is used in, say, television, than in writing computer programs. Dynamic simply means that things change over time—in this case, that each choice depends on the previous one. In other words, this “dynamicism” has little to do with the program you’ll write and is just a description of the problem class. In Bellman’s own words, “I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities * The core technique of DP -> caching * Decompose your problem recursively/inductively (usual) * allow overlap between the subproblems. * Plain recursive solution xponential number of times -> caching trims away waste * result is usually both an impressively efficient algorithm and a greater insight into the problem. Commonly, DP algorithms turn the recursive formulation upside down, making it iterative and filling out some data structure (such as a multidimensional array) step by step. * Another option well suited to high-level languages such as Python—is to implement the recursive formulation directly but to cache the return values. * If a call is made more than once with the same arguments, the result is simply returned directly from the cache. This is known as **memoization** ## Little puzzle: Longest Increasing Subsequence Say you have a sequence of numbers, and you want to find its longest increasing (or, rather nondecreasing) subsequence—or one of them, if there are more. A subsequence consists of a subset of the elements in their original order. So, for example, in the sequence [3, 1, 0, 2, 4], one solution would be [1, 2, 4]. ``` from itertools import combinations def naive_lis(seq): for length in range(len(seq), 0, -1): # n, n-1, ... , 1 for sub in combinations(seq, length): # Subsequences of given length if list(sub) == sorted(sub): # An increasing subsequence? return sub # Return it! naive_lis([3,1,0,2,4]) naive_lis([5,2,1,6,3,7,4,6]) # how about complexity? # Two nested loops -> n^2 ? # Hint combinations is not O(1).... ## Fibonacci def fib(i): # finding i-th member in our Fibonacci chain 1,1,2,3,5,8,13,21,34,55,89 if i < 2: return 1 else: return fib(i-1) + fib(i-2) fib(5) fib(10) for n in range(1,15): print(fib(n), fib(n-1), round(fib(n)/fib(n-1),5)) ``` ![Fib Spiral](https://upload.wikimedia.org/wikipedia/commons/thumb/9/93/Fibonacci_spiral_34.svg/500px-Fibonacci_spiral_34.svg.png) https://en.wikipedia.org/wiki/Golden_ratio ``` # so far so good? fib(100) ``` #https://stackoverflow.com/questions/35959100/explanation-on-fibonacci-recursion ![FibTree](https://i.stack.imgur.com/QVSdv.png) ``` from functools import wraps def memo(func): cache = {} # Stored subproblem solutions, this dictionary - Hashmap type @wraps(func) # Make wrap look like func def wrap(*args): # The memoized wrapper if args not in cache: # Not already computed? cache[args] = func(*args) # Compute & cache the solution return cache[args] # Return the cached solution return wrap # Return the wrapper fib_memo = memo(fib) #functions are first class citizens in Python fib_memo(35) %%timeit fib(35) @memo # this is a decorator - meaning we wrap our fib_m function in memo function def fib_m(i): if i < 2: return 1 else: return fib_m(i-1) + fib_m(i-2) %%timeit fib_m(35) fib_m(36) # well the book implemention does not quite work, it is not caching properly fib_m(40) fib_m(60) fib_m(200) fib_m(1000) fib_m(2000) # sadly memoization will not solved stack overflow problem fib_m(5000) # sadly memoization will not help us with stack overflow , a good way to force kernel restart :) import functools # https://stackoverflow.com/questions/1988804/what-is-memoization-and-how-can-i-use-it-in-python @functools.lru_cache(maxsize=None) #by default only 128 latest def fib(num): if num < 2: return num else: return fib(num-1) + fib(num-2) fib(35) %%timeit fib(35) # so official caching version is 3 times faster than our self-made version, factor of 3 is not a dealbreak but nice to know fib(40) fib(100) fib(200) fib(1200) fib(2000) fib(3000) fib(5000) # so the problem with TOP-DOWN memoization is that we are still left with recursive calls going over our stack limit ``` ## So how to solve the stack overflow problem? ``` # we could try the build up solution - meaning BOTTOM-UP method # this will usually be an iterative solution so no worries about stack # silly iterative version def fib_it(n): # so n will be 1 based fibs = [1,1] #so we are going to build our answers # lets pretend we do not know of any formulas and optimizations # n += 1 # fix this if n < 2: return fibs[n] # off by one errors ndx = 2 while ndx <= n: fibs.append(fibs[ndx-1]+fibs[ndx-2]) # so I am building a 1-d table(array/list) of answers ndx+=1 return fibs[n] # again off by one indexing 0 based in python and 1 based in our function fib_it(2) for n in range(0,10+1): print(fib_it(n)) fib_it(5),fib_it(6) fib_it(35),fib_it(36) %%timeit fib_it(35) # so one way we could improve is that we do not need to store all this knowledge about previous solutions, # (unless we were building a table of ALL solutions) # so we only need to store 2 values def fib_v2(n): prev, cur = 1, 1 if n <= 1: return prev ndx = 2 while ndx <= n: prev, cur = cur, prev+cur # python makes it easy to assign 2 values at once with tuple unpacking ndx += 1 return cur fib_v2(5),fib_v2(6) fib_v2(35) %%timeit fib_v2(35) ``` # Pascal's triangle ![Triangle](https://upload.wikimedia.org/wikipedia/commons/0/0d/PascalTriangleAnimated2.gif) The combinatorial meaning of C(n,k) is the number of k-sized subsets you can get from a set of size n. In mathematics, a combination is a selection of items from a collection, such that the order of selection does not matter (unlike permutations). https://en.wikipedia.org/wiki/Combination ``` # this is horrible again we have 2 recursive calls for each call def C(n,k): if k == 0: return 1 if n == 0: return 0 return C(n-1,k-1) + C(n-1,k) C(3,0),C(3,1),C(3,2),C(3,3) C(4,0),C(4,1),C(4,2),C(4,3),C(4,4) C(6,3) C(20,12) @functools.lru_cache(maxsize=None) def C_mem(n,k): if k == 0: return 1 if n == 0: return 0 return C_mem(n-1,k-1) + C_mem(n-1,k) C_mem(20,12) %%timeit C(20,12) %%timeit C_mem(20,12) ``` So 3 Million times faster And it will only get worse with larger n and k values ``` C(22,10) C_mem(30,22) C_mem(500,127) C_mem(2000,1555) ``` You may at times want to rewrite your code to make it iterative. This can make it faster, and you avoid exhausting the stack if the recursion depth gets excessive. There’s another reason, too: The iterative versions are often based on a specially constructed cache, rather than the generic “dict keyed by parameter tuples” used in my @memo. This means that you can sometimes use more efficient structures, such as the multidimensional arrays of NumPy, or even just nested lists. This custom cache design makes it possible to do use DP in more low-level languages(ahem C, C++), where general, abstract solutions such as our @memo decorator are often not feasible. Note that even though these two techniques often go hand in hand, you are certainly free to use an iterative solution with a more generic cache or a recursive one with a tailored structure for your subproblem solutions. Let’s reverse our algorithm, filling out Pascal’s triangle directly. ``` from collections import defaultdict def pascal_up(n,k): # Cit = defaultdict(int) Cit = {} # turns out going to plain dictionary did not help at all, slow down by 10% for row in range(n+1): Cit[row,0] = 1 for col in range(1,k+1): # looking like O(n*k) space and time complexity here right? Cit[row,col] = Cit.get((row-1,col-1),0) + Cit.get((row-1,col),0) return Cit[n,k] pascal_up(20,12) %%timeit pascal_up(20,12) %%timeit pascal_up(20,12) pascal_up(200,120) C_mem(200,120) pascal_up(2000,1255) C_mem(2000,1255) # so we see the need for iterative version - other way would be to allow tail call optimization in functional languages # so we could futher save memory for our pascal_up by only saving the needed information meaning we only need the previous row ``` # Difference between TOP-DOWN (with memoization) and BOTTOM-UP (with filling up DP table) Basically the same thing is going on. The main difference is that we need to figure out which cells in the cache need to be filled out, and we need to find a safe order to do it in so that when we’re about to calculate C[row,col], the cells C[row-1,col-1] and C[row-1,col] are already calculated. With the memoized function, we needn’t worry about either issue: It will calculate whatever it needs recursively. ``` ## Back to LIS # so recursive memoized solution - TOP/DOWN def rec_lis(seq): # Longest increasing subseq. @functools.lru_cache(maxsize=None) def L(cur): # Longest ending at seq[cur] res = 1 # Length is at least 1 for pre in range(cur): # Potential predecessors if seq[pre] <= seq[cur]: # A valid (smaller) predec. res = max(res, 1 + L(pre)) # Can we improve the solution? return res return max(L(i) for i in range(len(seq))) # The longest of them all rec_lis([3,1,0,2,4]) # so recursive memoized solution - TOP/DOWN # TODO add sequence passing def rec_lis_full(seq): # Longest increasing subseq. @functools.lru_cache(maxsize=None) def L(cur): # Longest ending at seq[cur] res = 1 # Length is at least 1 for pre in range(cur): # Potential predecessors if seq[pre] <= seq[cur]: # A valid (smaller) predec. res = max(res, 1 + L(pre)) # Can we improve the solution? return res return max(L(i) for i in range(len(seq))) # The longest of them all # tabulated solution def basic_lis(seq): L = [1] * len(seq) for cur, val in enumerate(seq): for pre in range(cur): if seq[pre] <= val: L[cur] = max(L[cur], 1 + L[pre]) return max(L) # TODO add iterative sequence passing basic_lis([3,1,2,0,4]) for i,n in enumerate("Valdis"): print(i,n) basic_lis([3,1,0,2,4,7,9,6]) ``` A crucial insight is that if more than one predecessor terminate subsequences of length m, it doesn’t matter which one of them we use—they’ll all give us an optimal answer. Say, we want to keep only one of them around; which one should we keep? The only safe choice would be to keep the smallest of them, because that wouldn’t wrongly preclude any later elements from building on it. So let’s say, inductively, that at a certain point we have a sequence end of endpoints, where end[idx] is the smallest among the endpoints we’ve seen for increasing subsequences of length idx+1 (we’re indexing from 0). Because we’re iterating over the sequence, these will all have occurred earlier than our current value, val. All we need now is an inductive step for extending end, finding out how to add val to it. If we can do that, at the end of the algorithm len(end) will give us the final answer—the length of the longest increasing subsequence. This devilishly clever little algorithm was first was first described by Michael L. Fredman in 1975 ``` from bisect import bisect def lis(seq): # Longest increasing subseq. end = [] # End-values for all lengths for val in seq: # Try every value, in order idx = bisect(end, val) # Can we build on an end val? if idx == len(end): end.append(val) # Longest seq. extended else: end[idx] = val # Prev. endpoint reduced return len(end) # The longest we found lis([3,1,0,2,4,7,9,6]) ```
github_jupyter
# 03 - Beginner Exercises * String --- ## 🍉🍉🍉 1.Assign the string below to the variable. "**I’m just highly motivated to do nothing**" ``` # Write your own code in this cell mystring = print() ``` ## 🍉🍉🍉 2.By using **last** , **fifth** and **one before the last** characters of the ```mystring```, create a new string. ``` # Write your own code in this cell ``` ## 🍉🍉🍉 3.replace **g** with **s** in ```mystring``` using ```replace()``` method ``` # Write your own code in this cell print(mystring) ``` ## 🍉🍉 4.The following string of characters appears to be reversed; please fix it and print the result. **".enod er’uoy nehw wonk reven uoy ,drah si gnihton gnioD"** ``` # Write your own code in this cell ``` ## 🍉🍉 5.Reassign ```mystring``` with this pharase **"Friday, my second favorite F word."** and use one of the built-in methods for strings to lowercase all the characters. ``` # Write your own code in this cell print(mystring) ``` ## 🍉🍉 6.Use a built-in method to check if ```mystring``` end with a "." ``` # Write your own code in this cell # True or False check = print(check) ``` ## 🍉🍉 7.Print ```mystring``` and then Using index() method, identify the index of character: (**f**) in ```mystring``` ``` # Write your own code in this cell ``` ## 🍉 8.As you can see, in the previous exercise we had another **f** character in the ```mystring```, but we got the index of the first **f**. Can you find the index of the last **f** using a built-in method? ``` # Write your own code in this cell ``` ## 🍉 9.identify the index of character: (**F**) in ```mystring``` ,and then Explain the outcome ``` # Write your own code in this cell ``` ## 🍉 10.Using ```find()``` method, identify the index of character: **f** and **F** in ```mystring``` and compare the results with the previous exercise. ``` # Write your own code in this cell ``` ## 🍉 11.Take a look at the following string and then make the string so that everything is properly and first letter is capital with one function. ``` mystring = "i DiDn’T fAll dOwN, i DId aTtTCk thE fLOor, tHough." print(mystring) ``` ## 🍉🍉 12.Which character occur more often in the ```mystring```? "e" or "o" ? .Within the print function, print both counts. ``` # Write your own code in this cell mystring = "A jellyfish has existed as a species for 500 million years, surviving just fine without a brain. That gives hope to quite a few people." count_e = count_o = print("count of e is: ", count_e , " count of o is: ", count_o) ``` ## 🍉🍉 13.Using the print function, print the types of two given variables and print the length of ```number1```? ``` number1 = "99.41" number2 = 99.41 ``` ## 🌶️ 14.Convert 99.41 to a string. Then add $ sign to this string and assign the result to the ```str``` variable and then print ```str```. Can you explain the result? ``` number2 = 99.41 str = ```
github_jupyter
``` import sys sys.path.append('../..') import pyotc ``` ### Density of water ``` from scipy import array, arange rho = array([0.99984, 0.99970, 0.99821, 0.99565, 0.99222, 0.98803, 0.98320, 0.97778, 0.97182, 0.96535, 0.95840 ]) * 1000 temp = array(arange(0, 101, 10, dtype=float)) from lmfit import Parameters, Model, report_fit pars = Parameters() pars.add('a', value=-1, vary=True, min=None, max=None) pars.add('b', value=0, vary=True, min=None, max=None) pars.add('c', value=0, vary=True, min=None, max=None) pars.add('d', value=0, vary=True, min=None, max=None) p3 = lambda a, b, c, d, t0, temp: a * (temp-t0)**3 + b * (temp-t0)**2 + c * (temp-t0) + d t0 = 0.0 # degC poly3 = Model(lambda a, b, c, d, temp: p3(a, b, c, d, t0, temp), name='poly3', independent_vars=['temp'], param_names=['a', 'b', 'c', 'd']) poly3_res = poly3.fit(rho, temp=temp, params=pars ) report_fit(poly3_res) import matplotlib.pyplot as plt from pyotc.plotting import get_residual_plot_axes, add_plot_to_figure plt.ion() fig = plt.figure(figsize=(9, 6)) ax1, ax2 = get_residual_plot_axes(figure=fig)[0] add_plot_to_figure(fig, temp, rho, fmt='x', axis=ax1) add_plot_to_figure(fig, temp, poly3_res.eval(), axis=ax1) ax1.set_xlabel('') ax1.set_xticklabels([]) ax1.set_ylabel('Density (kg/m³)') add_plot_to_figure(fig, temp, poly3_res.residual / rho * 100, axis=ax2) ax2.set_ylabel('Error (%)') ax2.set_xlabel('Temperature (degC)'); fig.tight_layout() ``` ### Viscosity of water ``` #determine a, b, and c with resp. to Kelvin from pyotc.physics import viscosity_H2O from numpy import vectorize from scipy import array, arange from pyotc.psd_fitting import gen_fit_pars eta_CRC = array([1793, 1307, 1002, 797.7, 653.2, 547.0, 466.5, 404.0, 354.4, 314.5, 281.8 ]) * 1e-6 T_CRC = array([0, 10 ,20, 30, 40, 50, 60, 70, 80, 90, 100]) from lmfit import Model, report_fit def fitfun(T, a, b, c): return a / (1 + b * T + c * T**2) pars = gen_fit_pars(a = 1.79458, b = 0.03586, c = 0.00019) # celsius model mdl = Model(fitfun, independent_vars=['T']) mzr = mdl.fit(eta_CRC, T=T_CRC, params=pars) report_fit(mzr) pars_fit = mzr.params ax = pyotc.add_plot_to_figure(None, T_CRC, eta_CRC, fmt='ob') fig = ax.figure t_degC = arange(0, 100, 1) pyotc.add_plot_to_figure(fig, t_degC, fitfun(t_degC, pars_fit['a'].value, pars_fit['b'].value, pars_fit['c'].value)) # kelvin model mzr2 = mdl.fit(eta_CRC, T=T_CRC+273.15, params=pars) report_fit(mzr2) pars_fit2 = mzr2.params ax = pyotc.add_plot_to_figure(None, T_CRC+273.15, eta_CRC, fmt='ob') fig = ax.figure t_K = arange(0, 100, 1) + 273.15 pyotc.add_plot_to_figure(fig, t_K, fitfun(t_K, pars_fit2['a'].value, pars_fit2['b'].value, pars_fit2['c'].value)); pyotc.add_plot_to_figure(fig, t_K, vectorize(viscosity_H2O)(t_K), fmt='^') #fig ``` ### Comparing Viscosity formulas: CRC Handbook fit to other Equations ### Kinematic viscosity of water ``` from pyotc.physics import kinematic_viscosity_H2O, viscosity_H2O from numpy import vectorize vkin = vectorize(kinematic_viscosity_H2O) vvis = vectorize(viscosity_H2O) temp = arange(20.0, 40.0, 0.1, dtype=float) ax1, ax2 = get_residual_plot_axes()[0] fig = ax1.figure temp = arange(20.0, 40.0, 0.1, dtype=float) kin_vis = vkin(temp + 273.15) kin_vis_1000 = vvis(temp + 273.15) / 1000 add_plot_to_figure(fig, temp, kin_vis, axis=ax1, label='Kinematic viscosity of water') add_plot_to_figure(fig, temp, kin_vis_1000, axis=ax1, label='Kinematic viscosity of water with rho=1000 kg/m³', showLegend=True) ax1.set_xlabel('') ax1.set_xticklabels([]) ax1.set_ylabel('Kinematic viscosity (m²/s)'); add_plot_to_figure(fig, temp, (kin_vis_1000 - kin_vis)/kin_vis * 100, axis=ax2, label='Error') ax2.set_xlabel('Temperature (degC)') ax2.set_ylabel('Error (%)'); from scipy import linspace, power, exp, array #viscosity_wiki = lambda T: 2.414e-5 * power(10, (247.8 / (T-140))) viscosity_wiki2 = lambda a, b, T: exp(a + b/T) * 1e-3 eta_CRC = array([1793, 1307, 1002, 797.7, 653.2, 547.0, 466.5, 404.0, 354.4, 314.5, 281.8 ]) * 1e-6 T_CRC = array([0, 10 ,20, 30, 40, 50, 60, 70, 80, 90, 100]) T_fine = linspace(0, 100, 50) v_CRC = vvis(T_fine + 273.15) #v_wiki = viscosity_wiki(T_fine) v_wiki2 = viscosity_wiki2(-6.944, 2036.8, T_fine + 273.15) fig = plt.figure(figsize=(9,6)) ax1, ax2 = get_residual_plot_axes(figure=fig)[0] add_plot_to_figure(fig, T_CRC, eta_CRC, fmt='o', label='CRC Handbook data', axis=ax1) add_plot_to_figure(fig, T_fine, v_CRC, fmt='-g', label='Polynomial fit', axis=ax1) add_plot_to_figure(fig, T_fine, v_wiki2, fmt='-r', label='Andrade equation', axis=ax1, title='Viscosity of Water', ylabel=r'Viscosity $(\mathrm{Pa\,s})$', showLegend=True) ax1.set_xticklabels([]) ax1.set_xlabel('') add_plot_to_figure(fig, T_CRC, (eta_CRC - vvis(T_CRC + 273.15)),# / eta_CRC, axis=ax2, fmt='-g', label='deviation of fit from CRC Handbook data' ) add_plot_to_figure(fig, T_CRC, eta_CRC - viscosity_wiki2(-6.944, 2036.8, T_CRC + 273.15),# / eta_CRC, axis=ax2, fmt='-r', label='deviation of wiki equation from CRC Handbook data', xlabel=r'Temperature $(\mathrm{^\circ C})$', ylabel=r'$\Delta\eta \mathrm{Pa\,s}$' ); fig.tight_layout() ```
github_jupyter
## Operators on Multiple Bits [Watch Lecture](https://youtu.be/vd21d1KTC5c) We explain how to construct the operator of a composition system when we apply an operator to one bit or to a few bits of the composite system. *Here we have a simple rule, we assume that the identity operator is applied on the rest of the bits.* ### Single bit operators When we have two bits, then our system has four states and any operator of the system can be defined as a $ (4 \times 4) $-dimensional matrix. For example, if we apply the probabilistic operator $ M = \mymatrix{c}{ 0.3 & 0.6 \\ 0.7 & 0.4 } $ to the second bit, then how can we represent the corresponding $ (4 \times 4) $-dimensional matrix? The answer is easy. By assuming that the identity operator is applied to the first bit, the matrix is $$ I \otimes M = \I \otimes \mymatrix{c}{ 0.3 & 0.6 \\ 0.7 & 0.4 } = \mymatrix{cccc} { 0.3 & 0.6 & 0 & 0 \\ 0.7 & 0.4 & 0 & 0 \\ 0 & 0 & 0.3 & 0.6 \\ 0 & 0& 0.7 & 0.4 }. $$ <h3> Task 1</h3> We have two bits. What is $ (4 \times 4) $-dimensional matrix representation of the probabilistic operator $ M = \mymatrix{c}{ 0.2 & 0.7 \\ 0.8 & 0.3 } $ applied to the first bit? <h3>Solution</h3> We assume that the identity operator is applied to the second bit: $$ M \otimes I = \mymatrix{rr}{ 0.2 & 0.7 \\ 0.8 & 0.3 } \otimes \I = \mymatrix{rrrr}{ 0.2 & 0 & 0.7 & 0 \\ 0 & 0.2 & 0 & 0.7 \\ 0.8 & 0 & 0.3 & 0 \\ 0 & 0.8 & 0 & 0.3} $$. <h3> Task 2</h3> We have three bits. What is $ (8 \times 8) $-dimensional matrix representation of the probabilistic operator $ M = \mymatrix{c}{ 0.9 & 0.4 \\ 0.1 & 0.6 } $ applied to the second bit? <h3>Solution</h3> We assume that the identity operators are applied to the first and third bits: $ I \otimes M \otimes I = \I \otimes \mymatrix{rr}{ 0.9 & 0.4 \\ 0.1 & 0.6 } \otimes \I $. Tensor product is associative and so it does not matter from which pair we start. We first calculate the tensor product of the second and third matrices: $$ I \otimes \mypar{ M \otimes I } = \I \otimes \mymatrix{rrrr}{ 0.9 & 0 & 0.4 & 0 \\ 0 & 0.9 & 0 & 0.4 \\ 0.1 & 0 & 0.6 & 0 \\ 0 & 0.1 & 0 & 0.6} = \mymatrix{rrrr|rrrrr}{0.9 & 0 & 0.4 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0.9 & 0 & 0.4 & 0 & 0 & 0 & 0 \\ 0.1 & 0 & 0.6 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0.1 & 0 & 0.6 & 0 & 0 & 0 & 0 \\ \hline 0 & 0 & 0 & 0 & 0.9 & 0 & 0.4 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0.9 & 0 & 0.4 \\ 0 & 0 & 0 & 0 & 0.1 & 0 & 0.6 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0.1 & 0 & 0.6} $$ ### Two bits operators We start with an easy example. We have three bits and we apply the probabilistic operator $ M = \mymatrix{rrrr}{0.05 & 0 & 0.70 & 0.60 \\ 0.45 & 0.50 & 0.20 & 0.25 \\ 0.20 & 0.35 & 0.10 & 0 \\ 0.30 & 0.15 & 0 & 0.15 } $ to the first and second bits. Then, the corresponding $ (8 \times 8) $-dimensional matrix is $ M \otimes I $, where $I$ is the $(2 \times 2)$-dimensional Identity matrix. If $ M $ is applied to the second and third bits, then the corresponding matrix is $ I \otimes M $. **What if $ M $ is applied to the first and third bits?** We pick an example transition: it is given in $ M $ that $ \greenbit{0} \brownbit{1} \xrightarrow{0.35} \greenbit{1} \brownbit{0} $. - That is, when the first bit is 0 and third bit is 1, the first bit is set to 1 and the the third bit is set to 0 with probability 0.35: $$ \myarray{ccccc}{\mbox{first-bit} & \mbox{third-bit} & probability & \mbox{first-bit} & \mbox{third-bit} \\ \greenbit{0} & \brownbit{1} & \xrightarrow{0.35} & \greenbit{1} & \brownbit{0} } $$ - We put the second bit in the picture by assuming that the identity operator is applied to it: $$ \myarray{ccccccc}{ \mbox{first-bit} & \mbox{second-bit} & \mbox{third-bit} & probability & \mbox{first-bit} & \mbox{second-bit} & \mbox{third-bit} \\ \greenbit{0} & \bluebit{0} & \brownbit{1} & \xrightarrow{0.35} & \greenbit{1} & \bluebit{0} & \brownbit{0} \\ \greenbit{0} & \bluebit{1} & \brownbit{1} & \xrightarrow{0.35} & \greenbit{1} & \bluebit{1} & \brownbit{0} \\ \\ \hline \\ \greenbit{0} & \bluebit{0} & \brownbit{1} & \xrightarrow{0} & \greenbit{1} & \bluebit{1} & \brownbit{0} \\ \greenbit{0} & \bluebit{1} & \brownbit{1} & \xrightarrow{0} & \greenbit{1} & \bluebit{0} & \brownbit{0} } $$ <h3> Task 3</h3> Why are the last two transition probabilities zeros in the above table? <h3> Task 4</h3> We have three bits and the probabilistic operator $ M = \mymatrix{rrrr}{0.05 & 0 & 0.70 & 0.60 \\ 0.45 & 0.50 & 0.20 & 0.25 \\ 0.20 & 0.35 & 0.10 & 0 \\ 0.30 & 0.15 & 0 & 0.15 } $ is applied to the first and third bits. What is the corresponding the $(8 \times 8)$-dimensional matrix applied to the whole system? *You may solve this task by using python.* ``` # the given matrix M = [ [0.05, 0, 0.70, 0.60], [0.45, 0.50, 0.20, 0.25], [0.20, 0.35, 0.10, 0], [0.30, 0.15, 0, 0.15] ] # # you may enumarate the columns and rows by the strings '00', '01', '10', and '11' # int('011',2) returns the decimal value of the binary string '011' # # # your solution is here # # the given matrix M = [ [0.05, 0, 0.70, 0.60], [0.45, 0.50, 0.20, 0.25], [0.20, 0.35, 0.10, 0], [0.30, 0.15, 0, 0.15] ] print("Matrix M is") for row in M: print(row) print() # the target matrix is K # we create it and filled with zeros K = [] for i in range(8): K.append([]) for j in range(8): K[i].append(0) # for each transition in M, we create four transitions in K, two of which are always zeros for col in ['00','01','10','11']: for row in ['00','01','10','11']: prob = M[int(col,2)][int(row,2)] # second bit is 0 newcol = col[0]+'0'+col[1] newrow = row[0]+'0'+row[1] K[int(newcol,2)][int(newrow,2)] = prob # second bit is 1 newcol = col[0]+'1'+col[1] newrow = row[0]+'1'+row[1] K[int(newcol,2)][int(newrow,2)] = prob print("Matrix K is") for row in K: print(row) ``` ### Controlled operators The matrix form of the controlled-NOT operator is as follows: $$ CNOT = \mymatrix{cc|cc}{ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \hline 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 } = \mymatrix{c|c}{ I & \mathbf{0} \\ \hline \mathbf{0} & X}, $$ where $ X $ denotes the NOT operator. Similarly, for a given single bit operator $ M $, we can define the controlled-$M$ operator (where the first bit is the control bit and the second bit is target bit) as follows: $$ CM = \mymatrix{c|c}{ I & \mathbf{0} \\ \hline \mathbf{0} & M } $$ By definition: * when the first bit is 0, the identity is applied to the second bit, and * when the first bit is 1, the operator $ M $ is applied to the second bit. Here we observe that the matrix $ CM $ has a nice form because the first bit is control bit. The matrix $ CM $ given above is divided into four sub-matrices based on the states of the first bit. Then, we can follow that * the value of the first bit never changes, and so the off diagonal sub-matrices are zeros; * when the first bit is 0, the identity is applied to the second bit, and so top-left matrix is $ I $; and, * when the first bit is 1, the operator $ M $ is applied to the second bit, and so the bottom-right matrix is $ M $. <h3> Task 5</h3> Let $ M = \mymatrix{cc}{0.7 & 0.4 \\ 0.3 & 0.6} $ be a single bit operator. What is the matrix form of the controlled-$M$ operator where the first bit is the target bit and the second bit is the control bit. <h3>Solution</h3> When the second bit is zero, the state of the first bit does not change. We can write this as * $ 00 \xrightarrow{1} 00 $ and * $ 10 \xrightarrow{1} 10 $, So, we have the first and third columns as $ \myvector{ 1 \\ 0 \\ 0 \\0 } $ and $ \myvector{0 \\ 0 \\ 1 \\ 0} $, respectively. When the second bit is one, the operator $ M $ is appled to the first bit. We can write this as * $ \pstate{ \bluebit{0} \redbit{1} } \rightarrow 0.7 \pstate{ \bluebit{0} \redbit{1} } + 0.3 \pstate{ \bluebit{1} \redbit{1} } $, and * $ \pstate{ \bluebit{1} \redbit{1} } \rightarrow 0.4 \pstate{ \bluebit{0} \redbit{1} } + 0.6 \pstate{ \bluebit{1} \redbit{1} } $. Thus, we also have the second and fourth columns as $ \myvector{ 0 \\ 0.7 \\ 0 \\ 0.3 } $ and $ \myvector{0 \\ 0.4 \\ 0 \\ 0.6} $. Therefore, the overall matrix is $ \mymatrix{cccc}{ 1 & 0 & 0 & 0 \\ 0 & 0.7 & 0 & 0.4 \\ 0 & 0 & 1 & 0 \\ 0 & 0.3 & 0 & 0.6 }. $ ### Controlled operator activated when in state 0 For a given single bit operator $ M $, **how can we obtain the following operator** by using the operator $ CM $? $$ C_0M = \mymatrix{c|c}{ M & \mathbf{0} \\ \hline \mathbf{0} & I } $$ Controlled operator are defined to be triggered when the control bit is in state 1. In this example, we expect it to be triggered when the control bit is in state 0. Here we can use a simple trick. We first apply NOT operator to the first bit, and then the CM operator, and again NOT operator. In this way, we guarentee that $ M $ is applied to the second bit if the first bit is state 0 and do nothing if the first bit is in state 1. In short: $$ C_0M = (X \otimes I) \cdot (CM) \cdot ( X \otimes I ). $$ <h3> Task 6</h3> Verify that $ C_0M = (X \otimes I) \cdot (CM) \cdot ( X \otimes I ) = \mymatrix{c|c}{ M & \mathbf{0} \\ \hline \mathbf{0} & I } $. <h3>Solution</h3> We start with $ X \otimes I $, which is equal to $ \X \otimes \I = \mymatrix{cc|cc}{ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \hline 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 } = \mymatrix{c|c}{ \mathbf{0} & I \\ \hline I & \mathbf{0} } $. $$ C_0M = (X \otimes I) \cdot (CM) \cdot ( X \otimes I ) = \mymatrix{c|c}{ \mathbf{0} & I \\ \hline I & \mathbf{0} } \mymatrix{c|c}{ I & \mathbf{0} \\ \hline \mathbf{0} & M } \mymatrix{c|c}{ \mathbf{0} & I \\ \hline I & \mathbf{0} } $$ This multiplication can be easily done by seeing the sub-matrices as the entries of $ (2 \times 2) $-matrices (*[block matrix multiplication](https://en.wikipedia.org/wiki/Block_matrix)*). The multiplication of the first two matrices are $ \mymatrix{c|c}{ \mathbf{0} & I \\ \hline I & \mathbf{0} } \mymatrix{c|c}{ I & \mathbf{0} \\ \hline \mathbf{0} & M } = \mymatrix{c|c}{ \mathbf{0} & M \\ \hline I & \mathbf{0} }. $ Then, its multiplication with the third matrix is $ \mymatrix{c|c}{ \mathbf{0} & M \\ \hline I & \mathbf{0} } \mymatrix{c|c}{ \mathbf{0} & I \\ \hline I & \mathbf{0} } = \mymatrix{c|c}{ M & \mathbf{0} \\ \hline \mathbf{0} & I } $. Alternatively, we define $ M $ as $ \mymatrix{cc}{a & b \\ c & d} $, and then verify the result by doing all multiplications explicitly. $$ C_0M = (X \otimes I) \cdot (CM) \cdot ( X \otimes I ) = \mymatrix{cc|cc}{ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \hline 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 } \cdot \mymatrix{cc|cc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \hline 0 & 0 & a & b \\ 0 & 0 & c & d} \cdot \mymatrix{cc|cc}{ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \hline 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 } = $$ $$ \mymatrix{cc|cc}{ 0 & 0 & a & b \\ 0 & 0 & c & d \\ \hline 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 } \mymatrix{cc|cc}{ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \hline 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 } = \mymatrix{cc|cc}{ a & b & 0 & 0 \\ c & d & 0 & 0 \\ \hline 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 } = \mymatrix{c|c}{ M & \mathbf{0} \\ \hline \mathbf{0} & I }. $$ <h3> Task 7</h3> For the given two single bit operators $ M $ and $ N $, let $ CM $ and $ CN $ be the controlled-$M$ and controlled-$N$ operators. By using $ X $, $ CM $, and $ CN $ operators, how can we obtain the operator $ \mymatrix{c|c}{ M & \mathbf{0} \\ \hline \mathbf{0} & N} $?
github_jupyter
# 02 - Experiment - machine learning - model tuning ``` from IPython.core.display import display, HTML display(HTML("""<style> .container {width:96% !important;}</style>""")) from IPython.display import IFrame import pandas as pd import numpy as np from __future__ import division import xgboost as xgb import sys sys.path.insert(0,'../') from utils.paths import * # mod = reload(sys.modules['utils.paths']) # vars().update(mod.__dict__) import preprocessing as pp reload(pp); import graphs as gg reload(gg); ``` ## Preprocessing ``` nat = pd.read_csv(path_SBA + 'SBAnational_new.csv', sep = ';', low_memory=False) nat34 = nat[nat.ApprovalFY.isin([2003, 2004])].reset_index(drop = True) nat5 = nat[nat.ApprovalFY.isin([2005])].reset_index(drop = True) # Add job related features # nat34 nat34['Expanding'] = nat34.CreateJob.apply(pp.expanding) nat34['Retaining'] = nat34.CreateJob.apply(pp.retaining) nat34['Expanding_ratio'] = nat34.apply(lambda x: pp.expanding_ratio(x['CreateJob'], x['NoEmp']), axis= 1) nat34['Retaining_ratio'] = nat34.apply(lambda x: pp.retaining_ratio(x['RetainedJob'], x['NoEmp']), axis= 1) # nat5 nat5['Expanding'] = nat5.CreateJob.apply(pp.expanding) nat5['Retaining'] = nat5.CreateJob.apply(pp.retaining) nat5['Expanding_ratio'] = nat5.apply(lambda x: pp.expanding_ratio(x['CreateJob'], x['NoEmp']), axis= 1) nat5['Retaining_ratio'] = nat5.apply(lambda x: pp.retaining_ratio(x['RetainedJob'], x['NoEmp']), axis= 1) use_col = ['LoanNr_ChkDgt', 'Name', 'City', 'State', 'Bank', 'BankState', 'NAICS', 'ApprovalDate', 'ApprovalFY', 'Term', 'NoEmp', 'NewExist', 'CreateJob', 'RetainedJob', 'FranchiseCode', 'UrbanRural', 'RevLineCr', 'LowDoc', 'ChgOffDate', 'DisbursementDate', 'DisbursementGross', 'BalanceGross', 'ChgOffPrinGr', 'GrAppv', 'SBA_Appv', 'default', 'Zip5d', 'Zip3d', 'SBA_ratio', 'RealEstate', 'NAICS_default_rate', 'NAICS_group', 'suffix', 'Loan_age', 'Previous_loan', 'default_times', 'fips', 'BusinessType', 'Expanding', 'Retaining', 'Expanding_ratio', 'Retaining_ratio' ] print nat34.shape, nat5.shape nat34 = nat34[use_col] nat5 = nat5[use_col] print nat34.shape, nat5.shape # nat[use_col].head().T # save_csv(nat34, 'nat34.csv') # save_csv(nat5, 'nat5.csv') nat34.head() ``` # Build machine learning model with the trainning set ## Train, Test split ``` from sklearn import model_selection Train, Test = model_selection.train_test_split(nat34, test_size = 0.25, random_state = 1868, stratify = nat34.default ) print Train.shape, Test.shape print Train.default.sum(), Test.default.sum() print Train.default.sum()/Train.shape[0], Test.default.sum()/Test.shape[0] print Train.columns.tolist() # Preprocessing train set features = Train target = Train.default drop = ['LoanNr_ChkDgt', 'Name', 'ApprovalDate', 'ApprovalFY', 'ChgOffDate', 'DisbursementDate', 'DisbursementGross', 'BalanceGross', 'ChgOffPrinGr', 'GrAppv', 'SBA_Appv', 'SBA_ratio', 'default', 'FranchiseCode', 'Term', 'NAICS'] categorical = ['City', 'State', 'Zip5d', 'Zip3d', 'Bank', 'BankState', 'RevLineCr', 'LowDoc', 'NAICS_group', 'suffix', 'fips', 'BusinessType', 'Expanding_ratio', 'Retaining_ratio' ] dict_categorical, features = pp.extract_train_features(features, drop, categorical) print features.shape print target.sum() features.head() X_train, X_test, y_train, y_test = model_selection.train_test_split(features, target, test_size = 0.25, random_state=3776, stratify=target ) dtrain = xgb.DMatrix(X_train.values, label=y_train.values) dtest = xgb.DMatrix(X_test.values, y_test.values) num_rounds = 1100 # num_rounds = 2000 params = {'silent':1, 'eta':0.01, 'max_depth':11, 'subsample': 0.6, 'colsample_bytree': 0.4, 'min_child_weight':1, 'objective':'binary:logistic', 'eval_metric':'auc', 'seed':2017, 'gamma':0.1, 'nthread':-1} watchlist = [(dtrain, 'train'),(dtest,'validation')] bst=xgb.train(params, dtrain, num_rounds, watchlist, early_stopping_rounds = 50, verbose_eval = False); num_rounds = bst.best_iteration print num_rounds print bst.best_iteration, bst.best_score # dir(bst) # Use all the train data to train the model X_train_matrix = features.values #SKLEARN clf_xgb = xgb.XGBClassifier(silent = params['silent'], learning_rate = params['eta'], max_depth = params['max_depth'], subsample = params['subsample'], colsample_bytree = params['colsample_bytree'], min_child_weight = params['min_child_weight'], objective = params['objective'], n_estimators = num_rounds, seed = params['seed'], nthread = params['nthread'], gamma = params['gamma'] ) clf_xgb.fit(X_train_matrix, target, eval_metric ='auc') ``` # Validate model with test set ``` # Preprocessing test set test_X = Test.copy() # Preprocessing drop = ['LoanNr_ChkDgt', 'Name', 'ApprovalDate', 'ApprovalFY', 'ChgOffDate', 'DisbursementDate', 'DisbursementGross', 'BalanceGross', 'ChgOffPrinGr', 'GrAppv', 'SBA_Appv', 'SBA_ratio', 'default', 'FranchiseCode', 'Term', 'NAICS'] categorical = ['City', 'State', 'Zip5d', 'Zip3d', 'Bank', 'BankState', 'RevLineCr', 'LowDoc', 'NAICS_group', 'suffix', 'fips', 'BusinessType', 'Expanding_ratio', 'Retaining_ratio' ] test_bas = pp.extract_test_features(test_X, drop, categorical, dict_categorical) # Prediction for col in features.columns: if col not in test_bas.columns: print 'MISSING COLUMN: ',col test_bas= test_bas[features.columns] X_test_matrix = test_bas.values print X_train_matrix.shape, X_test_matrix.shape y_test_pred_xgb = clf_xgb.predict_proba(X_test_matrix) # temp = pd.DataFrame(y_test_pred_xgb) result_table_test = test_X[['LoanNr_ChkDgt', 'Name', 'ApprovalFY', 'State', 'default', 'ChgOffPrinGr', 'GrAppv', 'SBA_Appv', 'SBA_ratio']] result_table_test.loc[:, 'prob'] = y_test_pred_xgb[:,1] result_table_test.head() %pylab inline pylab.rcParams['figure.figsize'] = (10, 10) gg.plot_roc(result_table_test.default, result_table_test.prob, 'Test') ``` # Check model features ``` # Feature Importance #BOOSTER dtrain_ex=xgb.DMatrix(features.values, label=target.values, feature_names=features.columns) bst_ex=xgb.train(params, dtrain_ex, num_boost_round=bst.best_iteration, verbose_eval=False ) bst_ex.feature_names[:10] def plot_features_importance(bst): x = bst.get_fscore() sorted_x = sorted(x.items(), key=lambda x: x[1], reverse=True) keys_max = [item[0] for item in sorted_x[:30]] feat_max = {key: x[key] for key in keys_max} fig, ax = plt.subplots(1, 1, figsize=(20, 15)) xgb.plot_importance(feat_max, ax=ax) def print_features_importance(bst): x = bst.get_fscore() sorted_x = sorted(x.items(), key=lambda x: x[1], reverse=True) keys_max = [item[0] for item in sorted_x[:30]] feat_max = {key: x[key] for key in keys_max} features_importance = pd.DataFrame([feat_max]).T features_importance = features_importance.rename(columns = {0: 'Score'}) features_importance = features_importance.sort_values('Score', ascending=False) return features_importance plot_features_importance(bst_ex) feat_max = print_features_importance(bst_ex) feat_max.rename(columns = {'Score': 'Accumlated score'}).head(15) ``` # Assign grades using test set ``` # Tune grades based on percentile def tuning_grades(num_grades, prob): Percentile = list(np.linspace(0, 100, num_grades+1)) thresholds = [np.percentile(prob, i) for i in Percentile] thresholds[0] = 0 thresholds[-1] = 1 thresholds = [round(i, 3) for i in thresholds] return thresholds prob_th = tuning_grades(5, result_table_test.prob) prob_th grades = [str(g) for g in range(1,6)] result_table_test.loc[:, 'Grade'] = pd.cut(result_table_test.prob, bins=prob_th, labels=grades) result_table_test.loc[:, 'Grade'] = result_table_test['Grade'].astype('int') result_table_test.Grade.value_counts().sort_index() gg.plot_grade_roc(result_table_test.default, result_table_test.Grade, result_table_test.prob, 'Prediction of Test set 2003 & 2004') result_table_test.groupby('Grade').default.sum()/result_table_test.Grade.value_counts() ``` # Apply machine learning model to 2005 data and predict defaults ``` Test.head() # Preprocessing projection set proj_X = nat5.copy() # Preprocessing drop = ['LoanNr_ChkDgt', 'Name', 'ApprovalDate', 'ApprovalFY', 'ChgOffDate', 'DisbursementDate', 'DisbursementGross', 'BalanceGross', 'ChgOffPrinGr', 'GrAppv', 'SBA_Appv', 'SBA_ratio', 'default', 'FranchiseCode', 'Term', 'NAICS'] categorical = ['City', 'State', 'Zip5d', 'Zip3d', 'Bank', 'BankState', 'RevLineCr', 'LowDoc', 'NAICS_group', 'suffix', 'fips', 'BusinessType', 'Expanding_ratio', 'Retaining_ratio' ] proj_bas = pp.extract_test_features(proj_X, drop, categorical, dict_categorical) # Projection (2005 data) for col in features.columns: if col not in proj_bas.columns: print 'MISSING COLUMN: ',col proj_bas= proj_bas[features.columns] X_proj_matrix = proj_bas.values print X_train_matrix.shape, X_test_matrix.shape, X_proj_matrix.shape y_proj_pred_xgb = clf_xgb.predict_proba(X_proj_matrix) temp = pd.DataFrame(y_proj_pred_xgb) result_table_proj = proj_X[['LoanNr_ChkDgt', 'Name', 'ApprovalFY', 'State', 'default', 'ChgOffPrinGr', 'GrAppv', 'SBA_Appv', 'SBA_ratio']] result_table_proj.loc[:, 'prob'] = y_proj_pred_xgb[:,1] result_table_proj.head() %pylab inline pylab.rcParams['figure.figsize'] = (10, 10) gg.plot_roc(result_table_proj.default, result_table_proj.prob, 'Projection 2005') result_table_proj.loc[:, 'Grade'] = pd.cut(result_table_proj.prob, bins=prob_th, labels=grades) result_table_proj.loc[:, 'Grade'] = result_table_proj['Grade'].astype('int') result_table_proj.Grade.value_counts().sort_index() gg.plot_grade_roc(result_table_proj.default, result_table_proj.Grade, result_table_proj.prob, 'Project 2005 with grades') result_table_proj.head() save_csv(result_table_proj, 'result_table_proj.csv') save_csv(proj_bas, 'proj_bas.csv') save_dict(dict_categorical, 'dict_categorical') save_model(clf_xgb, 'clf_xgb') save_model(bst_ex, 'bst_ex') ```
github_jupyter
# Caso de uso: corrección y traducción de un texto <div class="alert alert-warning"> <i class="fa fa-bug"></i> Si no queréis perder los resultados de ejemplo, **no ejecutéis** el cuaderno de jupyter sin tener una clave propia de la API de OpenAI. Estos cuadernos solo cumplen una función ilustrativa de los códigos y formas de utilizar el modelo GPT-3 con Python. En caso de disponer de una **clave**, guardadla en un archivo **.env** para mayor seguridad como **un texto entrecomillado asignado a la variable OPENAI_API_KEY**. </div> ## Autentificación La API de OpenAI utiliza claves de API para la autentificación. <div class="alert alert-danger"> <i class="fa fa-exclamation-circle"></i> Recuerda que tu clave de API es un secreto. No la compartas con otros ni la expongas en ningún código del lado del cliente (navegadores, aplicaciones). Las solicitudes de producción deben dirigirse a través de su propio servidor *backend*, donde su clave de API puede cargarse de forma segura desde una variable de entorno o un servicio de gestión de claves. </div> Todas las solicitudes de API deben incluir su clave de API, es importante almacenar en un documento seguro la llave. Para ello, crea un archivo nuevo `.env` para almacenarla en su interior de la siguiente forma: `OPENAI_API_KEY = "MI_API_KEY"` Con esto, para recuperar la clave de la API, tendremos que usar el **getenv** de `os`. ``` import os import openai from dotenv import load_dotenv load_dotenv() openai.api_key = os.getenv("OPENAI_API_KEY") ``` ## Introducción En este cuaderno voy a tratar de utilizar a GPT-3 para dos tareas: **corregir un texto en castellano y traducirlo a inglés**. La entrada de texto estará cargada desde un documento Word y generará dos nuevos documentos: uno con la corrección y otro con la traducción. GPT-3 solo es capaz de hacer una tarea a la vez, por lo que estas acciones requerirán de dos llamadas a la API consecutivas, donde el resultado de una será la entrada para la petición de la segunda. Lo primero que voy a hacer, como en cualquier problema, es hacer la carga de bibliotecas que necesito para conseguir extraer el texto del documento Word, utilizaré para ello la biblioteca `docx` para Python. Con esta biblioteca, lo que haremos es extraer los párrafos del documento Word y, en este caso, pasar cada párrafo como una petición diferenciada para GPT-3. ``` # Instalar biblioteca si no la tenéis #!pip3 install python-docx from docx import Document documento = Document(docx='./documentos/original.docx').paragraphs for parrafos in documento: print(len(parrafos.text)) print(parrafos.text) ``` Tenemos el texto cargado en la variable *documento*, toca ahora pedirle a GPT-3 que haga la corrección de texto, estos trabajos se hacen estructurando el mensaje que le enviamos a GPT-3 a través de **Completion**. Tanto para las correcciones como traducciones, el modelo funciona y entiende correctamente la tarea a realizar si utilizamos las etiquetas **Original:** y **Corrección:**. Además, como quiero que se ajuste al texto a traducir y no genere más texto, utilizaré como secuencia de parada el propio *salto de línea (\n)* o la palabra *Corregido:*, para que, cuando termine de recorrer el texto, interrumpa la ejecución. Como son tareas complejas que requieren del contexto de la entrada de texto, utilizaremos el motor *Davinci*; es recomendable ajustar el motor utilizado a la tarea que vamos a pedirle para poder mejorar el rendimiento general. ``` correcciones = [] for parrafo in documento: entrada = 'Original: ' + parrafo.text + '\nCorregido:' correccion = openai.Completion.create(engine='davinci', prompt=entrada, temperature=0.81, stop=['\n', 'Corregido:'], max_tokens = len(parrafo.text) + 5) correcciones.append(correccion['choices'][0]['text']) print(correcciones) ``` Aunque la corrección podría ser mejor, hay que tener en cuenta que estamos trabajando en español directamente en un modelo que se ha entrenado originalmente en inglés y que, aunque se defiende en español, no es demasiado bueno con él. Del texto generado, podría esperarse de una corrección profesional: ``` Nunca llegué a ver el árbol del que aquel hombre que me encontré bajo el puente me habló (dijo). Los perros no ladraban anoche durante la fiesta a la que tuve que ir. ``` Al menos, podemos utilizar el modelo para una corrección ortográfica muy básica que, en casos donde el texto requiera de mucha limpieza antes de una corrección de estilo propiamente dicha, podría ser interesante. Por ahora, vamos a guardar la corrección que nos ha facilitado GPT-3, volviendo a utilizar la biblioteca **doc**. ``` doc_CORR = Document() for parrafo in correcciones: doc_CORR.add_paragraph(parrafo) doc_CORR.save('./documentos/correccion.docx') ``` Ahora que tenemos la corrección hecha y guardada, toca traducir el texto; volveremos a hacer una petición a GPT-3 como la anterior, pero esta vez diciéndole que el texto lo quiero traducir, no corregir. Además, ya que estamos, le pediré que haga la traducción al inglés de su corrección y la mía, a ver qué tal hace cada una. Por supuesto, ambas traducciones también las guardaremos en un documento. ``` # ----------------------------------------------- TRADUCCIÓN CON GPT-3 traducciones = [] for parrafo in correcciones: entrada = 'Original: ' + parrafo + '\nTraducido al inglés:' traduccion = openai.Completion.create(engine='davinci', prompt=entrada, temperature=0.81, stop=['\n', 'Traducido al inglés:'], max_tokens = len(parrafo) + 5) traducciones.append(traduccion['choices'][0]['text']) print(traducciones) # ----------------------------------------------- GUARDADO DEL DOCUMENTO doc_TRAD = Document() for parrafo in traducciones: doc_TRAD.add_paragraph(parrafo) doc_TRAD.save('./documentos/traduccion_de_COR-GPT3.docx') ``` Podemos observar como hay cosas que, a veces, pasan, como que añade comillas que no debería tener y similares; al ser un lenguaje estocástico, no podemos garantizar que siempre mantenga la coherencia (a veces, vamos, va a hacer lo que le dé la gana) y generará alguna respuesta que no nos interese o tengamos que revisar. **Vamos a comprobar qué tal hace la traducción de la corrección de estilo que he hecho del texto.** ``` correccion_reyes = ['Nunca llegué a ver el árbol del que aquel hombre que me encontré bajo el puente me habló.', 'Los perros no ladraban anoche durante la fiesta a la que tuve que ir.'] # ----------------------------------------------- TRADUCCIÓN CON GPT-3 traducciones = [] for parrafo in correccion_reyes: entrada = 'Original: ' + parrafo + '\nTraducido al inglés:' traduccion = openai.Completion.create(engine='davinci', prompt=entrada, temperature=0.81, stop=['\n', 'Traducido al inglés:'], max_tokens = len(parrafo) + 5) traducciones.append(traduccion['choices'][0]['text']) print(traducciones) # ----------------------------------------------- GUARDADO DEL DOCUMENTO doc_TRAD = Document() for parrafo in traducciones: doc_TRAD.add_paragraph(parrafo) doc_TRAD.save('./documentos/traduccion_de_COR-Reyes.docx') ``` ### Conclusiones Aunque el trabajo de corrección es malo, sobre todo, si esperásemos una corrección de estilo a nivel profundo del texto; a la hora de traducir, GPT-3 realiza correctamente la traducción sin perder la coherencia del texto ni el estilo propio de cada uno de ellos (aunque este sea engorroso). A la hora de corregir, es mejorable (al menos atendiendo al texto en español), pero podría llegar a ser útil para una primera limpieza. También hay que tener en cuenta que, en estos casos, hemos estado trabajando con ejemplos no relacionados, cosa que no ocurre en un texto completo y de cierta complejidad. Hacer peticiones individuales de cada párrafo podría generar problemas de coherencia a lo largo de un texto de varios párrafos al perder las referencias; por lo tanto, es recomendable encontrar otra forma de hacer esta petición, cosa que veremos en el cuaderno **2.2_Ejemplo de un caso de uso_Traduccion**. Por otro lado, existe una limitación de palabras máximas que GPT-3 puede generar, lo que significa que, aunque queramos hacer una corrección de un texto íntegro, probablemente tengamos que hacer un proceso recurrente para poder enlazar lo que va traduciendo con lo anterior. <div class="alert alert-info"> <i class="fa fa-code"></i> **Este cuaderno ha sido creado con la ayuda de GPT-3.** <hr/> **Si tienes alguna duda relacionada con estos cuadernos, puedes contactar conmigo:** Mª Reyes R.P. (Erebyel). **[Web](https://www.erebyel.es/) • [Twitter](https://twitter.com/erebyel) • [Linkedin](https://www.linkedin.com/in/erebyel/)**. <hr/> <i class="fa fa-plus-circle"></i> **Fuentes:** * ***Documentación de la Beta de OpenAI***: https://beta.openai.com/docs/introduction </div>
github_jupyter
<a href="https://colab.research.google.com/github/ARCTraining/python-2021-04/blob/gh-pages/020_data_pandas.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Starting with Data ``` # Author: Martin Callaghan # Date: 2021-04-26 # Lesson link: https://arctraining.github.io/python-2021-04/02-starting-with-data/index.html # Connect my Google Drive to Google Colab from google.colab import drive drive.mount ('/content/gdrive') # import pandas library import pandas as pd # We use the Portal Project teaching dataset # https://figshare.com/articles/dataset/Portal_Project_Teaching_Database/1314459 # But first we need to download it !wget https://arctraining.github.io/python-2021-04/data/portal-teachingdb-master.zip # after downloading rename # portal-teachingdb-master.zip to data.zip # <<< do that in the file explorer window to the left of the notebook <<< # now unzip the file we've just downloaded !unzip data.zip # rename the folder portal-teachingdb-master to data # <<< do that in the file explorer window to the left of the notebook <<< # see what we have done !ls -l # The full filepath has a space in it (between Colab and Notebooks) which we need to accomodate # Either by including the path in quotes - like this: !ls "/content/gdrive/MyDrive/Colab Notebooks/intro-python-2021-04/data" # Or by 'escaping out' the space by putting a backslash \ in front of it # Like this: !ls /content/gdrive/MyDrive/Colab\ Notebooks/intro-python-2021-04/data # Move the data folder into your 'gdrive' # Move it into gdrive/MyDrive/Colab Notebooks/intro-python-2021-04 # First look at the data # Copy the 'path' to your Google Drive data folder and replace it with the 'data' folder we had last week pd.read_csv ('/content/gdrive/MyDrive/Colab Notebooks/intro-python-2021-04/data/surveys.csv') # Read in the df and assign to a variable surveys_df = pd.read_csv ('/content/gdrive/MyDrive/Colab Notebooks/intro-python-2021-04/data/surveys.csv') # But having to include this long path every time is a pain so filepath = "/content/gdrive/MyDrive/Colab Notebooks/intro-python-2021-04/data/" surveys_df = pd.read_csv (filepath + 'surveys.csv') # Quickly view the contents surveys_df # look at the fist few rows surveys_df.head(3) ``` ## Exploring the dataset ``` # What is surveys_df type(surveys_df) # What data types are in the dataframe surveys_df.dtypes # Challenge 1 surveys_df.columns surveys_df.shape surveys_df.tail() ``` ## Basic statistics on the dataframe ``` # Look at the columns surveys_df.columns # What are the unique values of species pd.unique(surveys_df['species_id']) # Challenge 2: Statistics # Create a list of unique plot_id site_names = pd.unique(surveys_df['plot_id']) site_names # How many unique site names are there? len (site_names) # Number of species len (pd.unique(surveys_df['species_id'])) # What is the difference between len(site_names) and surveys_df['plot_id'].nunique()? surveys_df['plot_id'].nunique() ``` ## Grouping in Pandas ``` # Homework: # Try the 'grouping in pandas' section *and* the Challenge - Summary Data surveys_df['weight'].describe() # If I want just one statistical metric surveys_df['weight'].max() surveys_df['weight'].mean() # To sumaarise by one or more variables: grouped_data_sex = surveys_df.groupby('sex') # Summary statistics grouped_data_sex.describe() # Get just the mean by sex grouped_data_sex.mean() # Challenge # Q1: How many recorded individuals are female F and how many male M grouped_data_sex.describe() # Q2: What happens when you group by two columns using the following syntax and then calculate mean values? grouped_data2 = surveys_df.groupby(['plot_id', 'sex']) grouped_data2.mean() # Q3: Summarize weight values for each site in your data. HINT: you can use the following syntax to only # create summary statistics for one column in your data. # by_site['weight'].describe() by_site = surveys_df.groupby(['plot_id']) by_site["weight"].describe() ``` ## Summary counts and and basic maths ``` # Count the number of samples per species species_count = surveys_df.groupby('species_id')['record_id'].count() print (species_count) species_count = surveys_df.groupby('species_id')[['record_id', 'month']].count() print (species_count) # Just for one species , eg. RM surveys_df.groupby('species_id')['record_id'].count()[['RM', 'UR']] # Basic Maths surveys_df['weight'] * 2 ``` ## Simple plotting in Python ``` # If you're using Notebooks then we need to make sure the plots appear in the browser %matplotlib inline # Create a quick bar chart species_count.plot (kind = 'bar'); # How many animals were captured in each site total_count = surveys_df.groupby('plot_id')['record_id'].count() total_count.plot(kind = 'bar'); ``` ## Indexing and slicing with pandas ``` a = [1, 2, 3, 4, 5] a[2] # view the available columns in the dataframe surveys_df.columns # indexing out a whole column # here we get an error because we're trying to index into the special .columns attribute surveys_df.columns['hindfoot_length'] # we can slice out a column using square brackets and the string of the column name surveys_df['hindfoot_length'] # here we sliced out the column name from the .columns attribute # but not the values of the column itself surveys_df.columns[-2] # we can also use dot notation to index out a single column surveys_df.hindfoot_length # indexing out multiple columns means passing a list # to the index brackets surveys_df[['hindfoot_length','month']] # we can also do this by assigning a list as a variable # and passing the variable to the index brackets desired_cols = ['hindfoot_length','month'] surveys_df[desired_cols] # indexing a column name that does not exist gives an error surveys_df['name'] surveys_df.head() # we can also use .iloc and .loc to index rows, columns # .iloc index by integer location # index out the row at the 0th index surveys_df.iloc[0] # here we've indexed the first 3 rows of the 6th column surveys_df.iloc[0:3,7] # .loc indexes by labels # here we've indexed from rows 0 to 3 (by label) # at the 'hindfoot_length' column surveys_df.loc[0:3,'hindfoot_length'] # using iloc to index the first three rows and columns surveys_df.iloc[0:3, 0:3] # we can make a copy of a subsetted data frame by using .copy() # this creates a separate object from our original dataframe surveys_df copy_of_first3 = surveys_df.iloc[0:3, 0:3].copy() # this notation just creates a reference to the original dataframe # so any changes we make to copy_of_first3 are actually made to the originak # surveys_df dataframe copy_of_first3 = surveys_df.iloc[0:3, 0:3] # we can also perform a vectorised boolean comparison and # return a boolean array surveys_df['species_id'] == 'NL' # we can use this to subset the dataframe for a specific species subset_df = surveys_df[surveys_df['species_id'] == 'NL'].copy() # we can also do this in two lines by assign a bool_mask variable bool_mask = surveys_df['species_id'] == 'NL' surveys_df[bool_mask] subset_df.head() # if you want to update your indexes on a subset # subset_df.reset_index(inplace=True) # we can set values of rows when indexing # but this can be quite dangerous so be careful subset_df.iloc[0:3, 0] = 0 # checking the changes subset_df.iloc[0:3] # we can use subsetting to generate quick summary data surveys_df[surveys_df['species_id'] == 'ZL'].count() surveys_df[surveys_df['species_id'] == 'ZL'] # we can also subset by multiple boolean values # using the `&` AND operator here which checks whether both conditions are True # we can use `|` OR operator to check if one of the two is True surveys_df[(surveys_df.year >= 1980) & (surveys_df.year <= 1985)] ## Homework num_rows = surveys_df[(surveys_df.year == 1999) & (surveys_df.weight <= 8)].shape[0] print(f"Number of rows from the year 1999 with weight less than or equal to 8 are: {num_rows}") # using .isin species_list = ['NL','PF','PE','AS','ST'] surveys_df['species_id'].isin(species_list) surveys_df[surveys_df['species_id'].isin(species_list)] # using masks to subset data # using .isin species_list = ['NL','PF','PE','AS','ST'] surveys_df['species_id'].isin(species_list) pd.isnull(surveys_df) ~pd.isnull(surveys_df).any(axis=1) surveys_df[~pd.isnull(surveys_df).any(axis=1)] surveys_df.dropna(axis=0, how='any') ``` ## Data Types and formats ``` type(surveys_df) surveys_df.dtypes print(5 + 5) print( 24 - 4) print( 5 / 9 ) print( 1 // 5) print( 2.5 + 1) # changing data types of a pandas column surveys_df['record_id'] surveys_df['record_id'].astype('float64') surveys_df.head() df = surveys_df['record_id'].astype('float64') surveys_df.head() surveys_df['record_id'] = surveys_df['record_id'].astype('float64') surveys_df['weight'].astype('int64') # handling NaN with subsetting surveys_df['weight'].dropna().astype('int64') surveys_df['weight'].fillna(0) surveys_df['weight'].dtype surveys_df.head() date_col = surveys_df['year'].astype(str) + '-' + surveys_df['month'].astype(str) + '-' + surveys_df['day'].astype(str) date_col_dt = pd.to_datetime(date_col) date_col_dt[0].month date_col_dt.apply(lambda x: x.day) surveys_df surveys_df[(surveys_df.year ==1997)]\ .to_csv('/content/gdrive/MyDrive/Colab Notebooks/intro-python-2021-04/data/1997-data.csv', index=False) ```
github_jupyter
# Univariate Density Estimation via Dirichlet Process Mixture ``` import numpy as np import matplotlib.pyplot as plt from pybmix.core.mixing import DirichletProcessMixing, StickBreakMixing from pybmix.core.hierarchy import UnivariateNormal from pybmix.core.mixture_model import MixtureModel np.random.seed(2021) ``` ## Data Generation We generate data from a two-component mixture model $$ y_i \sim \frac{1}{2} \mathcal N(-3, 1) + \frac{1}{2} \mathcal N(3, 1), \quad i=1, \ldots, 200 $$ ``` def sample_from_mixture(weigths, means, sds, n_data): n_comp = len(weigths) clus_alloc = np.random.choice(np.arange(n_comp), p=[0.5, 0.5], size=n_data) return np.random.normal(loc=means[clus_alloc], scale=sds[clus_alloc]) y = sample_from_mixture( np.array([0.5, 0.5]), np.array([-3, 3]), np.array([1, 1]), 200) plt.hist(y) plt.show() ``` ## The statistical model We assume the following model \begin{equation} \begin{aligned} y_i | \tilde{p} &\sim f(\cdot) = \int_{R \times R^+} \mathcal{N}(\cdot | \mu, \sigma^2) \tilde{p}(d\mu, d\sigma^2) \\ \tilde{p} &\sim DP(\alpha, G_0) \end{aligned} \end{equation} where $DP(\alpha, G_0)$ is the Dirichlet Process with base measure $\alpha G_0$. Given the stick-breaking represetation of the Dirichlet Process, the model is equivalently written as \begin{equation} \begin{aligned} y_i | \{w_h\}_h \{(\mu_h, \sigma^2_h)\}_h & \sim f(\cdot) = \sum_{h=1}^\infty w_h \mathcal{N}(\cdot | \mu_h, \sigma_h^2) \\ \{w_h\}_h &\sim GEM(\alpha) \\ \{(\mu_h, \sigma^2_h)\}_h &\sim G_0 \\ \end{aligned} \end{equation} In pybmix we take advantage of the second representation, and specify a MixtureModel in terms of a Mixing and a Hierarchy. The Mixing is the prior for the weights, while the Hierarchy combines the base measure $G_0$ with the kernel of the mixture (in this case, the univariate Gaussian distribution) Here, we assume that $\alpha = 5$ and $G_0(d\mu, d\sigma^2) = \mathcal N(d\mu | \mu_0, \lambda \sigma^2) \times IG(d\sigma^2 | a, b)$, i.e., $G_0$ is a normal-inverse gamma distribution. The parameters $(\mu_0, \lambda, a , b)$ of $G_0$ can be set automatically by the method 'make_default_fixed_params' which takes as input the observations and a "guess" on the number of clusters ``` mixing = DirichletProcessMixing(total_mass=5) hierarchy = UnivariateNormal() hierarchy.make_default_fixed_params(y, 2) mixture = MixtureModel(mixing, hierarchy) ``` ## Run MCMC simulations ``` mixture.run_mcmc(y, algorithm="Neal2", niter=2000, nburn=1000) ``` ## Get the density estimates 1) fix a grid where to estimate the densities 2) the method 'estimate_density' returns a matrix of shape [niter - nburn, len(grid)] ``` from pybmix.estimators.density_estimator import DensityEstimator grid = np.linspace(-6, 6, 500) dens_est = DensityEstimator(mixture) densities = dens_est.estimate_density(grid) ``` Plot some of the densities and their mean ``` plt.hist(y, density=True) plt.plot(grid, np.mean(densities, axis=0), lw=3, label="predictive density") idxs = [5, 100, 300] for idx in idxs: plt.plot(grid, densities[idx, :], "--", label="iteration: {0}".format(idx)) plt.legend() plt.show() ```
github_jupyter
``` # ATTENTION: Please do not alter any of the provided code in the exercise. Only add your own code where indicated # ATTENTION: Please do not add or remove any cells in the exercise. The grader will check specific cells based on the cell position. # ATTENTION: Please use the provided epoch values when training. import csv import numpy as np import tensorflow as tf from tensorflow.keras.preprocessing.image import ImageDataGenerator from os import getcwd def get_data(filename): # You will need to write code that will read the file passed # into this function. The first line contains the column headers # so you should ignore it # Each successive line contians 785 comma separated values between 0 and 255 # The first value is the label # The rest are the pixel values for that picture # The function will return 2 np.array types. One with all the labels # One with all the images # # Tips: # If you read a full line (as 'row') then row[0] has the label # and row[1:785] has the 784 pixel values # Take a look at np.array_split to turn the 784 pixels into 28x28 # You are reading in strings, but need the values to be floats # Check out np.array().astype for a conversion with open(filename) as training_file: reader = csv.reader(training_file, delimiter=',') imgs = [] labels = [] next(reader,None) for row in reader: label = row[0] data = row[1:] img = np.array(data).reshape((28,28)) imgs.append(img) labels.append(label) images = np.array(imgs).astype(float) labels = np.array(labels).astype(float) return images, labels path_sign_mnist_train = f"{getcwd()}/../tmp2/sign_mnist_train.csv" path_sign_mnist_test = f"{getcwd()}/../tmp2/sign_mnist_test.csv" training_images, training_labels = get_data(path_sign_mnist_train) testing_images, testing_labels = get_data(path_sign_mnist_test) # Keep these print(training_images.shape) print(training_labels.shape) print(testing_images.shape) print(testing_labels.shape) # Their output should be: # (27455, 28, 28) # (27455,) # (7172, 28, 28) # (7172,) # In this section you will have to add another dimension to the data # So, for example, if your array is (10000, 28, 28) # You will need to make it (10000, 28, 28, 1) # Hint: np.expand_dims training_images = np.expand_dims(training_images, axis=3) testing_images = np.expand_dims(testing_images, axis=3) # Create an ImageDataGenerator and do Image Augmentation train_datagen = ImageDataGenerator( rescale=1/255, rotation_range=0.2, shear_range=0.2, height_shift_range=0.2, width_shift_range=0.2, zoom_range=0.2, horizontal_flip=True ) validation_datagen = ImageDataGenerator( rescale=1/255 ) # Keep These print(training_images.shape) print(testing_images.shape) # Their output should be: # (27455, 28, 28, 1) # (7172, 28, 28, 1) # Define the model # Use no more than 2 Conv2D and 2 MaxPooling2D model = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(32,3,activation='relu',input_shape=(28,28,1)), tf.keras.layers.MaxPool2D(), tf.keras.layers.Conv2D(32,3,activation='relu'), tf.keras.layers.MaxPool2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dense(128,activation='relu'), tf.keras.layers.Dense(26,activation='softmax') ]) # Compile Model. model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) train_gen = train_datagen.flow(training_images, training_labels, ) val_gen = validation_datagen.flow(testing_images, testing_labels, ) # Train the Model history = model.fit_generator(train_gen, epochs=2, validation_data=val_gen) model.evaluate(testing_images, testing_labels, verbose=0) # Plot the chart for accuracy and loss on both training and validation %matplotlib inline import matplotlib.pyplot as plt acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(acc)) plt.plot(epochs, acc, 'r', label='Training accuracy') plt.plot(epochs, val_acc, 'b', label='Validation accuracy') plt.title('Training and validation accuracy') plt.legend() plt.figure() plt.plot(epochs, loss, 'r', label='Training Loss') plt.plot(epochs, val_loss, 'b', label='Validation Loss') plt.title('Training and validation loss') plt.legend() plt.show() ``` # Submission Instructions ``` # Now click the 'Submit Assignment' button above. ``` # When you're done or would like to take a break, please run the two cells below to save your work and close the Notebook. This will free up resources for your fellow learners. ``` %%javascript <!-- Save the notebook --> IPython.notebook.save_checkpoint(); %%javascript IPython.notebook.session.delete(); window.onbeforeunload = null setTimeout(function() { window.close(); }, 1000); ```
github_jupyter
# Deep Learning Toolkit for Splunk - Rapids UMAP This notebook contains an example workflow how to work on custom containerized code that seamlessly interfaces with the Deep Learning Toolkit for Splunk. Note: By default every time you save this notebook the cells are exported into a python module which is then invoked by Splunk MLTK commands like <code> | fit ... | apply ... | summary </code>. Please read the Model Development Guide in the Deep Learning Toolkit app for more information. ## Stage 0 - import libraries At stage 0 we define all imports necessary to run our subsequent code depending on various libraries. ``` # this definition exposes all python module imports that should be available in all subsequent commands import json import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import datashader as ds import datashader.transfer_functions as tf import base64 import io import cudf from cuml.manifold.umap import UMAP as cumlUMAP # ... # global constants MODEL_DIRECTORY = "/srv/app/model/data/" # THIS CELL IS NOT EXPORTED - free notebook cell for testing or development purposes print("numpy version: " + np.__version__) print("pandas version: " + pd.__version__) ``` ## Stage 1 - get a data sample from Splunk In Splunk run a search to pipe a dataset into your notebook environment. Note: mode=stage is used in the | fit command to do this. | inputlookup dga_domains_features.csv</br> | fit MLTKContainer algo=rapids_umap plot="datashader" class from ut* PC* into app:dga_rapids_umap After you run this search your data set sample is available as a csv inside the container to develop your model. The name is taken from the into keyword ("barebone_model" in the example above) or set to "default" if no into keyword is present. This step is intended to work with a subset of your data to create your custom model. ``` # this cell is not executed from MLTK and should only be used for staging data into the notebook environment def stage(name): with open("data/"+name+".csv", 'r') as f: df = pd.read_csv(f) with open("data/"+name+".json", 'r') as f: param = json.load(f) return df, param # THIS CELL IS NOT EXPORTED - free notebook cell for testing or development purposes df, param = stage("dga_rapids_umap") print(df.describe()) print(param) param['feature_variables'] + param['target_variables'] ``` ## Stage 2 - create and initialize a model ``` # initialize your model # available inputs: data and parameters # returns the model object which will be used as a reference to call fit, apply and summary subsequently def init(df,param): model = {} return model # THIS CELL IS NOT EXPORTED - free notebook cell for testing or development purposes model = init(df,param) print(model) ``` ## Stage 3 - fit the model ``` # train your model # returns a fit info json object and may modify the model object def fit(model,df,param): # model.fit() info = {"message": "no fit needed"} return info # THIS CELL IS NOT EXPORTED - free notebook cell for testing or development purposes print(fit(model,df,param)) ``` ## Stage 4 - apply the model ``` # apply your model # returns the calculated results def plot_to_base64(plot): pic_IObytes = io.BytesIO() if hasattr(plot,'fig'): plot.fig.savefig(pic_IObytes, format='png') elif hasattr(plot,'figure'): plot.figure.savefig(pic_IObytes, format='png') pic_IObytes.seek(0) pic_hash = base64.b64encode(pic_IObytes.read()) pic_IObytes.close() return pic_hash def plot_datashader_as_base64(df,param): cat = param['target_variables'][0] dfr = df.astype({cat: 'category'}) squ = 25.0 dfr = dfr[dfr["UMAP1"].between(-squ, squ) & dfr["UMAP2"].between(-squ, squ)] cvs = ds.Canvas(plot_width=800, plot_height=600) agg = cvs.points(dfr, 'UMAP1', 'UMAP2', ds.count_cat(cat)) color_key_dga = {'dga':'red', 'legit':'blue'} img = tf.shade(agg, cmap=color_key_dga, how="eq_hist") #img.plot() pic_IObytes = img.to_bytesio() pic_IObytes.seek(0) pic_hash = base64.b64encode(pic_IObytes.read()) return str(pic_hash) def plot_scatter_as_base64(df,param): hue=None if 'options' in param: if 'target_variable' in param['options']: hue=str(param['options']['target_variable'][0]) #plot = sns.pairplot(df,hue=hue, palette="husl") sns.set() plot = sns.scatterplot(x="UMAP1", y="UMAP2", data=df) res = str(plot_to_base64(plot)) return res def apply(model,df,param): # param['options']['model_name'] dfeatures = df[param['feature_variables']] cuml_umap = cumlUMAP() #model['umap'] = cuml_umap gdf = cudf.DataFrame.from_pandas(df) embedding = cuml_umap.fit_transform(gdf) result = embedding.rename(columns={0: "UMAP1", 1: "UMAP2"}).to_pandas() result_plot = df[param['target_variables']].join(result) if 'plot' in param['options']['params']: plots = param['options']['params']['plot'].lstrip("\"").rstrip("\"").lower().split(',') for plot in plots: if plot=='scatter': model["plot_scatter"] = plot_scatter_as_base64(result,param) elif plot=='datashader': model["plot_datashader"] = plot_datashader_as_base64(result_plot,param) else: continue return result_plot # THIS CELL IS NOT EXPORTED - free notebook cell for testing or development purposes result = apply(model,df,param) result plot = sns.scatterplot(x="UMAP1", y="UMAP2", data=result) plot ``` ## Stage 5 - save the model ``` # save model to name in expected convention "<algo_name>_<model_name>" def save(model,name): with open(MODEL_DIRECTORY + name + ".json", 'w') as file: json.dump(model, file) return model model = save(model,'umap_dga') ``` ## Stage 6 - load the model ``` # load model from name in expected convention "<algo_name>_<model_name>" def load(name): model = {} with open(MODEL_DIRECTORY + name + ".json", 'r') as file: model = json.load(file) return model model = load('umap_dga') ``` ## Stage 7 - provide a summary of the model ``` # return a model summary def summary(model=None): returns = {"version": {"numpy": np.__version__, "pandas": pd.__version__} } return returns ``` ## End of Stages All subsequent cells are not tagged and can be used for further freeform code ``` print(summary(model)) ```
github_jupyter
# Системы контроля версий **Version Control System, VCS** Современные VCS позволяют * сохранить состояние файла и нужные метаданные (кто и когда сделал изменение) * откатить файл к предыдущей версии если что-то пошло не так * откатить целый проект к нужному состоянию * сраванивать разные версии файла между собой ## Локальные VCS Самые ранние VCS начали появляться в 70-х и работали в пределах на одной машине. Наиболее яркий представитель - **RCS**. Все работало через запись дельты между версиями файлов. Пример вычисления дельты с помощью **diff** ``` !echo "Курс USDRUB_TOM\n был 30.23\n По кайфу\n Прекрасная сырьевая экономика" > curr_v1.txt !echo "Курс USDRUB_TOM\n стал 75.80\nЭх, трудно стало жить\n По кайфу\n Прекрасная сырьевая экономика?" > curr_v2.txt !diff -u curr_v1.txt curr_v2.txt !rm curr_v1.txt curr_v2.txt ``` ## Централизированные VCS <img src="./ipynb_content/shared.png" alt="SO" width="500"/> Они появились с развитием командной разработки, и стали стандартом на протяжении 90-х и первой половины нулевых. Возможности: * все всегда знают кто и что делает * единое пространство для контроля Минусы: * клиенты хранят только одно состояние репозитория * единая точка отказа - сервер недоступен, никто не внесет новых изменений * история изменений хранится только на сервере - риск потерять все за один раз Представители: * Subversion (SVN) * CVS * Microsoft Team Foundation Server (TFS) * SourceSafe ## Распределенные VCS **Distributed VCS (DVCS)** <img src="./ipynb_content/distr.png" alt="SO" width="500"/> Все участники DVCS хранят у себя историю изменений, не только на общем сервере. Более того, каждый участник может работать с несколькими удаленными репозиториями. Например, экспериментальные части проекта отправляются на один сервер, а стабильные - на другой. Реализации: * Git * Mercurial * Bazaar ## GIt Самая популярная VCS и одна из самых мощных. ### История появления В 2005 году разработчики ядра Linux были вынуждены мигрировать с VCS BitKeeper. Разработчик BitKeeper предложил неприемлимые условия проекту, что взбесило их. Так появился Git, при разработке которого преследовались: * скорость * простота архитектуры * поддержка большого числа веток (>> 1000) * полная распределенность * способность поддерживать огромные проекты (~28 млн строк - Linux Kernel) Разумеется, со времен первого релиза, Git стал только лучше :) ### Главный миф о Git Говорят, что Git - очень сложная штука. Когда-то это действительно было так. Однако сейчас, существует разделение команд на два класса - **plumber** и **porcelain**. Plumber-команды позволяют работать с Git на самом низком уровне. Porcelain-команды работают поверх Plumber-слоя. В большинстве кейсов для жизни хватает **porcelain**-команд ## Поработаем с GIt ### Установка По умолчанию, Git в комплекте со многими Linux-дистрибутивами и macOS. Для Windows также есть пакет, при установке нужно выбрать режим терминала (встроенный windows/cygwin). ### Начало работы Для примеров мы будем использовать два репозитория - курсовой и пустой ``` !git clone https://github.com/kib-courses/python_developer.git /tmp/git_workshop ``` Мы только что склонировали удаленный репозиторий. Посмотрим что получилось: ``` !ls -al /tmp/git_workshop ``` Узнаем все файлы, которые уже видели на GitHub. При создании репозитория или клонировании существующего, создается каталог **.git**. Все что необходимо, **git** хранит именно там ``` !ls -al /tmp/git_workshop/.git ``` ## Основные понятия GIt (from low-level to high) **На основе Pro Git, глава Git Internals** Рассмотрим три основных вида объектов - blob, tree, commit Для начала, создадим чистый Git-репозиторий. И прежде чем смотреть дальше, нужно представить что Git работает как файловая система ``` !git init /tmp/git_clean !cd /tmp/git_clean/ && ls -al !cd /tmp/git_clean/ && ls -al .git ``` ### blob objects Хранение сырых данных в Git устроено как в обычном словаре. Каждому блобу выдается метка. Для добавления есть plumbing-команда hash-object ``` !cd /tmp/git_clean/ && echo 'test content' | git hash-object -w --stdin !cd /tmp/git_clean/ && ls -al .git/objects/d6 ``` Наш объект добавился в каталог objects/d6. Взглянуть на него снова мы сможем с помощью его метки и команды cat-file ``` !cd /tmp/git_clean/ && git cat-file -p d670460b4b4aece5915caf5c68d12f560a9fe3e4 ``` Тип объекта - blob. **Вся информация о файлах хранится в таких блобах** ``` !cd /tmp/git_clean/ && git cat-file -t d670460b4b4aece5915caf5c68d12f560a9fe3e4 ``` ### tree objects Отлично, что git работает как обычный словарь. Но как нам сохранить отношение между блобами и названиями файлов? **tree-объект** - своего рода каталог, который знает свои файлы, их режим чтения, и где получить их содержимое Их SHA-1 можно получить из коммитов (**о них чуть ниже**). Посмотрим на текущее состояние репозитория ``` !cd /tmp/git_workshop/ && git cat-file -p HEAD !cd /tmp/git_workshop/ && git cat-file -p f6ea389a06e9b5b0f943b554c0f727a304c880b6 ``` Предыдущее состояние репозитория ``` !cd /tmp/git_workshop/ && git cat-file -p 2bd4ec2df932ed1d02b6bbc4f9b133f50ac1b440 !cd /tmp/git_workshop/ && git cat-file -p a4e6162c8b3cc7a9ba4c635ff2e965bc5da9b558 ``` Подкаталог в репозитории - дочерний tree-объект. Заглянем в каталог **lecture_2**, когда с ним что-то делал Иван ``` !cd /tmp/git_workshop/ && git cat-file -p cd1a372d5ab06057d39e780660883ecf566913e4 ``` staging-area > tree object ### commit-objects Единомоментный набор изменений выражается в Git с помощью коммита. В нем есть ссылка на tree-объект, id предыдщего коммита, время и инфа о создателе. Мы уже добавляли новый объект в пустой репозиторий (ячейка №36). Создадим tree-объект. обновим индекс и создадим коммит В каждый момент времени активен tree-объект - working tree ``` !cd /tmp/git_clean/ && git update-index --add --cacheinfo 100644 d670460b4b4aece5915caf5c68d12f560a9fe3e4 hello.wrd !cd /tmp/git_clean/ && git write-tree !cd /tmp/git_clean/ && git cat-file -p 31ce1651aa5db4dde43a532a5bb30921aeb04f32 ``` Снова добавим новую фигню и обновим working tree ``` !cd /tmp/git_clean/ && echo 'great_day' > pumpkins.wrd !cd /tmp/git_clean/ && git update-index --add pumpkins.wrd !cd /tmp/git_clean/ && git write-tree !cd /tmp/git_clean/ && git cat-file -p aed1646ec85d9097531e7230ee9c611a1a2ee8a8 ``` Создадим вложенный каталог 'whoa' и поместим в него файлы из предыдущего tree ``` !cd /tmp/git_clean/ && git read-tree --prefix=whoa 31ce1651aa5db4dde43a532a5bb30921aeb04f32 !cd /tmp/git_clean/ && git write-tree !cd /tmp/git_clean/ && git cat-file -p eef1341de0ac629cc4bbc0ed82faa37056feb7de !cd /tmp/git_clean/ && git cat-file -p 31ce1651aa5db4dde43a532a5bb30921aeb04f32 ``` Сделаем собственно коммит ``` !cd /tmp/git_clean/ && echo 'First plumber commit' | git commit-tree eef1341de0ac629cc4bbc0ed82faa37056feb7de !cd /tmp/git_clean/ && git cat-file -p 637c1127f20ec446c94addf41d0b6e9517ff9b00 ``` Вот что получилось: <img src="./ipynb_content/plumbing.png" alt="SO" width="500"/> ### references Очень напряжно помнить короткий sha1 коммита. В git есть reference-объекты. Давайте создадим указатель 'master', который будет ссылаться на наш рукотворный коммит ``` !cd /tmp/git_clean/ && ls -al .git/refs !cd /tmp/git_clean/ && echo '637c1127f20ec446c94addf41d0b6e9517ff9b00' > .git/refs/heads/master !cd /tmp/git_clean/ && git log --pretty=oneline master ``` Ветки тоже являются своего рода указателями ``` !cd /tmp/git_workshop/ && cat .git/refs/heads/master ``` ## Жизненный цикл данных в Git Дано: подготовленный локальный репозиторий, в котором происходят изменения. <img src="./ipynb_content/cycle.png" alt="SO" width="700"/> Каждый файл проходит через 4 состояния: * untracked Файл создан и не добавлен в репозиторий * staged Файл добавлен в индекс (еще называют staging area, working tree). Но пока он не содержится ни в одном коммите * unmodified Файл уже содержится в репозитории, и не был изменен. * modified Файл уже содержится в репозитории, и претерпел изменения. **Важно помнить - Git не хранит изменения дельтами. При каждом изменении индексированного файла в базу добавляется новый blob** ### Переходные состояния файлов. Индекс и рабочая директория <img src="./ipynb_content/commit.png" alt="SO" width="700"/> ``` !git init /tmp/git_clean2 !cd /tmp/git_clean2/ && touch buffalo ``` Могут быть нюансы при последовательном выполнении Plumbing-команд, поэтому получаем необычный результат - файлы удалены из индекса ``` !cd /tmp/git_clean2/ && git status ``` Добавить untracked файл в индекс: ``` !cd /tmp/git_clean2/ && git add buffalo !cd /tmp/git_clean2/ && git status ``` Попробуем совершить коммит ``` !cd /tmp/git_clean2/ && git commit -m "First commit" ``` Изменим проиндексированный файл ``` !cd /tmp/git_clean2/ && echo "123" > buffalo !cd /tmp/git_clean2/ && git status ``` ### remotes and push Время закинуть изменения на удаленку. Создадим пустую репу на GitLab.com. После чего сможем добавить remote origin и передать ему изменения Опция -u создаем маппинг на remote-версию ветки ``` !cd /tmp/git_clean2/ && git remote add origin2 git@gitlab.com:lancerx/garbage_repo.git !cd /tmp/git_clean2/ && git push -u origin2 master !cd /tmp/git_clean2/ && ls -al .git/refs/remotes/origin2 ``` ### fetch, log and pull Сделаем имитацию изменений репозитория через GitLab UI ``` !cd /tmp/git_clean2/ && git fetch origin2 ``` Наш локальный master отстает от origin2 на один коммит ``` !cd /tmp/git_clean2/ && git log --oneline --decorate --graph --all ``` Накатим новые изменения. Не получится из-за пересекающихся изменений ``` !cd /tmp/git_clean2/ && git pull origin2 !cd /tmp/git_clean2/ && git status ``` Один из вариантов - очистить индекс от пересекающихся изменений ``` !cd /tmp/git_clean2/ && cp buffalo buf.bkp !cd /tmp/git_clean2/ && git checkout HEAD -- buffalo !cd /tmp/git_clean2/ && git status !cd /tmp/git_clean2/ && git pull origin2 !cd /tmp/git_clean2/ && git log --oneline --decorate --graph --all ```
github_jupyter
``` %matplotlib inline ``` # OT for image color adaptation This example presents a way of transferring colors between two images with Optimal Transport as introduced in [6] [6] Ferradans, S., Papadakis, N., Peyre, G., & Aujol, J. F. (2014). Regularized discrete optimal transport. SIAM Journal on Imaging Sciences, 7(3), 1853-1882. ``` # Authors: Remi Flamary <remi.flamary@unice.fr> # Stanislas Chambon <stan.chambon@gmail.com> # # License: MIT License import numpy as np from scipy import ndimage import matplotlib.pylab as pl import ot r = np.random.RandomState(42) def im2mat(I): """Converts an image to matrix (one pixel per line)""" return I.reshape((I.shape[0] * I.shape[1], I.shape[2])) def mat2im(X, shape): """Converts back a matrix to an image""" return X.reshape(shape) def minmax(I): return np.clip(I, 0, 1) ``` Generate data ------------- ``` # Loading images I1 = ndimage.imread('../data/ocean_day.jpg').astype(np.float64) / 256 I2 = ndimage.imread('../data/ocean_sunset.jpg').astype(np.float64) / 256 X1 = im2mat(I1) X2 = im2mat(I2) # training samples nb = 1000 idx1 = r.randint(X1.shape[0], size=(nb,)) idx2 = r.randint(X2.shape[0], size=(nb,)) Xs = X1[idx1, :] Xt = X2[idx2, :] ``` Plot original image ------------------- ``` pl.figure(1, figsize=(6.4, 3)) pl.subplot(1, 2, 1) pl.imshow(I1) pl.axis('off') pl.title('Image 1') pl.subplot(1, 2, 2) pl.imshow(I2) pl.axis('off') pl.title('Image 2') ``` Scatter plot of colors ---------------------- ``` pl.figure(2, figsize=(6.4, 3)) pl.subplot(1, 2, 1) pl.scatter(Xs[:, 0], Xs[:, 2], c=Xs) pl.axis([0, 1, 0, 1]) pl.xlabel('Red') pl.ylabel('Blue') pl.title('Image 1') pl.subplot(1, 2, 2) pl.scatter(Xt[:, 0], Xt[:, 2], c=Xt) pl.axis([0, 1, 0, 1]) pl.xlabel('Red') pl.ylabel('Blue') pl.title('Image 2') pl.tight_layout() ``` Instantiate the different transport algorithms and fit them ----------------------------------------------------------- ``` # EMDTransport ot_emd = ot.da.EMDTransport() ot_emd.fit(Xs=Xs, Xt=Xt) # SinkhornTransport ot_sinkhorn = ot.da.SinkhornTransport(reg_e=1e-1) ot_sinkhorn.fit(Xs=Xs, Xt=Xt) # prediction between images (using out of sample prediction as in [6]) transp_Xs_emd = ot_emd.transform(Xs=X1) transp_Xt_emd = ot_emd.inverse_transform(Xt=X2) transp_Xs_sinkhorn = ot_sinkhorn.transform(Xs=X1) transp_Xt_sinkhorn = ot_sinkhorn.inverse_transform(Xt=X2) I1t = minmax(mat2im(transp_Xs_emd, I1.shape)) I2t = minmax(mat2im(transp_Xt_emd, I2.shape)) I1te = minmax(mat2im(transp_Xs_sinkhorn, I1.shape)) I2te = minmax(mat2im(transp_Xt_sinkhorn, I2.shape)) ``` Plot new images --------------- ``` pl.figure(3, figsize=(8, 4)) pl.subplot(2, 3, 1) pl.imshow(I1) pl.axis('off') pl.title('Image 1') pl.subplot(2, 3, 2) pl.imshow(I1t) pl.axis('off') pl.title('Image 1 Adapt') pl.subplot(2, 3, 3) pl.imshow(I1te) pl.axis('off') pl.title('Image 1 Adapt (reg)') pl.subplot(2, 3, 4) pl.imshow(I2) pl.axis('off') pl.title('Image 2') pl.subplot(2, 3, 5) pl.imshow(I2t) pl.axis('off') pl.title('Image 2 Adapt') pl.subplot(2, 3, 6) pl.imshow(I2te) pl.axis('off') pl.title('Image 2 Adapt (reg)') pl.tight_layout() pl.show() ```
github_jupyter
##### Copyright 2018 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #@title MIT License # # Copyright (c) 2017 François Chollet # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER # DEALINGS IN THE SOFTWARE. ``` # はじめてのニューラルネットワーク:分類問題の初歩 <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/keras/basic_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/ja/tutorials/keras/basic_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/ja/tutorials/keras/basic_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> Note: これらのドキュメントは私たちTensorFlowコミュニティが翻訳したものです。コミュニティによる 翻訳は**ベストエフォート**であるため、この翻訳が正確であることや[英語の公式ドキュメント](https://www.tensorflow.org/?hl=en)の 最新の状態を反映したものであることを保証することはできません。 この翻訳の品質を向上させるためのご意見をお持ちの方は、GitHubリポジトリ[tensorflow/docs](https://github.com/tensorflow/docs)にプルリクエストをお送りください。 コミュニティによる翻訳やレビューに参加していただける方は、 [docs-ja@tensorflow.org メーリングリスト](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ja)にご連絡ください。 このガイドでは、スニーカーやシャツなど、身に着けるものの写真を分類するニューラルネットワークのモデルを訓練します。すべての詳細を理解できなくても問題ありません。TensorFlowの全体を早足で掴むためのもので、詳細についてはあとから見ていくことになります。 このガイドでは、TensorFlowのモデルを構築し訓練するためのハイレベルのAPIである [tf.keras](https://www.tensorflow.org/guide/keras)を使用します。 ``` from __future__ import absolute_import, division, print_function, unicode_literals # TensorFlow と tf.keras のインポート import tensorflow as tf from tensorflow import keras # ヘルパーライブラリのインポート import numpy as np import matplotlib.pyplot as plt print(tf.__version__) ``` ## ファッションMNISTデータセットのロード このガイドでは、[Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist)を使用します。Fashion MNISTには10カテゴリーの白黒画像70,000枚が含まれています。それぞれは下図のような1枚に付き1種類の衣料品が写っている低解像度(28×28ピクセル)の画像です。 <table> <tr><td> <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> </td></tr> <tr><td align="center"> <b>Figure 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>&nbsp; </td></tr> </table> Fashion MNISTは、画像処理のための機械学習での"Hello, World"としてしばしば登場する[MNIST](http://yann.lecun.com/exdb/mnist/) データセットの代替として開発されたものです。MNISTデータセットは手書きの数字(0, 1, 2 など)から構成されており、そのフォーマットはこれから使うFashion MNISTと全く同じです。 Fashion MNISTを使うのは、目先を変える意味もありますが、普通のMNISTよりも少しだけ手応えがあるからでもあります。どちらのデータセットも比較的小さく、アルゴリズムが期待したとおりに機能するかどうかを確かめるために使われます。プログラムのテストやデバッグのためには、よい出発点になります。 ここでは、60,000枚の画像を訓練に、10,000枚の画像を、ネットワークが学習した画像分類の正確性を評価するのに使います。TensorFlowを使うと、下記のようにFashion MNISTのデータを簡単にインポートし、ロードすることが出来ます。 ``` fashion_mnist = keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() ``` ロードしたデータセットは、NumPy配列になります。 * `train_images` と `train_labels` の2つの配列は、モデルの訓練に使用される**訓練用データセット**です。 * 訓練されたモデルは、 `test_images` と `test_labels` 配列からなる**テスト用データセット**を使ってテストします。 画像は28×28のNumPy配列から構成されています。それぞれのピクセルの値は0から255の間の整数です。**ラベル**(label)は、0から9までの整数の配列です。それぞれの数字が下表のように、衣料品の**クラス**(class)に対応しています。 <table> <tr> <th>Label</th> <th>Class</th> </tr> <tr> <td>0</td> <td>T-shirt/top</td> </tr> <tr> <td>1</td> <td>Trouser</td> </tr> <tr> <td>2</td> <td>Pullover</td> </tr> <tr> <td>3</td> <td>Dress</td> </tr> <tr> <td>4</td> <td>Coat</td> </tr> <tr> <td>5</td> <td>Sandal</td> </tr> <tr> <td>6</td> <td>Shirt</td> </tr> <tr> <td>7</td> <td>Sneaker</td> </tr> <tr> <td>8</td> <td>Bag</td> </tr> <tr> <td>9</td> <td>Ankle boot</td> </tr> </table> 画像はそれぞれ単一のラベルに分類されます。データセットには上記の**クラス名**が含まれていないため、後ほど画像を出力するときのために、クラス名を保存しておきます。 ``` class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] ``` ## データの観察 モデルの訓練を行う前に、データセットのフォーマットを見てみましょう。下記のように、訓練用データセットには28×28ピクセルの画像が60,000枚含まれています。 ``` train_images.shape ``` 同様に、訓練用データセットには60,000個のラベルが含まれます。 ``` len(train_labels) ``` ラベルはそれぞれ、0から9までの間の整数です。 ``` train_labels ``` テスト用データセットには、10,000枚の画像が含まれます。画像は28×28ピクセルで構成されています。 ``` test_images.shape ``` テスト用データセットには10,000個のラベルが含まれます。 ``` len(test_labels) ``` ## データの前処理 ネットワークを訓練する前に、データを前処理する必要があります。最初の画像を調べてみればわかるように、ピクセルの値は0から255の間の数値です。 ``` plt.figure() plt.imshow(train_images[0]) plt.colorbar() plt.gca().grid(False) plt.show() ``` ニューラルネットワークにデータを投入する前に、これらの値を0から1までの範囲にスケールします。そのためには、画素の値を255で割ります。 **訓練用データセット**と**テスト用データセット**は、同じように前処理することが重要です。 ``` train_images = train_images / 255.0 test_images = test_images / 255.0 ``` **訓練用データセット**の最初の25枚の画像を、クラス名付きで表示してみましょう。ネットワークを構築・訓練する前に、データが正しいフォーマットになっていることを確認します。 ``` plt.figure(figsize=(10,10)) for i in range(25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(train_images[i], cmap=plt.cm.binary) plt.xlabel(class_names[train_labels[i]]) plt.show() ``` ## モデルの構築 ニューラルネットワークを構築するには、まずモデルの階層を定義し、その後モデルをコンパイルします。 ### 層の設定 ニューラルネットワークを形作る基本的な構成要素は**層**(layer)です。層は、入力されたデータから「表現」を抽出します。それらの「表現」は、今取り組もうとしている問題に対して、より「意味のある」ものであることが期待されます。 ディープラーニングモデルのほとんどは、単純な層の積み重ねで構成されています。`tf.keras.layers.Dense` のような層のほとんどには、訓練中に学習されるパラメータが存在します。 ``` model = keras.Sequential([ keras.layers.Flatten(input_shape=(28, 28)), keras.layers.Dense(128, activation=tf.nn.relu), keras.layers.Dense(10, activation=tf.nn.softmax) ]) ``` このネットワークの最初の層は、`tf.keras.layers.Flatten` です。この層は、画像を(28×28ピクセルの)2次元配列から、28×28=784ピクセルの、1次元配列に変換します。この層が、画像の中に積まれているピクセルの行を取り崩し、横に並べると考えてください。この層には学習すべきパラメータはなく、ただデータのフォーマット変換を行うだけです。 ピクセルが1次元化されたあと、ネットワークは2つの `tf.keras.layers.Dense` 層となります。これらの層は、密結合あるいは全結合されたニューロンの層となります。最初の `Dense` 層には、128個のノード(あるはニューロン)があります。最後の層でもある2番めの層は、10ノードの**softmax**層です。この層は、合計が1になる10個の確率の配列を返します。それぞれのノードは、今見ている画像が10個のクラスのひとつひとつに属する確率を出力します。 ### モデルのコンパイル モデルが訓練できるようになるには、いくつかの設定を追加する必要があります。それらの設定は、モデルの**コンパイル**(compile)時に追加されます。 * **損失関数**(loss function) —訓練中にモデルがどれくらい正確かを測定します。この関数の値を最小化することにより、訓練中のモデルを正しい方向に向かわせようというわけです。 * **オプティマイザ**(optimizer)—モデルが見ているデータと、損失関数の値から、どのようにモデルを更新するかを決定します。 * **メトリクス**(metrics) —訓練とテストのステップを監視するのに使用します。下記の例では*accuracy* (正解率)、つまり、画像が正しく分類された比率を使用しています。 ``` model.compile(optimizer=tf.keras.optimizers.Adam(), loss='sparse_categorical_crossentropy', metrics=['accuracy']) ``` ## モデルの訓練 ニューラルネットワークの訓練には次のようなステップが必要です。 1. モデルに訓練用データを投入します—この例では `train_images` と `train_labels` の2つの配列です。 2. モデルは、画像とラベルの対応関係を学習します。 3. モデルにテスト用データセットの予測(分類)を行わせます—この例では `test_images` 配列です。その後、予測結果と `test_labels` 配列を照合します。 訓練を開始するには、`model.fit` メソッドを呼び出します。モデルを訓練用データに "fit"(適合)させるという意味です。 ``` model.fit(train_images, train_labels, epochs=5) ``` モデルの訓練の進行とともに、損失値と正解率が表示されます。このモデルの場合、訓練用データでは0.88(すなわち88%)の正解率に達します。 ## 正解率の評価 次に、テスト用データセットに対するモデルの性能を比較します。 ``` test_loss, test_acc = model.evaluate(test_images, test_labels) print('Test accuracy:', test_acc) ``` ご覧の通り、テスト用データセットでの正解率は、訓練用データセットでの正解率よりも少し低くなります。この訓練時の正解率とテスト時の正解率の差は、**過学習**(over fitting)の一例です。過学習とは、新しいデータに対する機械学習モデルの性能が、訓練時と比較して低下する現象です。 ## 予測する モデルの訓練が終わったら、そのモデルを使って画像の分類予測を行うことが出来ます。 ``` predictions = model.predict(test_images) ``` これは、モデルがテスト用データセットの画像のひとつひとつを分類予測した結果です。最初の予測を見てみましょう。 ``` predictions[0] ``` 予測結果は、10個の数字の配列です。これは、その画像が10の衣料品の種類のそれぞれに該当するかの「確信度」を表しています。どのラベルが一番確信度が高いかを見てみましょう。 ``` np.argmax(predictions[0]) ``` というわけで、このモデルは、この画像が、アンクルブーツ、`class_names[9]` である可能性が最も高いと判断したことになります。これが正しいかどうか、テスト用ラベルを見てみましょう。 ``` test_labels[0] ``` 10チャンネルすべてをグラフ化してみることができます。 ``` def plot_image(i, predictions_array, true_label, img): predictions_array, true_label, img = predictions_array[i], true_label[i], img[i] plt.grid(False) plt.xticks([]) plt.yticks([]) plt.imshow(img, cmap=plt.cm.binary) predicted_label = np.argmax(predictions_array) if predicted_label == true_label: color = 'blue' else: color = 'red' plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_label]), color=color) def plot_value_array(i, predictions_array, true_label): predictions_array, true_label = predictions_array[i], true_label[i] plt.grid(False) plt.xticks([]) plt.yticks([]) thisplot = plt.bar(range(10), predictions_array, color="#777777") plt.ylim([0, 1]) predicted_label = np.argmax(predictions_array) thisplot[predicted_label].set_color('red') thisplot[true_label].set_color('blue') ``` 0番目の画像と、予測、予測配列を見てみましょう。 ``` i = 0 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(i, predictions, test_labels, test_images) plt.subplot(1,2,2) plot_value_array(i, predictions, test_labels) plt.show() i = 12 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(i, predictions, test_labels, test_images) plt.subplot(1,2,2) plot_value_array(i, predictions, test_labels) plt.show() ``` 予測の中のいくつかの画像を、予測値とともに表示してみましょう。正しい予測は青で、誤っている予測は赤でラベルを表示します。数字は予測したラベルのパーセント(100分率)を示します。自信があるように見えても間違っていることがあることに注意してください。 ``` # X個のテスト画像、予測されたラベル、正解ラベルを表示します。 # 正しい予測は青で、間違った予測は赤で表示しています。 num_rows = 5 num_cols = 3 num_images = num_rows*num_cols plt.figure(figsize=(2*2*num_cols, 2*num_rows)) for i in range(num_images): plt.subplot(num_rows, 2*num_cols, 2*i+1) plot_image(i, predictions, test_labels, test_images) plt.subplot(num_rows, 2*num_cols, 2*i+2) plot_value_array(i, predictions, test_labels) plt.show() ``` 最後に、訓練済みモデルを使って1枚の画像に対する予測を行います。 ``` # テスト用データセットから画像を1枚取り出す img = test_images[0] print(img.shape) ``` `tf.keras` モデルは、サンプルの中の**バッチ**(batch)あるいは「集まり」について予測を行うように作られています。そのため、1枚の画像を使う場合でも、リスト化する必要があります。 ``` # 画像を1枚だけのバッチのメンバーにする img = (np.expand_dims(img,0)) print(img.shape) ``` そして、予測を行います。 ``` predictions_single = model.predict(img) print(predictions_single) plot_value_array(0, predictions_single, test_labels) _ = plt.xticks(range(10), class_names, rotation=45) plt.show() ``` `model.predict` メソッドの戻り値は、リストのリストです。リストの要素のそれぞれが、バッチの中の画像に対応します。バッチの中から、(といってもバッチの中身は1つだけですが)予測を取り出します。 ``` prediction = predictions[0] np.argmax(prediction) ``` というわけで、モデルは9というラベルを予測しました。
github_jupyter
<a href="https://colab.research.google.com/github/karaage0703/karaage-ai-book/blob/master/ch03/03_karaage_ai_book_generate_text_markov_chain.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # マルコフ連鎖による文章生成 ## 教師データのダウンロード ``` !wget https://github.com/aozorabunko/aozorabunko/raw/master/cards/000096/files/2093_ruby_28087.zip !unzip 2093_ruby_28087.zip ``` ファイルを読み込みます ``` text_list = [] with open('dogura_magura.txt', encoding='shift_jis') as f: text_list = f.readlines() text_list[0:10] ``` ## データの前処理 不要な文字の削除 ``` import re def normalize_text(text): text = re.sub(r'》', '', text) text = re.sub(r'※', '', text) text = re.sub(r'《', '', text) text = re.sub(r'[', '', text) text = re.sub(r'#', '', text) text = re.sub(r'-', '', text) text = re.sub(r'|', '', text) text = re.sub(r']', '', text) text = re.sub(r'[', '', text) text = re.sub(r'【','', text) text = re.sub(r'】','', text) text = text.strip() return text new_text_list = [] for text in text_list: text = normalize_text(text) new_text_list.append(text) text_list = new_text_list text_list[0:10] ``` 形態素解析ライブラリの「janome」をインストール ``` !pip install janome ``` 分かち書き ``` from janome.tokenizer import Tokenizer def wakachigaki(text_list): t = Tokenizer() words = [] for text in text_list: tokens = t.tokenize(text) for token in tokens: pos = token.part_of_speech.split(',')[0] words.append(token.surface) text = ' '.join(words) return text word_list = [w for w in wakachigaki(text_list).split()] word_list[0:10] ``` ## マルコフ連鎖モデルの学習 ### 簡単な例で確認 ``` test_text = ['私はからあげが好きだ。君はからあげを食べる。私はおやつが好きだ。'] test_text = wakachigaki(test_text) test_text = test_text.replace('から あげ', 'からあげ') test_text = test_text.replace('お やつ', 'おやつ') test_word_list = [w for w in test_text.split()] test_word_list ``` 一階のマルコフ連鎖モデル ``` def make_markov_model_1(word_list): markov = {} w1 = '' for word in word_list: if w1: if w1 not in markov: markov[w1] = [] markov[w1].append(word) w1 = word return markov markov_model_test_1 = make_markov_model_1(test_word_list) ``` 学習したモデルの確認 ``` def check_model(model, check_numb=10): count = 0 for key in model.keys(): if count >= 0: print('key:', key) print('value:', model[key]) print('------------------------') count += 1 if count > check_numb: break check_model(markov_model_test_1, check_numb=20) test_text = ['私はからあげが好きだ。君はからあげを食べる。私はおやつが好きだ。空が青い。'] test_text = wakachigaki(test_text) test_text = test_text.replace('から あげ', 'からあげ') test_text = test_text.replace('お やつ', 'おやつ') test_word_list = [w for w in test_text.split()] test_word_list markov_model_test_1 = make_markov_model_1(test_word_list) check_model(markov_model_test_1, check_numb=20) def make_markov_model_2(text_list): markov = {} w1 = '' w2 = '' for word in text_list: if w1 and w2: if (w1, w2) not in markov: markov[(w1, w2)] = [] markov[(w1, w2)].append(word) w1, w2 = w2, word return markov markov_model_test_2 = make_markov_model_2(test_word_list) check_model(markov_model_test_2, check_numb=20) ``` ### ドグラ・マグラを使ってモデルを生成 ``` markov_model_2 = make_markov_model_2(word_list) check_model(markov_model_2, check_numb=20) ``` ## 文章の生成 ### 2階のマルコフモデルで文章生成 作成したモデルを使って文章を生成する ``` import random def generate_text_2(model, max_sentence): count_sentence = 0 sentence = '' w1, w2 = random.choice(list(model.keys())) while count_sentence < max_sentence: try: tmp = random.choice(model[(w1, w2)]) sentence += tmp if(tmp=='。'): count_sentence += 1 sentence += '\n' w1, w2 = w2, tmp except: w1, w2 = random.choice(list(model.keys())) return sentence print(generate_text_2(markov_model_2, 10)) ``` ### 2階から5階のマルコフ連鎖に変更 ``` def make_markov_model_5(word_list): markov = {} w1 = '' w2 = '' w3 = '' w4 = '' w5 = '' for word in word_list: if w1 and w2 and w3 and w4 and w5: if (w1, w2, w3, w4, w5) not in markov: markov[(w1, w2, w3, w4, w5)] = [] markov[(w1, w2, w3, w4, w5)].append(word) w1, w2, w3, w4, w5 = w2, w3, w4, w5, word return markov markov_model_5 = make_markov_model_5(word_list) check_model(markov_model_5, 20) import random def generate_text_5(model, max_sentence): count_sentence = 0 sentence = '' w1, w2, w3, w4, w5 = random.choice(list(model.keys())) while count_sentence < max_sentence: try: tmp = random.choice(model[(w1, w2, w3, w4, w5)]) sentence += tmp if(tmp=='。'): count_sentence += 1 sentence += '\n' w1, w2, w3, w4, w5 = w2, w3, w4, w5, tmp except: w1, w2, w3, w4, w5 = random.choice(list(model.keys())) return sentence print(generate_text_5(markov_model_5, 10)) ``` # 参考リンク - https://omedstu.jimdofree.com/2018/05/06/%E3%83%9E%E3%83%AB%E3%82%B3%E3%83%95%E9%80%A3%E9%8E%96%E3%81%AB%E3%82%88%E3%82%8B%E6%96%87%E6%9B%B8%E7%94%9F%E6%88%90/ - https://qiita.com/k-jimon/items/f02fae75e853a9c02127
github_jupyter
# Reinforcement Learning This IPy notebook acts as supporting material for **Chapter 21 Reinforcement Learning** of the book* Artificial Intelligence: A Modern Approach*. This notebook makes use of the implementations in rl.py module. We also make use of implementation of MDPs in the mdp.py module to test our agents. It might be helpful if you have already gone through the IPy notebook dealing with Markov decision process. Let us import everything from the rl module. It might be helpful to view the source of some of our implementations. Please refer to the Introductory IPy file for more details. ``` from rl import * ``` ## Review Before we start playing with the actual implementations let us review a couple of things about RL. 1. Reinforcement Learning is concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. 2. Reinforcement learning differs from standard supervised learning in that correct input/output pairs are never presented, nor sub-optimal actions explicitly corrected. Further, there is a focus on on-line performance, which involves finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge). -- Source: [Wikipedia](https://en.wikipedia.org/wiki/Reinforcement_learning) In summary we have a sequence of state action transitions with rewards associated with some states. Our goal is to find the optimal policy (pi) which tells us what action to take in each state. ## Passive Reinforcement Learning In passive Reinforcement Learning the agent follows a fixed policy and tries to learn the Reward function and the Transition model (if it is not aware of that). ### Passive Temporal Difference Agent The PassiveTDAgent class in the rl module implements the Agent Program (notice the usage of word Program) described in **Fig 21.4** of the AIMA Book. PassiveTDAgent uses temporal differences to learn utility estimates. In simple terms we learn the difference between the states and backup the values to previous states while following a fixed policy. Let us look into the source before we see some usage examples. ``` %psource PassiveTDAgent ``` The Agent Program can be obtained by creating the instance of the class by passing the appropriate parameters. Because of the __ call __ method the object that is created behaves like a callable and returns an appropriate action as most Agent Programs do. To instantiate the object we need a policy(pi) and a mdp whose utility of states will be estimated. Let us import a GridMDP object from the mdp module. **Figure 17.1 (sequential_decision_environment)** is similar to **Figure 21.1** but has some discounting as **gamma = 0.9**. ``` from mdp import sequential_decision_environment sequential_decision_environment ``` **Figure 17.1 (sequential_decision_environment)** is a GridMDP object and is similar to the grid shown in **Figure 21.1**. The rewards in the terminal states are **+1** and **-1** and **-0.04** in rest of the states. <img src="files/images/mdp.png"> Now we define a policy similar to **Fig 21.1** in the book. ``` # Action Directions north = (0, 1) south = (0,-1) west = (-1, 0) east = (1, 0) policy = { (0, 2): east, (1, 2): east, (2, 2): east, (3, 2): None, (0, 1): north, (2, 1): north, (3, 1): None, (0, 0): north, (1, 0): west, (2, 0): west, (3, 0): west, } ``` Let us create our object now. We also use the **same alpha** as given in the footnote of the book on **page 837**. ``` our_agent = PassiveTDAgent(policy, sequential_decision_environment, alpha=lambda n: 60./(59+n)) ``` The rl module also has a simple implementation to simulate iterations. The function is called **run_single_trial**. Now we can try our implementation. We can also compare the utility estimates learned by our agent to those obtained via **value iteration**. ``` from mdp import value_iteration ``` The values calculated by value iteration: ``` print(value_iteration(sequential_decision_environment)) ``` Now the values estimated by our agent after **200 trials**. ``` for i in range(200): run_single_trial(our_agent,sequential_decision_environment) print(our_agent.U) ``` We can also explore how these estimates vary with time by using plots similar to **Fig 21.5a**. To do so we define a function to help us with the same. We will first enable matplotlib using the inline backend. ``` %matplotlib inline import matplotlib.pyplot as plt def graph_utility_estimates(agent_program, mdp, no_of_iterations, states_to_graph): graphs = {state:[] for state in states_to_graph} for iteration in range(1,no_of_iterations+1): run_single_trial(agent_program, mdp) for state in states_to_graph: graphs[state].append((iteration, agent_program.U[state])) for state, value in graphs.items(): state_x, state_y = zip(*value) plt.plot(state_x, state_y, label=str(state)) plt.ylim([0,1.2]) plt.legend(loc='lower right') plt.xlabel('Iterations') plt.ylabel('U') ``` Here is a plot of state (2,2). ``` agent = PassiveTDAgent(policy, sequential_decision_environment, alpha=lambda n: 60./(59+n)) graph_utility_estimates(agent, sequential_decision_environment, 500, [(2,2)]) ``` It is also possible to plot multiple states on the same plot. ``` graph_utility_estimates(agent, sequential_decision_environment, 500, [(2,2), (3,2)]) ``` ## Active Reinforcement Learning Unlike Passive Reinforcement Learning in Active Reinforcement Learning we are not bound by a policy pi and we need to select our actions. In other words the agent needs to learn an optimal policy. The fundamental tradeoff the agent needs to face is that of exploration vs. exploitation. ### QLearning Agent The QLearningAgent class in the rl module implements the Agent Program described in **Fig 21.8** of the AIMA Book. In Q-Learning the agent learns an action-value function Q which gives the utility of taking a given action in a particular state. Q-Learning does not required a transition model and hence is a model free method. Let us look into the source before we see some usage examples. ``` %psource QLearningAgent ``` The Agent Program can be obtained by creating the instance of the class by passing the appropriate parameters. Because of the __ call __ method the object that is created behaves like a callable and returns an appropriate action as most Agent Programs do. To instantiate the object we need a mdp similar to the PassiveTDAgent. Let us use the same GridMDP object we used above. **Figure 17.1 (sequential_decision_environment)** is similar to **Figure 21.1** but has some discounting as **gamma = 0.9**. The class also implements an exploration function **f** which returns fixed **Rplus** untill agent has visited state, action **Ne** number of times. This is the same as the one defined on page **842** of the book. The method **actions_in_state** returns actions possible in given state. It is useful when applying max and argmax operations. Let us create our object now. We also use the **same alpha** as given in the footnote of the book on **page 837**. We use **Rplus = 2** and **Ne = 5** as defined on page 843. **Fig 21.7** ``` q_agent = QLearningAgent(sequential_decision_environment, Ne=5, Rplus=2, alpha=lambda n: 60./(59+n)) ``` Now to try out the q_agent we make use of the **run_single_trial** function in rl.py (which was also used above). Let us use **200** iterations. ``` for i in range(200): run_single_trial(q_agent,sequential_decision_environment) ``` Now let us see the Q Values. The keys are state-action pairs. Where differnt actions correspond according to: north = (0, 1) south = (0,-1) west = (-1, 0) east = (1, 0) ``` q_agent.Q ``` The Utility **U** of each state is related to **Q** by the following equation. **U (s) = max <sub>a</sub> Q(s, a)** Let us convert the Q Values above into U estimates. ``` U = defaultdict(lambda: -1000.) # Very Large Negative Value for Comparison see below. for state_action, value in q_agent.Q.items(): state, action = state_action if U[state] < value: U[state] = value U ``` Let us finally compare these estimates to value_iteration results. ``` print(value_iteration(sequential_decision_environment)) ```
github_jupyter
# Lorentz ODE Solver in JAX Alex Alemi # Cloud TPU Setup ``` from jax.tools import colab_tpu colab_tpu.setup_tpu() ``` # Imports ``` import io import os from functools import partial import numpy as np import jax import jax.numpy as jnp from jax import vmap, jit, grad, ops, lax, config from jax import random as jr # The following is required to use TPU Driver as JAX's backend. config.FLAGS.jax_xla_backend = "tpu_driver" config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR'] import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.cm as cm from IPython.display import display_png mpl.rcParams['savefig.pad_inches'] = 0 plt.style.use('seaborn-dark') %matplotlib inline ``` # Plotting Utilities These just provide fast, better antialiased line plotting than typical matplotlib plotting routines. ``` @jit def drawline(im, x0, y0, x1, y1): """An implementation of Wu's antialiased line algorithm. This functional version was adapted from here: https://en.wikipedia.org/wiki/Xiaolin_Wu's_line_algorithm """ ipart = lambda x: jnp.floor(x).astype('int32') round_ = lambda x: ipart(x + 0.5).astype('int32') fpart = lambda x: x - jnp.floor(x) rfpart = lambda x: 1 - fpart(x) def plot(im, x, y, c): return ops.index_add(im, ops.index[x, y], c) steep = jnp.abs(y1 - y0) > jnp.abs(x1 - x0) cond_swap = lambda cond, x: lax.cond(cond, x, lambda x: (x[1], x[0]), x, lambda x: x) (x0, y0) = cond_swap(steep, (x0, y0)) (x1, y1) = cond_swap(steep, (x1, y1)) (y0, y1) = cond_swap(x0 > x1, (y0, y1)) (x0, x1) = cond_swap(x0 > x1, (x0, x1)) dx = x1 - x0 dy = y1 - y0 gradient = jnp.where(dx == 0.0, 1.0, dy/dx) # handle first endpoint xend = round_(x0) yend = y0 + gradient * (xend - x0) xgap = rfpart(x0 + 0.5) xpxl1 = xend # this will be used in main loop ypxl1 = ipart(yend) def true_fun(im): im = plot(im, ypxl1, xpxl1, rfpart(yend) * xgap) im = plot(im, ypxl1+1, xpxl1, fpart(yend) * xgap) return im def false_fun(im): im = plot(im, xpxl1, ypxl1 , rfpart(yend) * xgap) im = plot(im, xpxl1, ypxl1+1, fpart(yend) * xgap) return im im = lax.cond(steep, im, true_fun, im, false_fun) intery = yend + gradient # handle second endpoint xend = round_(x1) yend = y1 + gradient * (xend - x1) xgap = fpart(x1 + 0.5) xpxl2 = xend # this will be used in the main loop ypxl2 = ipart(yend) def true_fun(im): im = plot(im, ypxl2 , xpxl2, rfpart(yend) * xgap) im = plot(im, ypxl2+1, xpxl2, fpart(yend) * xgap) return im def false_fun(im): im = plot(im, xpxl2, ypxl2, rfpart(yend) * xgap) im = plot(im, xpxl2, ypxl2+1, fpart(yend) * xgap) return im im = lax.cond(steep, im, true_fun, im, false_fun) def true_fun(arg): im, intery = arg def body_fun(x, arg): im, intery = arg im = plot(im, ipart(intery), x, rfpart(intery)) im = plot(im, ipart(intery)+1, x, fpart(intery)) intery = intery + gradient return (im, intery) im, intery = lax.fori_loop(xpxl1+1, xpxl2, body_fun, (im, intery)) return (im, intery) def false_fun(arg): im, intery = arg def body_fun(x, arg): im, intery = arg im = plot(im, x, ipart(intery), rfpart(intery)) im = plot(im, x, ipart(intery)+1, fpart(intery)) intery = intery + gradient return (im, intery) im, intery = lax.fori_loop(xpxl1+1, xpxl2, body_fun, (im, intery)) return (im, intery) im, intery = lax.cond(steep, (im, intery), true_fun, (im, intery), false_fun) return im def img_adjust(data): oim = np.array(data) hist, bin_edges = np.histogram(oim.flat, bins=256*256) bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2 cdf = hist.cumsum() cdf = cdf / float(cdf[-1]) return np.interp(oim.flat, bin_centers, cdf).reshape(oim.shape) def imify(arr, vmin=None, vmax=None, cmap=None, origin=None): arr = img_adjust(arr) sm = cm.ScalarMappable(cmap=cmap) sm.set_clim(vmin, vmax) if origin is None: origin = mpl.rcParams["image.origin"] if origin == "lower": arr = arr[::-1] rgba = sm.to_rgba(arr, bytes=True) return rgba def plot_image(array, **kwargs): f = io.BytesIO() imarray = imify(array, **kwargs) plt.imsave(f, imarray, format="png") f.seek(0) dat = f.read() f.close() display_png(dat, raw=True) def pack_images(images, rows, cols): shape = np.shape(images) width, height, depth = shape[-3:] images = np.reshape(images, (-1, width, height, depth)) batch = np.shape(images)[0] rows = np.minimum(rows, batch) cols = np.minimum(batch // rows, cols) images = images[:rows * cols] images = np.reshape(images, (rows, cols, width, height, depth)) images = np.transpose(images, [0, 2, 1, 3, 4]) images = np.reshape(images, [rows * width, cols * height, depth]) return images ``` # Lorentz Dynamics Implement Lorentz' attractor ``` sigma = 10. beta = 8./3 rho = 28. @jit def f(state, t): x, y, z = state return jnp.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z]) ``` # Runge Kutta Integrator ``` @jit def rk4(ys, dt, N): @jit def step(i, ys): h = dt t = dt * i k1 = h * f(ys[i-1], t) k2 = h * f(ys[i-1] + k1/2., dt * i + h/2.) k3 = h * f(ys[i-1] + k2/2., t + h/2.) k4 = h * f(ys[i-1] + k3, t + h) ysi = ys[i-1] + 1./6 * (k1 + 2 * k2 + 2 * k3 + k4) return ops.index_update(ys, ops.index[i], ysi) return lax.fori_loop(1, N, step, ys) ``` # Solve and plot a single ODE Solution using jitted solver and plotter ``` N = 40000 # set initial condition state0 = jnp.array([1., 1., 1.]) ys = jnp.zeros((N,) + state0.shape) ys = ops.index_update(ys, ops.index[0], state0) # solve for N steps ys = rk4(ys, 0.004, N).block_until_ready() # plotting size and region: xlim, zlim = (-20, 20), (0, 50) xN, zN = 800, 600 # fast, jitted plotting function @partial(jax.jit, static_argnums=(2,3,4,5)) def jplotter(xs, zs, xlim, zlim, xN, zN): im = jnp.zeros((xN, zN)) xpixels = (xs - xlim[0])/(1.0 * (xlim[1] - xlim[0])) * xN zpixels = (zs - zlim[0])/(1.0 * (zlim[1] - zlim[0])) * zN def body_fun(i, im): return drawline(im, xpixels[i-1], zpixels[i-1], xpixels[i], zpixels[i]) return lax.fori_loop(1, xpixels.shape[0], body_fun, im) im = jplotter(ys[...,0], ys[...,2], xlim, zlim, xN, zN) plot_image(im[:,::-1].T, cmap='magma') ``` # Parallel ODE Solutions with Pmap ``` N_dev = jax.device_count() N = 4000 # set some initial conditions for each replicate ys = jnp.zeros((N_dev, N, 3)) state0 = jr.uniform(jr.PRNGKey(1), minval=-1., maxval=1., shape=(N_dev, 3)) state0 = state0 * jnp.array([18,18,1]) + jnp.array((0.,0.,10.)) ys = ops.index_update(ys, ops.index[:, 0], state0) # solve each replicate in parallel using `pmap` of rk4 solver: ys = jax.pmap(rk4)(ys, 0.004 * jnp.ones(N_dev), N * jnp.ones(N_dev, dtype=np.int32) ).block_until_ready() # parallel plotter using lexical closure and pmap'd core plotting function def pplotter(_xs, _zs, xlim, zlim, xN, zN): N_dev = _xs.shape[0] im = jnp.zeros((N_dev, xN, zN)) @jax.pmap def plotfn(im, xs, zs): xpixels = (xs - xlim[0])/(1.0 * (xlim[1] - xlim[0])) * xN zpixels = (zs - zlim[0])/(1.0 * (zlim[1] - zlim[0])) * zN def body_fun(i, im): return drawline(im, xpixels[i-1], zpixels[i-1], xpixels[i], zpixels[i]) return lax.fori_loop(1, xpixels.shape[0], body_fun, im) return plotfn(im, _xs, _zs) xlim, zlim = (-20, 20), (0, 50) xN, zN = 200, 150 # above, plot ODE traces separately ims = pplotter(ys[...,0], ys[...,2], xlim, zlim, xN, zN) im = pack_images(ims[..., None], 4, 2)[..., 0] plot_image(im[:,::-1].T, cmap='magma') # below, plot combined ODE traces ims = pplotter(ys[...,0], ys[...,2], xlim, zlim, xN*4, zN*4) plot_image(jnp.sum(ims, axis=0)[:,::-1].T, cmap='magma') ```
github_jupyter
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ### Agent Testing - Multiple Job Sets In this notebook we test the performance of the agent trained with multiple job sets. Here we compare the agent performance to the performance of the agent trained with a single job set, using the same unseen job set for testing. Then we show that this agent can generalize well, and its performance on the unseen job set is much better. This lab was tested with Ray version 0.8.5. Please make sure you have this version installed in your Compute Instance. ``` !pip install ray[rllib]==0.8.5 ``` Import the necessary packages. ``` import sys, os sys.path.insert(0, os.path.join(os.getcwd(), '../agent_training/training_scripts/environment')) os.environ.setdefault('PYTHONPATH', os.path.join(os.getcwd(), '../agent_training/training_scripts/environment')) import ray import ray.rllib.agents.pg as pg from ray.rllib.models.torch.torch_modelv2 import TorchModelV2 from ray.rllib.models import ModelCatalog from ray.rllib.utils.annotations import override from ray.tune.registry import register_env import gym from gym import spaces from environment import Parameters, Env import torch import torch.nn as nn import numpy as np ``` Here we define the RL environment class according to the Gym specification in the same way that was done in the agent training script. The difference is that here we add two new methods, *observe* and *plot_state_img*, allowing us to visualize the states of the environment as the agent acts. Details about how to work with custom environment in RLLib can be found [here](https://docs.ray.io/en/master/rllib-env.html#configuring-environments). We also introduce a new parameter to the environment constructor, *unseen*, which is a flag telling the environment to use unseen job sets, meaning job sets different than the ones used for training. ``` class CustomEnv(gym.Env): def __init__(self, env_config): simu_len = env_config['simu_len'] num_ex = env_config['num_ex'] unseen = env_config['unseen'] pa = Parameters() pa.simu_len = simu_len pa.num_ex = num_ex pa.unseen = unseen pa.compute_dependent_parameters() self.env = Env(pa, render=False, repre='image') self.action_space = spaces.Discrete(n=pa.num_nw + 1) self.observation_space = spaces.Box(low=0, high=1, shape=self.env.observe().shape, dtype=np.float) def reset(self): self.env.reset() obs = self.env.observe() return obs def step(self, action): next_obs, reward, done, info = self.env.step(action) info = {} return next_obs, reward, done, info def observe(self): return self.env.observe() def plot_state_img(self): return self.env.plot_state_img() ``` Define the RL environment constructor and register it for use in RLLib. ``` def env_creator(env_config): return CustomEnv(env_config) register_env('CustomEnv', env_creator) ``` Here we define the custom model for the agent policy. RLLib supports both TensorFlow and PyTorch and here we are using the PyTorch interfaces. The policy model is a simple 2-layer feedforward neural network that maps the environment observation array into one of possible 6 actions. It also defines a value function network as a branch of the policy network, to output a single scalar value representing the expected sum of rewards. This value can be used as the baseline for the policy gradient algorithm. More details about how to work with custom policy models with PyTorch in RLLib can be found [here](https://docs.ray.io/en/master/rllib-models.html#pytorch-models). ``` class CustomModel(TorchModelV2, nn.Module): def __init__(self, obs_space, action_space, num_outputs, model_config, name): TorchModelV2.__init__(self, obs_space, action_space, num_outputs, model_config, name) nn.Module.__init__(self) self.hidden_layers = nn.Sequential(nn.Linear(20*124, 32), nn.ReLU(), nn.Linear(32, 16), nn.ReLU()) self.logits = nn.Sequential(nn.Linear(16, 6)) self.value_branch = nn.Sequential(nn.Linear(16, 1)) @override(TorchModelV2) def forward(self, input_dict, state, seq_lens): obs = input_dict['obs'].float() obs = obs.view(obs.shape[0], 1, obs.shape[1], obs.shape[2]) obs = obs.view(obs.shape[0], obs.shape[1] * obs.shape[2] * obs.shape[3]) self.features = self.hidden_layers(obs) actions = self.logits(self.features) return actions, state @override(TorchModelV2) def value_function(self): return self.value_branch(self.features).squeeze(1) ``` Now we register the custom policy model for use in RLLib. ``` ModelCatalog.register_custom_model('CustomModel', CustomModel) ``` Here we create a copy of the default Policy Gradient configuration in RLLib and set the relevant parameters for testing a trained agent. In this case we only need the parameters related to the custom model and to our environment. ``` config = pg.DEFAULT_CONFIG my_config = config.copy() my_params = { 'use_pytorch' : True, 'model': {'custom_model': 'CustomModel'}, 'env': 'CustomEnv', 'env_config': {'simu_len': 50, 'num_ex': 1, 'unseen': True} } for key, value in my_params.items(): my_config[key] = value ``` Initialize the Ray backend. Here we run Ray locally. ``` ray.init() ``` Instantiate the policy gradient trainer object from RLLib. ``` trainer = pg.PGTrainer(config=my_config) ``` We can verify the policy model architecture by getting a reference to the policy object from the trainer and a reference to the model object from the policy. ``` policy = trainer.get_policy() model = policy.model print(model.parameters) ``` Here we load the model checkpoint, corresponding to the single job set training, into the trainer. ``` checkpoint_path = '../model_checkpoints/multi_jobset/checkpoint-1000' trainer.restore(checkpoint_path=checkpoint_path) ``` And finally we perform a rollout of the trained policy, using an unseen job set, meaning a job set different from the one used for training. We notice here that the agent is able to generalize well given unseen data. ``` import numpy as np from IPython import display import matplotlib.pyplot as plt import time from random import randint env = CustomEnv(env_config = my_params['env_config']) img = env.plot_state_img() plt.figure(figsize = (16,16)) plt.grid(color='w', linestyle='-', linewidth=0.5) plt.text(2, -2, "RESOURCES") plt.text(-4, 10, "CPU") plt.text(-4, 30, "MEM") plt.text(14, -2, "JOB QUEUE #1") plt.text(26, -2, "JOB QUEUE #2") plt.text(38, -2, "JOB QUEUE #3") plt.text(50, -2, "JOB QUEUE #4") plt.text(62, -2, "JOB QUEUE #5") plt.text(76, 20, "BACKLOG") plt.imshow(img, vmax=1, cmap='CMRmap') ax = plt.gca() ax.set_xticks(np.arange(-.5, 100, 1)) ax.set_xticklabels([]) ax.set_yticks(np.arange(-.5, 100, 1)) ax.set_yticklabels([]) ax.tick_params(axis=u'both', which=u'both',length=0) image = plt.imshow(img, vmax=1, cmap='CMRmap') display.display(plt.gcf()) actions = [] rewards = [] done = False s = 0 txt1 = plt.text(0, 45, '') txt2 = plt.text(0, 47, '') obs = env.observe() while not done: a = trainer.compute_action(obs) actions.append(a) obs, reward, done, info = env.step(a) rewards.append(reward) s += 1 txt1.remove() txt2.remove() txt1 = plt.text(0, 44, 'STEPS: ' + str(s), fontsize=14) txt2 = plt.text(0, 46, 'TOTAL AVERAGE JOB SLOWDOWN: ' + str(round(-sum(rewards))), fontsize=14) img = env.plot_state_img() image.set_data(img) display.display(plt.gcf()) display.clear_output(wait=True) ``` Shutdown the Ray backend. ``` ray.shutdown() ```
github_jupyter
<img align="center" style="max-width: 1000px" src="banner.png"> <img align="right" style="max-width: 200px; height: auto" src="hsg_logo.png"> ### Lab 05 - "Convolutional Neural Networks (CNNs)" GSERM'21 course "Deep Learning: Fundamentals and Applications", University of St. Gallen The lab environment of the "Deep Learning: Fundamentals and Applications" GSERM course at the University of St. Gallen (HSG) is based on Jupyter Notebooks (https://jupyter.org), which allow to perform a variety of statistical evaluations and data analyses. In this lab, we will learn how to enhance vanilla Artificial Neural Networks (ANNs) using `PyTorch` to classify even more complex images. Therefore, we use a special type of deep neural network referred to **Convolutional Neural Networks (CNNs)**. CNNs encompass the ability to take advantage of the hierarchical pattern in data and assemble more complex patterns using smaller and simpler patterns. Therefore, CNNs are capable to learn a set of discriminative features 'pattern' and subsequently utilize the learned pattern to classify the content of an image. We will again use the functionality of the `PyTorch` library to implement and train an CNN based neural network. The network will be trained on a set of tiny images to learn a model of the image content. Upon successful training, we will utilize the learned CNN model to classify so far unseen tiny images into distinct categories such as aeroplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. The figure below illustrates a high-level view on the machine learning process we aim to establish in this lab. <img align="center" style="max-width: 900px" src="classification.png"> (Image of the CNN architecture created via http://alexlenail.me/) As always, pls. don't hesitate to ask all your questions either during the lab, post them in our CANVAS (StudyNet) forum (https://learning.unisg.ch), or send us an email (using the course email). ## 1. Lab Objectives: After today's lab, you should be able to: > 1. Understand the basic concepts, intuitions and major building blocks of **Convolutional Neural Networks (CNNs)**. > 2. Know how to **implement and to train a CNN** to learn a model of tiny image data. > 3. Understand how to apply such a learned model to **classify images** images based on their content into distinct categories. > 4. Know how to **interpret and visualize** the model's classification results. ## 2. Setup of the Jupyter Notebook Environment Similar to the previous labs, we need to import a couple of Python libraries that allow for data analysis and data visualization. We will mostly use the `PyTorch`, `Numpy`, `Sklearn`, `Matplotlib`, `Seaborn` and a few utility libraries throughout this lab: ``` # import standard python libraries import os, urllib, io from datetime import datetime import numpy as np ``` Import Python machine / deep learning libraries: ``` # import the PyTorch deep learning library import torch, torchvision import torch.nn.functional as F from torch import nn, optim from torch.autograd import Variable ``` Import the sklearn classification metrics: ``` # import sklearn classification evaluation library from sklearn import metrics from sklearn.metrics import classification_report, confusion_matrix ``` Import Python plotting libraries: ``` # import matplotlib, seaborn, and PIL data visualization libary import matplotlib.pyplot as plt import seaborn as sns from PIL import Image ``` Enable notebook matplotlib inline plotting: ``` %matplotlib inline ``` Create a structure of notebook sub-directories inside of the current **working directory** to store the data and the trained neural network models: ``` # create the data sub-directory data_directory = './data' if not os.path.exists(data_directory): os.makedirs(data_directory) # create the models sub-directory models_directory = './models' if not os.path.exists(models_directory): os.makedirs(models_directory) ``` Set a random `seed` value to obtain reproducible results: ``` # init deterministic seed seed_value = 1234 np.random.seed(seed_value) # set numpy seed torch.manual_seed(seed_value) # set pytorch seed CPU ``` Google Colab provides the use of free GPUs for running notebooks. However, if you just execute this notebook as is, it will use your device's CPU. To run the lab on a GPU, got to `Runtime` > `Change runtime type` and set the Runtime type to `GPU` in the drop-down. Running this lab on a CPU is fine, but you will find that GPU computing is faster. *CUDA* indicates that the lab is being run on GPU. Enable GPU computing by setting the `device` flag and init a `CUDA` seed: ``` # set cpu or gpu enabled device device = torch.device('cuda' if torch.cuda.is_available() else 'cpu').type # init deterministic GPU seed torch.cuda.manual_seed(seed_value) # log type of device enabled print('[LOG] notebook with {} computation enabled'.format(str(device))) ``` Let's determine if we have access to a `GPU` provided by e.g. Google's `Colab` environment: ``` !nvidia-smi ``` ## 3. Dataset Download and Data Assessment The **CIFAR-10 database** (**C**anadian **I**nstitute **F**or **A**dvanced **R**esearch) is a collection of images that are commonly used to train machine learning and computer vision algorithms. The database is widely used to conduct computer vision research using machine learning and deep learning methods: <img align="center" style="max-width: 500px; height: 500px" src="cifar10.png"> (Source: https://www.kaggle.com/c/cifar-10) Further details on the dataset can be obtained via: *Krizhevsky, A., 2009. "Learning Multiple Layers of Features from Tiny Images", ( https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf )."* The CIFAR-10 database contains **60,000 color images** (50,000 training images and 10,000 validation images). The size of each image is 32 by 32 pixels. The collection of images encompasses 10 different classes that represent airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. Let's define the distinct classs for further analytics: ``` cifar10_classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] ``` Thereby the dataset contains 6,000 images for each of the ten classes. The CIFAR-10 is a straightforward dataset that can be used to teach a computer how to recognize objects in images. Let's download, transform and inspect the training images of the dataset. Therefore, we first will define the directory we aim to store the training data: ``` train_path = data_directory + '/train_cifar10' ``` Now, let's download the training data accordingly: ``` # define pytorch transformation into tensor format transf = torchvision.transforms.Compose([torchvision.transforms.ToTensor(), torchvision.transforms.Normalize((0.49139968, 0.48215841, 0.44653091), (0.24703223, 0.24348513, 0.26158784))]) # download and transform training images cifar10_train_data = torchvision.datasets.CIFAR10(root=train_path, train=True, transform=transf, download=True) ``` Verify the volume of training images downloaded: ``` # get the length of the training data len(cifar10_train_data) ``` Furthermore, let's investigate a couple of the training images: ``` # set (random) image id image_id = 1800 # retrieve image exhibiting the image id cifar10_train_data[image_id] ``` Ok, that doesn't seem easily interpretable ;) Let's first seperate the image from its label information: ``` cifar10_train_image, cifar10_train_label = cifar10_train_data[image_id] ``` Great, now we are able to visually inspect our sample image: ``` # define tensor to image transformation trans = torchvision.transforms.ToPILImage() print(cifar10_train_image.max() - cifar10_train_image.min()) # set image plot title plt.title('Example: {}, Label: "{}"'.format(str(image_id), str(cifar10_classes[cifar10_train_label]))) # un-normalize cifar 10 image sample cifar10_train_image_plot = cifar10_train_image / 4.0 + 0.5 # plot 10 image sample plt.imshow(trans(cifar10_train_image_plot)) ``` Fantastic, right? Let's now decide on where we want to store the evaluation data: ``` eval_path = data_directory + '/eval_cifar10' ``` And download the evaluation data accordingly: ``` # define pytorch transformation into tensor format transf = torchvision.transforms.Compose([torchvision.transforms.ToTensor(), torchvision.transforms.Normalize((0.49139968, 0.48215841, 0.44653091), (0.24703223, 0.24348513, 0.26158784))]) # download and transform validation images cifar10_eval_data = torchvision.datasets.CIFAR10(root=eval_path, train=False, transform=transf, download=True) ``` Verify the volume of validation images downloaded: ``` # get the length of the training data len(cifar10_eval_data) ``` ## 4. Neural Network Implementation In this section we, will implement the architecture of the **neural network** we aim to utilize to learn a model that is capable of classifying the 32x32 pixel CIFAR 10 images according to the objects contained in each image. However, before we start the implementation, let's briefly revisit the process to be established. The following cartoon provides a birds-eye view: <img align="center" style="max-width: 900px" src="process.png"> Our CNN, which we name 'CIFAR10Net' and aim to implement consists of two **convolutional layers** and three **fully-connected layers**. In general, convolutional layers are specifically designed to learn a set of **high-level features** ("patterns") in the processed images, e.g., tiny edges and shapes. The fully-connected layers utilize the learned features to learn **non-linear feature combinations** that allow for highly accurate classification of the image content into the different image classes of the CIFAR-10 dataset, such as, birds, aeroplanes, horses. Let's implement the network architecture and subsequently have a more in-depth look into its architectural details: ``` # implement the CIFAR10Net network architecture class CIFAR10Net(nn.Module): # define the class constructor def __init__(self): # call super class constructor super(CIFAR10Net, self).__init__() # specify convolution layer 1 self.conv1 = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5, stride=1, padding=0) # define max-pooling layer 1 self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0) # specify convolution layer 2 self.conv2 = nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5, stride=1, padding=0) # define max-pooling layer 2 self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0) # specify fc layer 1 - in 16 * 5 * 5, out 120 self.linear1 = nn.Linear(16 * 5 * 5, 120, bias=True) # the linearity W*x+b self.relu1 = nn.ReLU(inplace=True) # the non-linearity # specify fc layer 2 - in 120, out 84 self.linear2 = nn.Linear(120, 84, bias=True) # the linearity W*x+b self.relu2 = nn.ReLU(inplace=True) # the non-linarity # specify fc layer 3 - in 84, out 10 self.linear3 = nn.Linear(84, 10) # the linearity W*x+b # add a softmax to the last layer self.logsoftmax = nn.LogSoftmax(dim=1) # the softmax # define network forward pass def forward(self, images): # high-level feature learning via convolutional layers # define conv layer 1 forward pass x = self.conv1(images) # conv layer 1 output dimension: ((Dimension-Kernel+2*Padding)/Stride)+1 = ((32-5+2*0)/1)+1 = 28 # define pooling 1 forward pass x = self.pool1(x) # pool 1 output dimension: ((Dimension-Kernel+2*Padding)/Stride)+1 = ((28-2+2*0)/2)+1 = 14 # define conv layer 2 forward pass x = self.conv2(x) # conv layer 2 output dimension: ((Dimension-Kernel+2*Padding)/Stride)+1 = ((14-5+2*0)/1)+1 = 10 # define pooling 2 forward pass x = self.pool2(x) # pool 2 output dimension: ((Dimension-Kernel+2*Padding)/Stride)+1 = ((10-2+2*0)/2)+1 = 5 # feature flattening # reshape image pixels x = x.view(-1, 16 * 5 * 5) # combination of feature learning via non-linear layers # define fc layer 1 forward pass x = self.relu1(self.linear1(x)) # define fc layer 2 forward pass x = self.relu2(self.linear2(x)) # define layer 3 forward pass x = self.logsoftmax(self.linear3(x)) # return forward pass result return x ``` You may have noticed that we applied two more layers (compared to the MNIST example described in the last lab) before the fully-connected layers. These layers are referred to as **convolutional** layers and are usually comprised of three operations, (1) **convolution**, (2) **non-linearity**, and (3) **max-pooling**. Those operations are usually executed in sequential order during the forward pass through a convolutional layer. In the following, we will have a detailed look into the functionality and number of parameters in each layer. We will start with providing images of 3x32x32 dimensions to the network, i.e., the three channels (red, green, blue) of an image each of size 32x32 pixels. ### 4.1. High-Level Feature Learning by Convolutional Layers Let's first have a look into the convolutional layers of the network as illustrated in the following: <img align="center" style="max-width: 600px" src="convolutions.png"> **First Convolutional Layer**: The first convolutional layer expects three input channels and will convolve six filters each of size 3x5x5. Let's briefly revisit how we can perform a convolutional operation on a given image. For that, we need to define a kernel which is a matrix of size 5x5, for example. To perform the convolution operation, we slide the kernel along with the image horizontally and vertically and obtain the dot product of the kernel and the pixel values of the image inside the kernel ('receptive field' of the kernel). The following illustration shows an example of a discrete convolution: <img align="center" style="max-width: 800px" src="convsample.png"> The left grid is called the input (an image or feature map). The middle grid, referred to as kernel, slides across the input feature map (or image). At each location, the product between each element of the kernel and the input element it overlaps is computed, and the results are summed up to obtain the output in the current location. In general, a discrete convolution is mathematically expressed by: <center> $y(m, n) = x(m, n) * h(m, n) = \sum^{m}_{j=0} \sum^{n}_{i=0} x(i, j) * h(m-i, n-j)$, </center> where $x$ denotes the input image or feature map, $h$ the applied kernel, and, $y$ the output. When performing the convolution operation the 'stride' defines the number of pixels to pass at a time when sliding the kernel over the input. While 'padding' adds the number of pixels to the input image (or feature map) to ensure that the output has the same shape as the input. Let's have a look at another animated example: <img align="center" style="max-width: 800px" src="convsample_animated.gif"> (Source: https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53) In our implementation padding is set to 0 and stride is set to 1. As a result, the output size of the convolutional layer becomes 6x28x28, because (32 - 5) + 1 = 28. This layer exhibits ((5 x 5 x 3) + 1) x 6 = 456 parameter. **First Max-Pooling Layer:** The max-pooling process is a sample-based discretization operation. The objective is to down-sample an input representation (image, hidden-layer output matrix, etc.), reducing its dimensionality and allowing for assumptions to be made about features contained in the sub-regions binned. To conduct such an operation, we again need to define a kernel. Max-pooling kernels are usually a tiny matrix of, e.g, of size 2x2. To perform the max-pooling operation, we slide the kernel along the image horizontally and vertically (similarly to a convolution) and compute the maximum pixel value of the image (or feature map) inside the kernel (the receptive field of the kernel). The following illustration shows an example of a max-pooling operation: <img align="center" style="max-width: 500px" src="poolsample.png"> The left grid is called the input (an image or feature map). The middle grid, referred to as kernel, slides across the input feature map (or image). We use a stride of 2, meaning the step distance for stepping over our input will be 2 pixels and won't overlap regions. At each location, the max value of the region that overlaps with the elements of the kernel and the input elements it overlaps is computed, and the results are obtained in the output of the current location. In our implementation, we do max-pooling with a 2x2 kernel and stride 2 this effectively drops the original image size from 6x28x28 to 6x14x14. **Second Convolutional Layer:** The second convolutional layer expects 6 input channels and will convolve 16 filters each of size 6x5x5x. Since padding is set to 0 and stride is set 1, the output size is 16x10x10, because (14 - 5) + 1 = 10. This layer therefore has ((5 x 5 x 6) + 1 x 16) = 24,16 parameter. **Second Max-Pooling Layer:** The second down-sampling layer uses max-pooling with 2x2 kernel and stride set to 2. This effectively drops the size from 16x10x10 to 16x5x5. ### 4.2. Flattening of Learned Features The output of the final-max pooling layer needs to be flattened so that we can connect it to a fully connected layer. This is achieved using the `torch.Tensor.view` method. Setting the parameter of the method to `-1` will automatically infer the number of rows required to handle the mini-batch size of the data. ### 4.3. Learning of Feature Classification Let's now have a look into the non-linear layers of the network illustrated in the following: <img align="center" style="max-width: 600px" src="fullyconnected.png"> The first fully connected layer uses 'Rectified Linear Units' (ReLU) activation functions to learn potential nonlinear combinations of features. The layers are implemented similarly to the fifth lab. Therefore, we will only focus on the number of parameters of each fully-connected layer: **First Fully-Connected Layer:** The first fully-connected layer consists of 120 neurons, thus in total exhibits ((16 x 5 x 5) + 1) x 120 = 48,120 parameter. **Second Fully-Connected Layer:** The output of the first fully-connected layer is then transferred to second fully-connected layer. The layer consists of 84 neurons equipped with ReLu activation functions, this in total exhibits (120 + 1) x 84 = 10,164 parameter. The output of the second fully-connected layer is then transferred to the output-layer (third fully-connected layer). The output layer is equipped with a softmax (that you learned about in the previous lab 05) and is made up of ten neurons, one for each object class contained in the CIFAR-10 dataset. This layer exhibits (84 + 1) x 10 = 850 parameter. As a result our CIFAR-10 convolutional neural exhibits a total of 456 + 2,416 + 48,120 + 10,164 + 850 = 62,006 parameter. (Source: https://www.stefanfiott.com/machine-learning/cifar-10-classifier-using-cnn-in-pytorch/) Now, that we have implemented our first Convolutional Neural Network we are ready to instantiate a network model to be trained: ``` model = CIFAR10Net() ``` Let's push the initialized `CIFAR10Net` model to the computing `device` that is enabled: ``` model = model.to(device) ``` Let's double check if our model was deployed to the GPU if available: ``` !nvidia-smi ``` Once the model is initialized we can visualize the model structure and review the implemented network architecture by execution of the following cell: ``` # print the initialized architectures print('[LOG] CIFAR10Net architecture:\n\n{}\n'.format(model)) ``` Looks like intended? Brilliant! Finally, let's have a look into the number of model parameters that we aim to train in the next steps of the notebook: ``` # init the number of model parameters num_params = 0 # iterate over the distinct parameters for param in model.parameters(): # collect number of parameters num_params += param.numel() # print the number of model paramters print('[LOG] Number of to be trained CIFAR10Net model parameters: {}.'.format(num_params)) ``` Ok, our "simple" CIFAR10Net model already encompasses an impressive number **62'006 model parameters** to be trained. Now that we have implemented the CIFAR10Net, we are ready to train the network. However, before starting the training, we need to define an appropriate loss function. Remember, we aim to train our model to learn a set of model parameters $\theta$ that minimize the classification error of the true class $c^{i}$ of a given CIFAR-10 image $x^{i}$ and its predicted class $\hat{c}^{i} = f_\theta(x^{i})$ as faithfully as possible. In this lab we use again the **'Negative Log-Likelihood (NLL)'** loss. During training the NLL loss will penalize models that result in a high classification error between the predicted class labels $\hat{c}^{i}$ and their respective true class label $c^{i}$. Now that we have implemented the CIFAR10Net, we are ready to train the network. Before starting the training, we need to define an appropriate loss function. Remember, we aim to train our model to learn a set of model parameters $\theta$ that minimize the classification error of the true class $c^{i}$ of a given CIFAR-10 image $x^{i}$ and its predicted class $\hat{c}^{i} = f_\theta(x^{i})$ as faithfully as possible. Let's instantiate the **Negative Log-Likelihood (NLL)** loss via the execution of the following `PyTorch` command: ``` # define the optimization criterion / loss function nll_loss = nn.NLLLoss() ``` Let's also push the initialized `nll_loss` computation to the computing `device` that is enabled: ``` nll_loss = nll_loss.to(device) ``` Based on the loss magnitude of a certain mini-batch PyTorch automatically computes the gradients. But even better, based on the gradient, the library also helps us in the optimization and update of the network parameters $\theta$. We will use the **Stochastic Gradient Descent (SGD) optimization** and set the `learning-rate to 0.001`. Each mini-batch step the optimizer will update the model parameters $\theta$ values according to the degree of classification error (the NLL loss). ``` # define learning rate and optimization strategy learning_rate = 0.001 optimizer = optim.SGD(params=model.parameters(), lr=learning_rate) ``` Now that we have successfully implemented and defined the three CNN building blocks let's take some time to review the `CIFAR10Net` model definition as well as the `loss`. Please, read the above code and comments carefully and don't hesitate to let us know any questions you might have. ## 5. Neural Network Model Training In this section, we will train our neural network model (as implemented in the section above) using the transformed images. More specifically, we will have a detailed look into the distinct training steps as well as how to monitor the training progress. ### 5.1. Preparing the Network Training So far, we have pre-processed the dataset, implemented the CNN and defined the classification error. Let's now start to train a corresponding model for **20 epochs** and a **mini-batch size of 128** CIFAR-10 images per batch. This implies that the whole dataset will be fed to the CNN 20 times in chunks of 128 images yielding to **391 mini-batches** (50.000 training images / 128 images per mini-batch) per epoch. After the processing of each mini-batch, the parameters of the network will be updated. ``` # specify the training parameters num_epochs = 20 # number of training epochs mini_batch_size = 128 # size of the mini-batches ``` Furthermore, lets specifiy and instantiate a corresponding PyTorch data loader that feeds the image tensors to our neural network: ``` cifar10_train_dataloader = torch.utils.data.DataLoader(cifar10_train_data, batch_size=mini_batch_size, shuffle=True) ``` ### 5.2. Running the Network Training Finally, we start training the model. The training procedure for each mini-batch is performed as follows: >1. do a forward pass through the CIFAR10Net network, >2. compute the negative log-likelihood classification error $\mathcal{L}^{NLL}_{\theta}(c^{i};\hat{c}^{i})$, >3. do a backward pass through the CIFAR10Net network, and >4. update the parameters of the network $f_\theta(\cdot)$. To ensure learning while training our CNN model, we will monitor whether the loss decreases with progressing training. Therefore, we obtain and evaluate the classification performance of the entire training dataset after each training epoch. Based on this evaluation, we can conclude on the training progress and whether the loss is converging (indicating that the model might not improve any further). The following elements of the network training code below should be given particular attention: >- `loss.backward()` computes the gradients based on the magnitude of the reconstruction loss, >- `optimizer.step()` updates the network parameters based on the gradient. ``` # init collection of training epoch losses train_epoch_losses = [] # set the model in training mode model.train() # train the CIFAR10 model for epoch in range(num_epochs): # init collection of mini-batch losses train_mini_batch_losses = [] # iterate over all-mini batches for i, (images, labels) in enumerate(cifar10_train_dataloader): # push mini-batch data to computation device images = images.to(device) labels = labels.to(device) # run forward pass through the network output = model(images) # reset graph gradients model.zero_grad() # determine classification loss loss = nll_loss(output, labels) # run backward pass loss.backward() # update network paramaters optimizer.step() # collect mini-batch reconstruction loss train_mini_batch_losses.append(loss.data.item()) # determine mean min-batch loss of epoch train_epoch_loss = np.mean(train_mini_batch_losses) # print epoch loss now = datetime.utcnow().strftime("%Y%m%d-%H:%M:%S") print('[LOG {}] epoch: {} train-loss: {}'.format(str(now), str(epoch), str(train_epoch_loss))) # set filename of actual model model_name = 'cifar10_model_epoch_{}.pth'.format(str(epoch)) # save current model to GDrive models directory torch.save(model.state_dict(), os.path.join(models_directory, model_name)) # determine mean min-batch loss of epoch train_epoch_losses.append(train_epoch_loss) ``` Upon successfull training let's visualize and inspect the training loss per epoch: ``` # prepare plot fig = plt.figure() ax = fig.add_subplot(111) # add grid ax.grid(linestyle='dotted') # plot the training epochs vs. the epochs' classification error ax.plot(np.array(range(1, len(train_epoch_losses)+1)), train_epoch_losses, label='epoch loss (blue)') # add axis legends ax.set_xlabel("[training epoch $e_i$]", fontsize=10) ax.set_ylabel("[Classification Error $\mathcal{L}^{NLL}$]", fontsize=10) # set plot legend plt.legend(loc="upper right", numpoints=1, fancybox=True) # add plot title plt.title('Training Epochs $e_i$ vs. Classification Error $L^{NLL}$', fontsize=10); ``` Ok, fantastic. The training error converges nicely. We could definitely train the network a couple more epochs until the error converges. But let's stay with the 20 training epochs for now and continue with evaluating our trained model. ## 6. Neural Network Model Evaluation Prior to evaluating our model, let's load the best performing model. Remember, that we stored a snapshot of the model after each training epoch to our local model directory. We will now load the last snapshot saved. ``` # restore pre-trained model snapshot best_model_name = 'https://raw.githubusercontent.com/HSG-AIML/LabGSERM/master/lab_05/models/cifar10_model_epoch_19.pth' # read stored model from the remote location model_bytes = urllib.request.urlopen(best_model_name) # load model tensor from io.BytesIO object model_buffer = io.BytesIO(model_bytes.read()) # init pre-trained model class best_model = CIFAR10Net() # load pre-trained models best_model.load_state_dict(torch.load(model_buffer, map_location=torch.device('cpu'))) ``` Let's inspect if the model was loaded successfully: ``` # set model in evaluation mode best_model.eval() ``` In order to evaluate our trained model, we need to feed the CIFAR10 images reserved for evaluation (the images that we didn't use as part of the training process) through the model. Therefore, let's again define a corresponding PyTorch data loader that feeds the image tensors to our neural network: ``` cifar10_eval_dataloader = torch.utils.data.DataLoader(cifar10_eval_data, batch_size=10000, shuffle=False) ``` We will now evaluate the trained model using the same mini-batch approach as we did when training the network and derive the mean negative log-likelihood loss of all mini-batches processed in an epoch: ``` # init collection of mini-batch losses eval_mini_batch_losses = [] # iterate over all-mini batches for i, (images, labels) in enumerate(cifar10_eval_dataloader): # run forward pass through the network output = best_model(images) # determine classification loss loss = nll_loss(output, labels) # collect mini-batch reconstruction loss eval_mini_batch_losses.append(loss.data.item()) # determine mean min-batch loss of epoch eval_loss = np.mean(eval_mini_batch_losses) # print epoch loss now = datetime.utcnow().strftime("%Y%m%d-%H:%M:%S") print('[LOG {}] eval-loss: {}'.format(str(now), str(eval_loss))) ``` Ok, great. The evaluation loss looks in-line with our training loss. Let's now inspect a few sample predictions to get an impression of the model quality. Therefore, we will again pick a random image of our evaluation dataset and retrieve its PyTorch tensor as well as the corresponding label: ``` # set (random) image id image_id = 777 # retrieve image exhibiting the image id cifar10_eval_image, cifar10_eval_label = cifar10_eval_data[image_id] ``` Let's now inspect the true class of the image we selected: ``` cifar10_classes[cifar10_eval_label] ``` Ok, the randomly selected image should contain a two (2). Let's inspect the image accordingly: ``` # define tensor to image transformation trans = torchvision.transforms.ToPILImage() # set image plot title plt.title('Example: {}, Label: {}'.format(str(image_id), str(cifar10_classes[cifar10_eval_label]))) # un-normalize cifar 10 image sample cifar10_eval_image_plot = cifar10_eval_image / 4.0 + 0.5 # plot cifar 10 image sample plt.imshow(trans(cifar10_eval_image_plot)) ``` Ok, let's compare the true label with the prediction of our model: ``` best_model(cifar10_eval_image.unsqueeze(0)) ``` We can even determine the likelihood of the most probable class: ``` cifar10_classes[torch.argmax(best_model(Variable(cifar10_eval_image.unsqueeze(0))), dim=1).item()] ``` Let's now obtain the predictions for all the CIFAR-10 images of the evaluation data: ``` predictions = torch.argmax(best_model(iter(cifar10_eval_dataloader).next()[0]), dim=1) ``` Furthermore, let's obtain the overall classification accuracy: ``` metrics.accuracy_score(cifar10_eval_data.targets, predictions.detach()) ``` Let's also inspect the confusion matrix of the model predictions to determine major sources of misclassification: ``` # determine classification matrix of the predicted and target classes mat = confusion_matrix(cifar10_eval_data.targets, predictions.detach()) # initialize the plot and define size plt.figure(figsize=(8, 8)) # plot corresponding confusion matrix sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False, cmap='YlOrRd_r', xticklabels=cifar10_classes, yticklabels=cifar10_classes) # set plot title plt.title('CIFAR-10 classification matrix') # set plot axis lables plt.xlabel('[true label]') plt.ylabel('[predicted label]'); ``` Ok, we can easily see that our current model confuses images of cats and dogs as well as images of trucks and cars quite often. This is again not surprising since those image categories exhibit a high semantic and therefore visual similarity. ## 7. Lab Summary: In this lab, a step by step introduction into **design, implementation, training and evaluation** of convolutional neural networks CNNs to classify tiny images of objects is presented. The code and exercises presented in this lab may serves as a starting point for developing more complex, deeper and more tailored CNNs.
github_jupyter
# Entropy Estimator - Histogram ``` import sys, os from pyprojroot import here # spyder up to find the root pysim_root = "/home/emmanuel/code/pysim" # append to path sys.path.append(str(pysim_root)) import numpy as np import jax import jax.numpy as jnp # MATPLOTLIB Settings import matplotlib as mpl import matplotlib.pyplot as plt %matplotlib inline %config InlineBackend.figure_format = 'retina' # SEABORN SETTINGS import seaborn as sns import corner sns.set_context(context="talk", font_scale=0.7) %load_ext autoreload %autoreload 2 # %load_ext lab_black ``` ## Demo Data - Gaussian ``` from pysim.data.information.studentt import generate_studentt_data from pysim.data.information.gaussian import generate_gaussian_data # parameters n_samples = 1_000 n_features = 1 df = 10 # create seed (trial number) # res_tuple = generate_studentt_data(n_samples=n_samples, n_features=n_features, df=df) res_tuple = generate_gaussian_data(n_samples=n_samples, n_features=n_features) H_true = res_tuple.H print(f"True Estimator: {H_true:.4f} nats") fig = corner.corner(res_tuple.X, bins=50) ``` ## Histogram ``` # import numpy as np # from scipy.stats import rv_histogram # # histogram parameters # nbins = "auto" # data = res_tuple.X.copy() # data_marginal = data[:, 0] # # get histogram # histogram = np.histogram(data_marginal, bins=nbins) # # create histogram random variable # hist_dist = rv_histogram(histogram) ``` Many times we call for an empirical estimator: $$ \hat{H}_{MLE}(p_N) = - \sum_{i}^{m}p_{N,i} \log p_{N,i} $$ where $\hat{p}_k=\frac{h_k}{n}$ are the maximum likelihood estimates of each probability $p_k$ and $h_k=\sum_{i}^n\boldsymbol{1}_{\{X_i=k\}}$ **Resources**: * Antos & Kontoyiannis (2001) - "plug-in" estimator * Strong et. al. (1998) - "naive" estimator Fortunately, the scipy method already does this for us. #### From Scratch #### 1. Histogram ``` # histogram parameters nbins = "auto" data = res_tuple.X.copy() # get hist counts and bin edges data_min = data.min() #- 0.1 data_max = data.max() #+ 0.1 n_samples = data.shape[0] bins = int(jnp.sqrt(n_samples)) counts, bin_edges = jnp.histogram(data, bins=bins, range=(data_min, data_max), density=False) ``` **Note**: It's always good practice to leave a bit of room for the boundaries. #### 2. Get Bin Centers In the numpy implementation, we are only given the `bin_edges` and we need to `bin_centers`. It's a minor thing but it's important in order to get the width between each of the ``` # get the bin centers bin_centers = jnp.mean(jnp.vstack((bin_edges[0:-1], bin_edges[1:])), axis=0) delta = bin_centers[3] - bin_centers[2] # visualize fig, ax = plt.subplots() ax.hist(data, bins=10, density=True) ax.scatter(bin_centers, 0.01 * np.ones_like(bin_centers), marker="*", s=100, zorder=4, color='red', label="Bin Centers") ax.scatter(bin_edges, np.zeros_like(bin_edges), marker="|", s=500, zorder=4, color='black', label="Bin Edges") plt.legend() plt.show() ``` #### 4. Get Normalized Density ``` # get the normalized density pk = 1.0 * jnp.array(counts) / jnp.sum(counts) fig, ax = plt.subplots(ncols=2, figsize=(10, 3)) ax[0].hist(data, bins=10, density=True) ax[0].legend(["Density"]) ax[1].hist(data, bins=10, density=False) ax[1].legend(["Counts"]) plt.show() ``` #### 5. Calculate Entropy given the probability ``` # manually H = 0.0 for ip_k in pk: if ip_k > 0.0: H += - ip_k * jnp.log(ip_k) H += jnp.log(delta) # H += np.log(delta) print(f"MLE Estimator: {H:.4f} nats") print(f"True Estimator: {H_true:.4f} nats") # refactored from jax.scipy.special import entr H_vec = entr(pk) H_vec = jnp.sum(H_vec) H_vec += jnp.log(delta) np.testing.assert_almost_equal(H, H_vec) ``` #### Refactor - Scipy ``` from scipy.stats import rv_histogram histogram = np.histogram(data, bins=bins, range=(data_min, data_max), density=False) hist_dist = rv_histogram(histogram) H_mle = hist_dist.entropy() np.testing.assert_almost_equal(H_mle, H_vec, decimal=6) print(f"Scipy Estimator: {H_mle:.4f} nats") print(f"My Estimator: {H:.4f} nats") print(f"True Estimator: {H_true:.4f} nats") ``` It's known in the community that this will under estimate the probability distribution. **Resources**: * [Blog Post](http://www.nowozin.net/sebastian/blog/estimating-discrete-entropy-part-1.html) - Sebasian Nowozin (2015) ### Corrections #### Miller-Maddow $$ \hat{H}_{MM}(p_N) = \hat{H}_{MLE}(p_N) + \frac{\hat{m}-1}{2N} $$ where $\hat{m}$ are the number of bins with non-zero $p_N$ probability. ``` # get histogram counts hist_counts = histogram[0] total_counts = np.sum(hist_counts) total_nonzero_counts = np.sum(hist_counts > 0) N = data.shape[0] # get correction mm_correction = 0.5 * (np.sum(hist_counts > 0) - 1) / np.sum(hist_counts) print(mm_correction) total_counts, total_nonzero_counts H_mm = H + mm_correction print(f"My Estimator:\n{H:.4f} nats") print(f"Miller-Maddow Estimator:\n{H_mm:.4f} nats") print(f"True Estimator:\n{H_true:.4f} nats") ``` ### Custom Function ``` from chex import Array from typing import Callable, Tuple, Union def get_domain_extension( data: Array, extension: Union[float, int], ) -> Tuple[float, float]: """Gets the extension for the support Parameters ---------- data : Array the input data to get max and minimum extension : Union[float, int] the extension Returns ------- lb : float the new extended lower bound for the data ub : float the new extended upper bound for the data """ # case of int, convert to float if isinstance(extension, int): extension = float(extension / 100) # get the domain domain = jnp.abs(jnp.max(data) - jnp.min(data)) # extend the domain domain_ext = extension * domain # get the extended domain lb = jnp.min(data) - domain_ext up = jnp.max(data) + domain_ext return lb, up def histogram_jax_entropy(data: Array, bin_est_f: Callable, extension: Union[float, int]=10): # get extension lb, ub = get_domain_extension(data, extension) # histogram bin width bin_width = bin_est_f(data) # histogram bins nbins = get_num_bins(data, bin_width, lb, ub) # histogram counts, bin_edges = jnp.histogram(data, bins=nbins, density=False) # get the normalized density pk = 1.0 * jnp.array(counts) / jnp.sum(counts) # get delta delta = bin_edges[3] - bin_edges[2] # calculate entropy H = entr(pk) H = jnp.sum(H) H += jnp.log(delta) return H from chex import Array import math def hist_bin_scott(x: Array) -> Array: """Optimal histogram bin width based on scotts method. Uses the 'normal reference rule' which assumes the data is Gaussian Parameters ---------- x : Array The input array, (n_samples) Returns ------- bin_width : Array The optimal bin width, () """ n_samples = x.shape[0] # print(3.5 * np.std(x) / (n_samples ** (1/3))) return (24.0 * math.pi ** 0.5 / n_samples) ** (1.0 / 3.0) * jnp.std(x) def get_num_bins(data, bin_width, data_min, data_max): nbins = jnp.ceil((data_max - data_min) / bin_width) nbins = jnp.maximum(1, nbins).astype(jnp.int32) bins = data_min + bin_width * jnp.arange(0, nbins+1, 1) return nbins import jax.numpy as jnp from jax.scipy.special import entr def histogram_entropy(data, bins=None): """Estimate univariate entropy with a histogram Notes ----- * uses scott's method * entropy is in nats """ # histogram bin width (scotts) bin_width = 3.5 * jnp.std(data) / (data.shape[0] ** (1/3)) if bins is None: # histogram bins nbins = jnp.ceil((data.max() - data.min()) / bin_width) nbins = nbins.astype(jnp.int32) # get bins with linspace bins = jnp.linspace(data.min(), data.max(), nbins) # bins with arange (similar to astropy) bins = data_min + bin_width * jnp.arange(0, nbins+1, 1) # histogram counts, bin_edges = jnp.histogram(data, bins=bins, density=False) # normalized the bin counts for a density pk = 1.0 * jnp.array(counts) / jnp.sum(counts) # calculate entropy H = entr(pk) H = jnp.sum(H) # add correction for continuous case delta = bin_edges[3] - bin_edges[2] H += jnp.log(delta) return H import numpy as np import jax data = np.random.randn(1_000) data = jnp.array(data, dtype=jnp.float32) histogram_entropy(jnp.array(data).ravel(), 10) f = jax.jit(jax.partial(histogram_entropy, bins=None)) f(data.ravel()) ``` ## Bin Width ``` def get_bins(data, bin_width, data_min, data_max): nbins = jnp.ceil((data_max - data_min) / bin_width) nbins = jnp.maximum(1, nbins) bins = jnp.linspace(data_min, data_max, nbins+1) # bins = data_min + bin_width * jnp.arange(start=0.0, stop=nbins + 1) return bins nbins = jnp.ceil((data_max - data_min) / bin_width) nbins = jnp.maximum(1, nbins) print(nbins) # data_min + bin_wijnp.arange(start=0.0, stop=nbins+1) def get_histogram_entropy(data, bins): histogram = jnp.histogram(data, bins=bins,density=False) hist_dist = rv_histogram(histogram) H_mle = hist_dist.entropy() print(f"MLE Estimator: {H_mle:.4f} nats") bins = get_bins(data, 0.5, data_min, data_max) get_histogram_entropy(data, bins) ``` ### Scotts $$ \Delta_b = 3.5\sigma n^{-\frac{1}{3}} $$ where $\sigma$ is the standard deviation and $n$ is the number of samples. ``` from chex import Array import math def hist_bin_scott(x: Array) -> Array: """Optimal histogram bin width based on scotts method. Uses the 'normal reference rule' which assumes the data is Gaussian Parameters ---------- x : Array The input array, (n_samples) Returns ------- bin_width : Array The optimal bin width, () """ n_samples = x.shape[0] # print(3.5 * np.std(x) / (n_samples ** (1/3))) return (24.0 * math.pi ** 0.5 / n_samples) ** (1.0 / 3.0) * jnp.std(x) bin_width = hist_bin_scott(data) bins = get_bins(data, bin_width, data_min, data_max) get_histogram_entropy(data, bins) ``` ### Freedman ``` def hist_bin_freedman(x: Array) -> Array: """Optimal histogram bin width based on scotts method. Uses the 'normal reference rule' which assumes the data is Gaussian Parameters ---------- x : Array The input array, (n_samples) Returns ------- bin_width : Array The optimal bin width, () """ n_samples = x.shape[0] # print(3.5 * np.std(x) / (n_samples ** (1/3))) return (24.0 * math.pi ** 0.5 / n_samples) ** (1.0 / 3.0) * jnp.std(x) ``` ### Silverman ### Gaussian #### ### Volume ``` def volume_unit_ball(d_dimensions: int, norm=2) -> float: """Volume of the unit l_p-ball in d-dimensional Parameters ---------- d_dimensions : int Number of dimensions to estimate the volume norm : int, default=2 The type of ball to get the volume. * 2 : euclidean distance * 1 : manhattan distance * 0 : chebyshev distance Returns ------- vol : float The volume of the d-dimensional unit ball References ---------- [1]: Demystifying Fixed k-Nearest Neighbor Information Estimators - Gao et al (2016) """ # get ball if norm == 0: return 1.0 elif norm == 1: raise NotImplementedError() elif norm == 2: b = 2.0 else: raise ValueError(f"Unrecognized norm: {norm}") numerator = gamma(1.0 + 1.0 / b) ** d_dimensions denomenator = gamma(1.0 + d_dimensions / b) vol = 2 ** d_dimensions * numerator / denomenator return vol ```
github_jupyter
# Stack Semantics in Trax: Ungraded Lab In this ungraded lab, we will explain the stack semantics in Trax. This will help in understanding how to use layers like `Select` and `Residual` which operates on elements in the stack. If you've taken a computer science class before, you will recall that a stack is a data structure that follows the Last In, First Out (LIFO) principle. That is, whatever is the latest element that is pushed into the stack will also be the first one to be popped out. If you're not yet familiar with stacks, then you may find this [short tutorial](https://www.tutorialspoint.com/python_data_structure/python_stack.htm) useful. In a nutshell, all you really need to remember is it puts elements one on top of the other. You should be aware of what is on top of the stack to know which element you will be popping. You will see this in the discussions below. Let's get started! ## Imports ``` import numpy as np # regular ol' numpy from trax import layers as tl # core building block from trax import shapes # data signatures: dimensionality and type from trax import fastmath # uses jax, offers numpy on steroids ``` ## 1. The tl.Serial Combinator is Stack Oriented. To understand how stack-orientation works in [Trax](https://trax-ml.readthedocs.io/en/latest/), most times one will be using the `Serial` layer. We will define two simple [Function layers](https://trax-ml.readthedocs.io/en/latest/notebooks/layers_intro.html?highlight=fn#With-the-Fn-layer-creating-function.): 1) Addition and 2) Multiplication. Suppose we want to make the simple calculation (3 + 4) * 15 + 3. `Serial` will perform the calculations in the following manner `3` `4` `add` `15` `mul` `3` `add`. The steps of the calculation are shown in the table below. The first column shows the operations made on the stack and the second column the output of those operations. Moreover, the rightmost element in the second column represents the top of the stack (e.g. in the second row, `Push(3)` pushes `3 ` on top of the stack and `4` is now under it). <div style="text-align:center" width="50px"><img src="Stack1.png" /></div> After processing all the stack contains 108 which is the answer to our simple computation. From this, the following can be concluded: a stack-based layer has only one way to handle data, by taking one piece of data from atop the stack, termed popping, and putting data back atop the stack, termed pushing. Any expression that can be written conventionally, can be written in this form and thus be amenable to being interpreted by a stack-oriented layer like `Serial`. ### Coding the example in the table: **Defining addition** ``` def Addition(): layer_name = "Addition" # don't forget to give your custom layer a name to identify # Custom function for the custom layer def func(x, y): return x + y return tl.Fn(layer_name, func) # Test it add = Addition() # Inspect properties print("-- Properties --") print("name :", add.name) print("expected inputs :", add.n_in) print("promised outputs :", add.n_out, "\n") # Inputs x = np.array([3]) y = np.array([4]) print("-- Inputs --") print("x :", x, "\n") print("y :", y, "\n") # Outputs z = add((x, y)) print("-- Outputs --") print("z :", z) ``` **Defining multiplication** ``` def Multiplication(): layer_name = ( "Multiplication" # don't forget to give your custom layer a name to identify ) # Custom function for the custom layer def func(x, y): return x * y return tl.Fn(layer_name, func) # Test it mul = Multiplication() # Inspect properties print("-- Properties --") print("name :", mul.name) print("expected inputs :", mul.n_in) print("promised outputs :", mul.n_out, "\n") # Inputs x = np.array([7]) y = np.array([15]) print("-- Inputs --") print("x :", x, "\n") print("y :", y, "\n") # Outputs z = mul((x, y)) print("-- Outputs --") print("z :", z) ``` **Implementing the computations using Serial combinator.** ``` # Serial combinator serial = tl.Serial( Addition(), Multiplication(), Addition() # add 3 + 4 # multiply result by 15 ) # Initialization x = (np.array([3]), np.array([4]), np.array([15]), np.array([3])) # input serial.init(shapes.signature(x)) # initializing serial instance print("-- Serial Model --") print(serial, "\n") print("-- Properties --") print("name :", serial.name) print("sublayers :", serial.sublayers) print("expected inputs :", serial.n_in) print("promised outputs :", serial.n_out, "\n") # Inputs print("-- Inputs --") print("x :", x, "\n") # Outputs y = serial(x) print("-- Outputs --") print("y :", y) ``` The example with the two simple adition and multiplication functions that where coded together with the serial combinator show how stack semantics work in `Trax`. ## 2. The tl.Select combinator in the context of the Serial combinator Having understood how stack semantics work in `Trax`, we will demonstrate how the [tl.Select](https://trax-ml.readthedocs.io/en/latest/trax.layers.html?highlight=select#trax.layers.combinators.Select) combinator works. ### First example of tl.Select Suppose we want to make the simple calculation (3 + 4) * 3 + 4. We can use `Select` to perform the calculations in the following manner: 1. `4` 2. `3` 3. `tl.Select([0,1,0,1])` 4. `add` 5. `mul` 6. `add`. The `tl.Select` requires a list or tuple of 0-based indices to select elements relative to the top of the stack. For our example, the top of the stack is `3` (which is at index 0) then `4` (index 1) and we Select to add in an ordered manner to the top of the stack which after the command is `3` `4` `3` `4`. The steps of the calculation for our example are shown in the table below. As in the previous table each column shows the contents of the stack and the outputs after the operations are carried out. <div style="text-align:center" width="20px"><img src="Stack2.png" /></div> After processing all the inputs the stack contains 25 which is the answer we get above. ``` serial = tl.Serial(tl.Select([0, 1, 0, 1]), Addition(), Multiplication(), Addition()) # Initialization x = (np.array([3]), np.array([4])) # input serial.init(shapes.signature(x)) # initializing serial instance print("-- Serial Model --") print(serial, "\n") print("-- Properties --") print("name :", serial.name) print("sublayers :", serial.sublayers) print("expected inputs :", serial.n_in) print("promised outputs :", serial.n_out, "\n") # Inputs print("-- Inputs --") print("x :", x, "\n") # Outputs y = serial(x) print("-- Outputs --") print("y :", y) ``` ### Second example of tl.Select Suppose we want to make the simple calculation (3 + 4) * 4. We can use `Select` to perform the calculations in the following manner: 1. `4` 2. `3` 3. `tl.Select([0,1,0,1])` 4. `add` 5. `tl.Select([0], n_in=2)` 6. `mul` The example is a bit contrived but it demonstrates the flexibility of the command. The second `tl.Select` pops two elements (specified in n_in) from the stack starting from index 0 (i.e. top of the stack). This means that `7` and `3 ` will be popped out because `n_in = 2`) but only `7` is placed back on top because it only selects `[0]`. As in the previous table each column shows the contents of the stack and the outputs after the operations are carried out. <div style="text-align:center" width="20px"><img src="Stack3.png" /></div> After processing all the inputs the stack contains 28 which is the answer we get above. ``` serial = tl.Serial( tl.Select([0, 1, 0, 1]), Addition(), tl.Select([0], n_in=2), Multiplication() ) #如果没有 tl.Select([0], n_in=2), 结果是y : (array([21]), array([4])) """ tl.Select n_in – Number of input elements to pop from the stack, and replace with those specified by indices. If not specified, its value will be calculated as max(indices) + 1. tl.Select([0], n_in=2) 表示从stack顶部拿出2个,把其中index 为0的放回stack, 就是7 tl.Select([1], n_in=2) 放回的是3 tl.Select([2], n_in=2) Error, 因为取出2个, 最大index为1 tl.Select([0,0], n_in=2) 放回的是7,7 """ # Initialization x = (np.array([3]), np.array([4])) # input serial.init(shapes.signature(x)) # initializing serial instance print("-- Serial Model --") print(serial, "\n") print("-- Properties --") print("name :", serial.name) print("sublayers :", serial.sublayers) print("expected inputs :", serial.n_in) print("promised outputs :", serial.n_out, "\n") # Inputs print("-- Inputs --") print("x :", x, "\n") # Outputs y = serial(x) print("-- Outputs --") print("y :", y) ``` **In summary, what Select does in this example is a copy of the inputs in order to be used further along in the stack of operations.** ## 3. The tl.Residual combinator in the context of the Serial combinator ### tl.Residual [Residual networks](https://arxiv.org/pdf/1512.03385.pdf) are frequently used to make deep models easier to train and you will be using it in the assignment as well. Trax already has a built in layer for this. The [Residual layer](https://trax-ml.readthedocs.io/en/latest/trax.layers.html?highlight=residual#trax.layers.combinators.Residual) computes the element-wise *sum* of the *stack-top* input with the output of the layer series. Let's first see how it is used in the code below: 都是把x1 加到operation 结果后面 - `tl.Residual(Addition())` : x1 + (x1 + x2) - `tl.Residual(Multiplication())`: x1 + (x1 * x2) ``` # Let's define a Serial network serial = tl.Serial( # Practice using Select again by duplicating the first two inputs tl.Select([0, 1, 0, 1]), # Place a Residual layer that skips over the Fn: Addition() layer tl.Residual(Addition()) ) print("-- Serial Model --") print(serial, "\n") print("-- Properties --") print("name :", serial.name) print("expected inputs :", serial.n_in) print("promised outputs :", serial.n_out, "\n") ``` Here, we use the Serial combinator to define our model. The inputs first goes through a `Select` layer, followed by a `Residual` layer which passes the `Fn: Addition()` layer as an argument. What this means is the `Residual` layer will take the stack top input at that point and add it to the output of the `Fn: Addition()` layer. You can picture it like the diagram the below, where `x1` and `x2` are the inputs to the model: <img src="residual_example_add.png" width="400"/></div> Now, let's try running our model with some sample inputs and see the result: ``` # Inputs x1 = np.array([3]) x2 = np.array([4]) print("-- Inputs --") print("(x1, x2) :", (x1, x2), "\n") # Outputs y = serial((x1, x2)) print("-- Outputs --") print("y :", y) ``` As you can see, the `Residual` layer remembers the stack top input (i.e. `3`) and adds it to the result of the `Fn: Addition()` layer (i.e. `3 + 4 = 7`). The output of `Residual(Addition()` is then `3 + 7 = 10` and is pushed onto the stack. On a different note, you'll notice that the `Select` layer has 4 outputs but the `Fn: Addition()` layer only pops 2 inputs from the stack. This means the duplicate inputs (i.e. the 2 rightmost arrows of the `Select` outputs in the figure above) remain in the stack. This is why you still see it in the output of our simple serial network (i.e. `array([3]), array([4])`). This is useful if you want to use these duplicate inputs in another layer further down the network. ### Modifying the network To strengthen your understanding, you can modify the network above and examine the outputs you get. For example, you can pass the `Fn: Multiplication()` layer instead in the `Residual` block: ``` # model definition serial = tl.Serial( tl.Select([0, 1, 0, 1]), tl.Residual(Multiplication()) ) print("-- Serial Model --") print(serial, "\n") print("-- Properties --") print("name :", serial.name) print("expected inputs :", serial.n_in) print("promised outputs :", serial.n_out, "\n") ``` This means you'll have a different output that will be added to the stack top input saved by the Residual block. The diagram becomes like this: <img src="residual_example_multiply.png" width="400"/></div> And you'll get `3 + (3 * 4) = 15` as output of the `Residual` block: ``` # Inputs x1 = np.array([3]) x2 = np.array([4]) print("-- Inputs --") print("(x1, x2) :", (x1, x2), "\n") # Outputs y = serial((x1, x2)) print("-- Outputs --") print("y :", y) ``` #### Congratulations! In this lab, we described how stack semantics work with Trax layers such as Select and Residual. You will be using these in the assignment and you can go back to this lab in case you want to review its usage.
github_jupyter
# Linear Regression https://github.com/yunjey/pytorch-tutorial ## Artificial dataset ``` import torch import torch.nn as nn import numpy as np import matplotlib.pyplot as plt # hyper parameters input_size = 1 output_size = 1 num_epochs = 100 learning_rate = 0.001 # toy dataset # 15 samples, 1 features x_train = np.array([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167, 7.042, 10.791, 5.313, 7.997, 3.1], dtype=np.float32) y_train = np.array([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53, 1.221, 2.827, 3.465, 1.65, 2.904, 1.3], dtype=np.float32) x_train = x_train.reshape(15, 1) y_train = y_train.reshape(15, 1) # linear regression model class LinearRegression(nn.Module): def __init__(self, input_size, output_size): super(LinearRegression, self).__init__() self.linear = nn.Linear(input_size, output_size) def forward(self, x): out = self.linear(x) return out model = LinearRegression(input_size, output_size) # loss and optimizer criterion = nn.MSELoss() optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate) # train the model for epoch in range(num_epochs): inputs = torch.from_numpy(x_train) targets = torch.from_numpy(y_train) optimizer.zero_grad() outputs = model(inputs) loss = criterion(outputs, targets) loss.backward() optimizer.step() if (epoch + 1) % 10 == 0: print('Epoch [%d/%d], Loss: %.4f' % (epoch + 1, num_epochs, loss.item())) torch.save(model.state_dict(), 'model.pkl') # plot the graph predicted = model(torch.from_numpy(x_train)).detach().numpy() plt.plot(x_train, y_train, 'ro', label='Original data') plt.plot(x_train, predicted, label='Fitted line') plt.legend() plt.show() ``` ## Boston house price dataset - https://machinelearningmastery.com/regression-tutorial-keras-deep-learning-library-python/ - https://medium.com/@haydar_ai/learning-data-science-day-9-linear-regression-on-boston-housing-dataset-cd62a80775ef ``` import torch import torch.nn as nn import numpy as np import matplotlib.pyplot as plt from sklearn.datasets import load_boston from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler import matplotlib.pyplot as plt %matplotlib inline # hyper parameters input_size = 13 output_size = 1 num_epochs = 5000 learning_rate = 0.01 boston = load_boston() X = boston.data y = boston.target X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.33, random_state = 5) # データの標準化 scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.transform(X_test) y_train = np.expand_dims(y_train, axis=1) y_test = np.expand_dims(y_test, axis=1) # linear regression model class LinearRegression(nn.Module): def __init__(self, input_size, output_size): super(LinearRegression, self).__init__() self.linear = nn.Linear(input_size, output_size) def forward(self, x): out = self.linear(x) return out model = LinearRegression(input_size, output_size) # loss and optimizer criterion = nn.MSELoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) def train(X_train, y_train): inputs = torch.from_numpy(X_train).float() targets = torch.from_numpy(y_train).float() optimizer.zero_grad() outputs = model(inputs) loss = criterion(outputs, targets) loss.backward() optimizer.step() return loss.item() def valid(X_test, y_test): inputs = torch.from_numpy(X_test).float() targets = torch.from_numpy(y_test).float() outputs = model(inputs) val_loss = criterion(outputs, targets) return val_loss.item() # train the model loss_list = [] val_loss_list = [] for epoch in range(num_epochs): # data shuffle perm = np.arange(X_train.shape[0]) np.random.shuffle(perm) X_train = X_train[perm] y_train = y_train[perm] loss = train(X_train, y_train) val_loss = valid(X_test, y_test) if epoch % 200 == 0: print('epoch %d, loss: %.4f val_loss: %.4f' % (epoch, loss, val_loss)) loss_list.append(loss) val_loss_list.append(val_loss) # plot learning curve plt.plot(range(num_epochs), loss_list, 'r-', label='train_loss') plt.plot(range(num_epochs), val_loss_list, 'b-', label='val_loss') plt.legend() ```
github_jupyter
# Experiment -I --- # Stratosphere's Benign Dataset vs Our Benign Dataset Generated --- ### 1. Imports ``` import warnings warnings.filterwarnings('ignore') import numpy as np #operaciones matriciales y con vectores import pandas as pd #tratamiento de datos import random import matplotlib.pyplot as plt #gráficos import seaborn as sns import joblib from sklearn import preprocessing ``` --- --- ### 2. Load Own Benign Dataset ``` b_research = pd.read_csv(r"C:\Users\Usuario\Documents\Github\PDG\PDG-2\Datasets\Time Window\Originals\Benigns\BenignTimeWindowsOurLab.csv", delimiter = ",") b_research.head() b_research.loc[b_research.Type == "Benign", ['Type']] = 'Own' b_research.head(2) ``` --- ### 3. Load Stratosphere Dataset ``` b_stratosphere = pd.read_csv(r"C:\Users\Usuario\Documents\Github\PDG\PDG-2\Datasets\Time Window\Originals\Benigns\BeningTimeWindowsStratosphere.csv", delimiter = ",") b_stratosphere.head() b_stratosphere.loc[b_stratosphere.Type == "Benign", ['Type']] = 'Stratosphere' b_stratosphere.head(2) ``` --- ### 4. Join ``` b_research = b_research.sample(n=b_stratosphere.shape[0]) b_research.shape frames = [b_research,b_stratosphere] full_dataset = pd.concat(frames) full_dataset.head() full_dataset.shape full_dataset = full_dataset.drop(['Name'], axis=1) full_dataset = full_dataset[['first_sp','Avg_bps','p1_ib','duration','number_dp','Bytes' ,'number_sp','First_Protocol','p2_ib' ,'first_dp','p3_ib','Netflows','p3_d','Second_Protocol','Type']] full_dataset.columns full_dataset['First_Protocol'] = full_dataset['First_Protocol'].replace(np.nan,"None",regex=True) full_dataset['Second_Protocol'] = full_dataset['Second_Protocol'].replace(np.nan,"None",regex=True) full_dataset.info() ``` --- ### 5. Dictionary Let's instantiate the **Encoder**: ``` le = joblib.load("./Tools/label_encoder_first_protocol_exp1.encoder") ``` --- In the column of **First_Protocol**: ``` full_dataset.First_Protocol.unique() first_protocol_column_codified = le.transform(full_dataset.First_Protocol) le.classes_ le_name_mapping_first_protocol = dict(zip(le.classes_, le.transform(le.classes_))) print(le_name_mapping_first_protocol) full_dataset.First_Protocol = le.transform(full_dataset.First_Protocol) ``` --- In the column of **Second_Protocol**: ``` le = joblib.load("./Tools/label_encoder_second_protocol_exp1.encoder") full_dataset.Second_Protocol.unique() second_protocol_column_codified = le.transform(full_dataset.Second_Protocol) le.classes_ le_name_mapping_second_protocol = dict(zip(le.classes_, le.transform(le.classes_))) print(le_name_mapping_second_protocol) full_dataset.Second_Protocol = le.transform(full_dataset.Second_Protocol) ``` --- --- In the column of **Type**: ``` le = preprocessing.LabelEncoder() full_dataset.Type.unique() Type_protocol_column_codified = le.fit_transform(full_dataset.Type) le.classes_ le_name_mapping_type = dict(zip(le.classes_, le.transform(le.classes_))) print(le_name_mapping_type) full_dataset.Type = le.transform(full_dataset.Type) ``` --- ### 6. Plots ``` invalid = ["Name","First_Protocol","Second_Protocol","Third_Protocol","Type"] for column in full_dataset.columns: if column not in invalid: fig, (ax1) = plt.subplots(ncols=1, figsize =(20,10)) ax1.set_title("Distribution in column %s per Type" %column) sns.kdeplot(full_dataset[full_dataset.Type == 0][column], color="green", shade=True) sns.kdeplot(full_dataset[full_dataset.Type == 1][column], color="blue", shade=True) plt.legend(['Own', 'Stratosphere']) ``` --- ``` column = "First_Protocol" temp = full_dataset[[column,"Type"]] le_name_mapping_type fig, (ax1) = plt.subplots(ncols=1, figsize =(20,20)) ax1.set_title("Distribution in column %s" %column) sns.countplot(x=column, hue="Type",data=full_dataset[[column,"Type"]]) le_name_mapping_first_protocol ``` --- --- ``` column = "Second_Protocol" temp = full_dataset[[column,"Type"]] le_name_mapping_type fig, (ax1) = plt.subplots(ncols=1, figsize =(20,20)) ax1.set_title("Distribution in column %s" %column) sns.countplot(x=column, hue="Type",data=full_dataset[[column,"Type"]]) le_name_mapping_second_protocol ``` ---
github_jupyter
# Challenge 2 ## Object Tracker By Color Build Object Tracker By Color Function will ask the user about the object to track User feed string input e.g. "Blue Shirt" Function would use the first word to choose the corresponding lower and upper limit in range method Make the function ready for at least 3 colors ( Red , Green , Blue ) ``` import cv2 import numpy as np def lower_upper(color_name): # define range of blue color in HSV lower_blue = np.array([105,50,50]) upper_blue = np.array([130,255,255]) # define range of green color in HSV lower_green = np.array([45,50,50]) upper_green = np.array([75,255,255]) # define range of red color in HSV lower_red = np.array([0,50,50]) upper_red = np.array([10,255,255]) # Color Testing ct = color_name[0:3] if ct.lower()=='blu': lower = lower_blue upper = upper_blue elif ct.lower()=='gre': lower = lower_green upper = upper_green elif ct.lower()=='red': lower = lower_red upper = upper_red else: # User Input is not correct lower = np.array([0,0,0]) upper = np.array([0,0,0]) return lower, upper, ct.lower(); def object_tracker(): while(1): user_color = input('Kindly write object name to track it: ') lower, upper, ct = lower_upper(user_color) if lower[1] != 0: print('Lower Range: (' + str(lower[0]) + ', ' + str(lower[1]) + ', ' + str(lower[2]) + ')') print('Upper Range: (' + str(upper[0]) + ', ' + str(upper[1]) + ', ' + str(upper[2]) + ')') break while(1): user_video = input('Write your selection (Video / Live): ') if user_video.lower() == 'video': # Select one of the following 2 code lines for: (HDD Video ,or Online Video) # Video Stored in your HDD cap = cv2.VideoCapture('Smurfs.mp4') # Online Video #cap = cv2.VideoCapture('http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4') break elif user_video.lower() == 'live': cap = cv2.VideoCapture(0) break while(1): x, frame = cap.read() hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV) # Threshold the HSV image to get only blue colors if ct == 'red': lower_2 = np.array([170,50,50]) upper_2 = np.array([179,255,255]) mask_1 = cv2.inRange(hsv, lower, upper) mask_2 = cv2.inRange(hsv, lower_2, upper_2) mask = mask_1 + mask_2 else: mask = cv2.inRange(hsv, lower, upper) res = cv2.bitwise_and(frame,frame, mask= mask) cv2.imshow('frame',frame) cv2.imshow('mask',mask) cv2.imshow('res',res) if cv2.waitKey(10) & 0xFF == ord('q'): break cv2.destroyAllWindows() cap.release() # Call Main Function object_tracker() ``` # Helping & Testing Code: ``` # To Determine Color Value in HSV blue = np.uint8([[[255,0,0]]]) hsv_blue = cv2.cvtColor(blue, cv2.COLOR_BGR2HSV) print('HSV of Blue: ' + str(hsv_blue)) green = np.uint8([[[0,255,0 ]]]) hsv_green = cv2.cvtColor(green, cv2.COLOR_BGR2HSV) print('HSV of Green: ' + str(hsv_green) ) red = np.uint8([[[0,0,255]]]) hsv_red = cv2.cvtColor(red, cv2.COLOR_BGR2HSV) print('HSV of Red: ' + str(hsv_red)) ``` ### H&deg; Color Range in HSV Color Space: The below image shows the H&deg; color range in HSV Color Space.\ Note that: the image show 360&deg; but OpenCV uses 180&deg;, so you should divide any color degree by 2.\ For Example: + If you need green color (120&deg; from image), + you'll divide it by 2, + and write 60&deg; in OpenCV H value of HSV range. ![alt text](hsvColorRange.png "HSV Color Range") # References: + Electro Pi Computer Vision offline Course at: https://electro-pi.com/ + Free online mp4 Videos: https://gist.github.com/jsturgis/3b19447b304616f18657
github_jupyter
# Navigating the Notebook - Instructor Script To familiarise participants with the notebook environment, build up a simple notebook from scratch demonstrating the following operations: - Insert & delete cells - Change cell type (& know different cell types) - Run a single cell from taskbar & keyboard shortcut (shift + Enter) - Run multiples cells, all cells - Re-order cells - Split & merge cells - Stop a cell *Let's get started...* Create a new notebook... The notebook is built up from separate editable areas, or *cells*. A new notebook contains a single *code* cell. Add a line of code and execute it by: - *clicking the run button*, or - click in the cell, and press `shift-return` ``` print('hello world') ``` ## Navigating and Selecting Cells To select a cell, click on it. The selected cell will be surrounded by a box with the left hand side highlighted. Move the selection focus to the cell above/below using the keyboard up/down arrow keys. Additionally select adjacent cells using `SHIFT-UP ARROW` or `SHIFT-DOWN ARROW`. ## Managing Cells - Add, Delete, Reorder __Add__ a new cell to the notebook by: - click the + button on the toolbar - `Insert -> Insert Cell Above` or `ESC-A` - `Insert -> Insert Cell Below` or `ESC-B` __Delete__ a cell by selecting it and: - click the scissors button on the toolbar - `Edit -> Delete cells` or `ESC-X` __Undelete__ the last deleted cell: - `Edit -> Undo Delete cells` or `ESC-Z` Each cell has a __cell history__ associated with it. Use `CMD-Z` to step back through previous cell contents. __Reorder__ cells by: - moving them up and down the notebook using the up and down arrows on the toolbar - `Edit -> Move Cell Up` or `Edit -> Move Cell Down` - cutting and pasting them: - `Edit - >Cut` or `Edit->Paste Cells Above` or `Edit->Paste Cells Below` - on the toolbar, `Cut selected cells` then `Paste selected cells` You can also copy selected cells from the toolbar, `Edit -> Copy Cells` or `ESC-C`. ## Managing Cells - Merging and Splitting If a cell is overlong, you can split it at the cursor point: `Edit -> Split Cell` You can merge two cells that are next two each other. Select one cell and then `Edit -> Merge Cell Above` or `Edit -> Merge Cell Below`. The cell type (markdown, code, etc) of the merged cell will be the same as the originally selected cell. ## Cell outputs If the last line of code produces an output, the output will be embedded in the notebook below the code cell: ``` a=1 b=2 a+b ``` ## We Can Run a Cell to Multiple Times Each time the cell us run, the state of the underlying python process is updated, even if the visual display of other cells in the notebook is not. ``` print(a) #Run this cell multiple times a=a+1 a ``` ## Code Libraries can be imported via a Code Cell ``` import numpy as np np.pi ``` # Clearing Cell Outputs Clear the output of a selected cell: `Cell -> Current Output -> Clear` Clear the output of all cells in the notebook: `Cell -> All Output -> Clear` Note the the state of the underlying kernel __will not__ be affected - only the rendered display in the notebook. # Expanding the narrative - Markdown Cells As well as code cells, we can have cells that contain narrative text. Change the cell type using the drop down list in the toolbar, or by using the `ESC-M` keyboard shortcut. To "open" or select a markdown cell for editing, double click the cell. View the rendered markdown by running the cell: - hit the play button on the toolbar - use the `SHIFT-RETURN` keyboard shortcut. # Markdown cells can contain formatted headings Prefix a line of text in a markdown cell by one or more # signs, *followed by a space*, to specify the level of the heading required. ```` # Heading 1 ## Heading 2 ... ###### Heading 6 ```` ## Markdown cells can contain formatted text inline Prefix a heading with one or more # signs, followed by a space. *Emphasise* a word or phrase by wrapping it (no spaces!) with a single * on either side. __Strongly emphasise__ a word or phrase by wrapping it (no spaces!) with two underscores, __, either side. ## Markdown cells can contain lists Create an unnumbered list by prefixing each list item with a -, followed by a space, with each list item on a separate line: - list item 1 - list item 2 Create a number list by prefixing each list item with a number, followed by a ., followed by a space, with each list item on a separate line: 1. numbered item 1 2. numbered item 2 Add sublists by indenting sublisted items with a space: - list item - sublist item ## Markdown cells can contain embedded links and images Add a link using the following pattern: `[link text](URL_or_relative_path)` For example, `[Data Carpentry](https://datacarpentry.org)` gives the clickable link: [Data Carpentry](https://datacarpentry.org). Add an image using the following pattern: `[image alt text](URL_or_path)` For example, `[Jupyter logo](./jupyter-logo.png)` embeds the following image: ![Jupyter logo](./jupyter-logo.png) ## Markdown cells can contain inline styled (non-executable) code Style inline code by including the code in backticks: *\`code style\`* gives `code style` inline. Create a block of code by wrapping it at the start and end with four backticks: ```` def mycode(): ''' Here is my non-executable code ''' pass ```` ## Markdown cells can include Latex Expressions Mathematical expessions can be rendered inline by wrapping a LaTeX expression (no spaces) with a $ either side. For example, `$e^x=\sum_{i=0}^\infty \frac{1}{i!}x^i$` is rendered as the inline $e^x=\sum_{i=0}^\infty \frac{1}{i!}x^i$ expression. Wrapping the expression with `$$` either side forces it to be rendered on a new line in the centre of the cell: $$e^x=\sum_{i=0}^\infty \frac{1}{i!}x^i$$ ## Working Code Cells Harder As well as running code in the kernel, code cells can be used in a couple of other ways. ### As a command prompt: - as a route to the command line on the desktop of the machine the Jupyter server is running on: - start the code cell with a ! and then enter you command line command. - eg Mac/Linux: `!pwd` - eg WIndows: `!cd` ### Cell magic: IPython has a notion of cell magics, commands prefixed by a % or %% that run a particular command or wrap content of the cell in some way before executing it. - `%matplotlib inline`: enable the inline display of matplotlib generated graphics - `%whos`: display a list of variables and their values as set in the kernel - `%env`: display a list of environment variables and their current values in the host environment. ## Code cells can produce rich output too Cell outputs can include tables and images ``` import pandas as pd pd.DataFrame({'col1':[1,2],'col2':['x','y']}) %matplotlib inline import matplotlib.pyplot as plt # Create 1000 evenly-spaced values from 0 to 2 pi x = np.linspace(0, 2*np.pi, 1000) #Plot a sine wave over those values y = np.sin(x) plt.plot(x, y) #You can prevent the display of object details returned from the plot by: ## - adding a semi-colon (;) at the end of the final statement ``` ## Notebooks Can be Support Interactive Widgets that Help You Explore a Dataset If you reate a function that accepts one or more parameters, you may be able to use it as the basis of an automatically generated application. For example, suppose we have a function that will plot a sine wave over the range 0..2 pi for a specified frequency, passed into the function as a parameter: ``` #If no frequency value is specified, use the default setting: f=1 def sinplot(f=1): #Define a range of x values x = np.linspace(0, 2*np.pi, 1000) #Plot a sine wave with the specified frequency over that range y = np.sin(f*x) #Plot the chart plt.plot(x, y) sinplot(f=3) ``` ## Using ipywidgets `interact()` Pass the name of your function, and the default values of the parameters, to the *ipywidgets* `interact()` function to automatically create interactive widgets to control the parameter values. ``` from ipywidgets import interact interact(sinplot, f=5) ``` ## Specify the Range of Values Applied to an `interact()` slider Passing in a single number associated with a numerical parameter sets the default (mid-point) of a slider range. Passing in two values sets the range. The defualt value will be the default value set in the function definition. ``` interact(sinplot, f=[0,20]) ``` ## Specify the Step Size of a Slider Passing three values in as a list for a numerical parameter, and they define the minumim, maximum and step size values for the slider. ``` interact(sinplot, f=[0,20,5]) ``` ## Navigating Inside a Notebook - Deep Links and Code Line Numbers HTML code an also be included in a markdown cell. Adding an empty anchor tag allows you to create named anchor links that can act as deeplinks into particular parts of the notebook: `<a name='navigation'></a>` Create a complementary relative link to that section in the same notebook: `[Relative link to "Navigation" section](#navigation)` Create a link to a similarly anchor named section in different notebook: `[Deeplink into another notebook](http://example.com/example.ipynb#navigation)` To reference lines of code more exactly, toggle line numbering off and on within a cell using: `ESC-L` # Running Multiple Code Cells As well as running one code cell at a time, you can run multiple cells: - `Cells -> Run All Above` - `Cells -> Run All Below` - `Cells -> Run All` Note that this will run cells taking into account the current state of the underlying kernel. # Saving, Checkpointing and Reverting the Notebook The notebook wil autosave every few minutes. You can also create a checkpoint using the floppy/save icon on the toolbar or `File -> Save and Checkpoint`. You can revert the notebook to a saved checkpoint using `File -> Revert to Saved Checkpoint`. # Checking Reproducibility One of the aims of using notebooks is to produce an executable document that can be rerun to reproduce the results. To run cells from scratch (i.e. from a fresh kernel), `Kernel -> Restart and Clear Output` and then run the cells you want. To run all the cells in the notebook from scratch: `Kernel -> Restart and Run All` # Troubleshooting - Permanently Running Cells Code cells that are running (or queued for running) display an asterisk in the cell `In []` indicator. To stop execution of a running cell (and prevent queued cells from executing): - press the stop button on the toolbar - `Kernel -> Interrupt` If the notebook is still hanging, you may need to restart the kernel: `Kernel -> Restart` # Troubleshooting - Getting Help Code cells support autocomplete - so start typing and then tab to see what options are available... Access documentation for a function - add a `?` and run the cell: `pd.DataFrame?`
github_jupyter
#Step 0: Data Preparation ``` %pylab inline import math sin_wave = np.array([math.sin(x) for x in np.arange(200)]) plt.plot(sin_wave[:50]) X = [] Y = [] seq_len = 50 num_records = len(sin_wave) - seq_len for i in range(num_records - 50): X.append(sin_wave[i:i+seq_len]) Y.append(sin_wave[i+seq_len]) X = np.array(X) X = np.expand_dims(X, axis=2) Y = np.array(Y) Y = np.expand_dims(Y, axis=1) X.shape, Y.shape X_val = [] Y_val = [] for i in range(num_records - 50, num_records): X_val.append(sin_wave[i:i+seq_len]) Y_val.append(sin_wave[i+seq_len]) X_val = np.array(X_val) X_val = np.expand_dims(X_val, axis=2) Y_val = np.array(Y_val) Y_val = np.expand_dims(Y_val, axis=1) ``` #Step 1: Create the Architecture for our RNN model ``` learning_rate = 0.0001 nepoch = 25 T = 50 # length of sequence hidden_dim = 100 output_dim = 1 bptt_truncate = 5 min_clip_value = -10 max_clip_value = 10 U = np.random.uniform(0, 1, (hidden_dim, T)) W = np.random.uniform(0, 1, (hidden_dim, hidden_dim)) V = np.random.uniform(0, 1, (output_dim, hidden_dim)) def sigmoid(x): return 1 / (1 + np.exp(-x)) ``` #Step 2: Train the Model ``` for epoch in range(nepoch): # check loss on train loss = 0.0 # do a forward pass to get prediction for i in range(Y.shape[0]): x, y = X[i], Y[i] # get input, output values of each record prev_s = np.zeros((hidden_dim, 1)) # here, prev-s is the value of the previous activation of hidden layer; which is initialized as all zeroes for t in range(T): new_input = np.zeros(x.shape) # we then do a forward pass for every timestep in the sequence new_input[t] = x[t] # for this, we define a single input for that timestep mulu = np.dot(U, new_input) mulw = np.dot(W, prev_s) add = mulw + mulu s = sigmoid(add) mulv = np.dot(V, s) prev_s = s # calculate error loss_per_record = (y - mulv)**2 / 2 loss += loss_per_record loss = loss / float(y.shape[0]) # check loss on val val_loss = 0.0 for i in range(Y_val.shape[0]): x, y = X_val[i], Y_val[i] prev_s = np.zeros((hidden_dim, 1)) for t in range(T): new_input = np.zeros(x.shape) new_input[t] = x[t] mulu = np.dot(U, new_input) mulw = np.dot(W, prev_s) add = mulw + mulu s = sigmoid(add) mulv = np.dot(V, s) prev_s = s loss_per_record = (y - mulv)**2 / 2 val_loss += loss_per_record val_loss = val_loss / float(y.shape[0]) print('Epoch: ', epoch + 1, ', Loss: ', loss, ', Val Loss: ', val_loss) for i in range(Y.shape[0]): x, y = X[i], Y[i] layers = [] prev_s = np.zeros((hidden_dim, 1)) dU = np.zeros(U.shape) dV = np.zeros(V.shape) dW = np.zeros(W.shape) dU_t = np.zeros(U.shape) dV_t = np.zeros(V.shape) dW_t = np.zeros(W.shape) dU_i = np.zeros(U.shape) dW_i = np.zeros(W.shape) # forward pass for t in range(T): new_input = np.zeros(x.shape) new_input[t] = x[t] mulu = np.dot(U, new_input) mulw = np.dot(W, prev_s) add = mulw + mulu s = sigmoid(add) mulv = np.dot(V, s) layers.append({'s':s, 'prev_s':prev_s}) prev_s = s # derivative of pred dmulv = (mulv - y) # backward pass for t in range(T): dV_t = np.dot(dmulv, np.transpose(layers[t]['s'])) dsv = np.dot(np.transpose(V), dmulv) ds = dsv dadd = add * (1 - add) * ds dmulw = dadd * np.ones_like(mulw) dprev_s = np.dot(np.transpose(W), dmulw) for i in range(t-1, max(-1, t-bptt_truncate-1), -1): ds = dsv + dprev_s dadd = add * (1 - add) * ds dmulw = dadd * np.ones_like(mulw) dmulu = dadd * np.ones_like(mulu) dW_i = np.dot(W, layers[t]['prev_s']) dprev_s = np.dot(np.transpose(W), dmulw) new_input = np.zeros(x.shape) new_input[t] = x[t] dU_i = np.dot(U, new_input) dx = np.dot(np.transpose(U), dmulu) dU_t += dU_i dW_t += dW_i dV += dV_t dU += dU_t dW += dW_t if dU.max() > max_clip_value: dU[dU > max_clip_value] = max_clip_value if dV.max() > max_clip_value: dV[dV > max_clip_value] = max_clip_value if dW.max() > max_clip_value: dW[dW > max_clip_value] = max_clip_value if dU.min() < min_clip_value: dU[dU < min_clip_value] = min_clip_value if dV.min() < min_clip_value: dV[dV < min_clip_value] = min_clip_value if dW.min() < min_clip_value: dW[dW < min_clip_value] = min_clip_value # update U -= learning_rate * dU V -= learning_rate * dV W -= learning_rate * dW preds = [] for i in range(Y.shape[0]): x, y = X[i], Y[i] prev_s = np.zeros((hidden_dim, 1)) # Forward pass for t in range(T): mulu = np.dot(U, x) mulw = np.dot(W, prev_s) add = mulw + mulu s = sigmoid(add) mulv = np.dot(V, s) prev_s = s preds.append(mulv) preds = np.array(preds) plt.plot(preds[:, 0, 0], 'g') plt.plot(Y[:, 0], 'r') plt.show() ``` #Step 3: Get predictions ``` preds = [] for i in range(Y_val.shape[0]): x, y = X_val[i], Y_val[i] prev_s = np.zeros((hidden_dim, 1)) # For each time step... for t in range(T): mulu = np.dot(U, x) mulw = np.dot(W, prev_s) add = mulw + mulu s = sigmoid(add) mulv = np.dot(V, s) prev_s = s preds.append(mulv) preds = np.array(preds) plt.plot(preds[:, 0, 0], 'g') plt.plot(Y_val[:, 0], 'r') plt.show() ```
github_jupyter
## Introduction The Deltaflow language offers the ability to subdivide algorithms into their base parts, and provides the runtime facility to efficiently implement them on relevant hardware. Adaptive, NISQ algorithms such as the [Accelerated Variational Quantum Eigensolver](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.140504) or [Engineered Likelihood Functions](https://arxiv.org/abs/2006.09350) have been specifically developed with fast communication between hardware components in mind. However, even certain implementations of simpler algorithms such as [VQE](https://www.nature.com/articles/ncomms5213) can benefit from in-situ calculations between quantum circuit executions. Here we explore the first steps towards implementing an algorithm like VQE in the Deltaflow language. This notebook is intended to show how one would move from a python implementation of an algorithm to a Deltaflow implementation. In reality, certain components of the instructions below would be supplanted by real quantum hardware or FPGA nodes. We abstract these details out in order to showcase a use-case of the Deltaflow language. We will start with the definition of our problem and a simple python impementation, before extending this to be incorporated into `DeltaGraph`. We believe only minor modifications would be needed to use the example below for a much more complex problem, however integrating these to interact with real hardware requires more advanced node workings. $$\newcommand{\braket}[2]{\left\langle {#1} \middle| {#2} \right\rangle}$$ $$\newcommand{\brakett}[3]{\left\langle {#1} \middle| {#2} \middle| {#3} \right\rangle}$$ $$\newcommand{\ket}[1]{\left| {#1} \right\rangle}$$ ## Theory ### Variational Quantum Eigensolver (VQE) <div> <img src="VQE_structure_original.jpg" width="500"/> </div> Current quantum hardware suffers from low qubit coherence times and compiling error rates in deep circuits. While the hardware is continually improving, realising fault-tolerant quantum computers is not likely within the next two decades. In order to extract useful calculations from these noisy, low-depth quantum machines the variational quantum eigensolver was devised. Using the variational principle, this algorithm utilises low-depth quantum circuits to complement a classical optimisation procedure, as can be seen in the diagram above (kindly borrowed from the original reference). Specifically, if we consider a problem Hamiltonian $H = \sum_i a_iP_i$ where $a_i$ are known complex coefficients and $P_i$ are Pauli matrix strings, the goal is to find the ground state of energy of $H$. Using a physically-inspired ansatz wavefunction $\ket{\psi(\lambda)}$, parameterized by a real-valued $\lambda$ and generated by preparation circuit $R$ such that $\ket{\psi(\lambda)}\equiv R(\lambda)\ket{0}$, we can write the energy of the system as: $$\langle H\rangle = E(\lambda) = \sum_ia_i\brakett{\psi(\lambda)}{ P_i}{\psi(\lambda)}.$$ Such that we need only calculate individual expectation values $A_i = \brakett{\psi(\lambda)}{P_i}{\psi(\lambda)}$. VQE calculates these values $A_i$ through it's subroutine Quantum Expectation Estimation, briefly explained below. Alternatives such as AVQE or ELF, as mentioned in the introduction, achieve the same goal through implementing different quantum subroutines. That is to say the input ansatz circuit and desired Pauli, as well as the output measurement outcome(s), do not vary across algorithms aiming to calculate the ground state energy of some Hamiltonian. In VQE these subroutine circuits have depth $O(1)$. However, to reach a precision $p < 1$ they each have to be executed $O(p^{-2})$ times which can be restrictive. Once each $A_i$ has been calculated, the energy $E(\lambda)$ is calculated and passed to a classical optimisation routine; this updates the value of $\lambda$ and the algorithm loops. The exit criteria is when we have reached a pre-determined precision on $E$. The classical outer-loop grants a degree of freedom in how the optimisation is actually performed over the $\lambda$ parameter space, however the implementation of the quantum subroutine - known as Quantum Expectation Estimation - requires more in-depth focus. ### Quantum Expectation Estimation (QEE) The quantum subroutine of VQE utilises Hamiltonian averaging to generate information about the target quantity, the overlap $A=\brakett{\psi(\lambda)}{P}{\psi(\lambda)}$ where the subsystem index $i$ has been dropped. For a single qubit, the example we consider here, this method is quite straightforward. Given a prepared quantum state $|\psi(\lambda)\rangle$ we want to know the outcome when measuring Pauli $P$. Converting into the eigenbasis of $P$ we can write: $$\ket{\psi(\lambda)} = \left(\frac{I+P}{2}\right)\ket{\psi} + \left(\frac{I-P}{2}\right)\ket{\psi}.$$ The probability of observing an eigenvalue of $+1$ or $-1$ is then the absolute square value of the first and second terms in the eigen-decomposition above, respectively: $$p_{+1} = \frac{1 + A}{2}, \hspace{12pt} p_{-1} = \frac{1-A}{2}$$ Therefore, by repeatedly preparing this ansatz state, measuring the $P$ matrix and recording the outcomes we can make an accurate prediction of these probabilities and then simply re-arrange for the value of $A$. It is important to note that an eigenvalue of $+1$ corresponds to observing the eigenvector $\ket{0}$ in the $Z$-basis --- the computational basis for quantum computers. The quantum simulator we employ here, [ProjectQ](https://projectq.ch/), returns a bitstring of 0's and 1's corresponding to the quantum states observed. For a single qubit, if ProjectQ returns a $0$ we must increase our $p_{+1}$ defined above. ## Implementations ### Example 1: QEE in Python To gain some intuition about the behaviour of this subroutine, here we will show a python implementation and then the same code modified to accommodate the Deltaflow language. For this purpose, we will consider only a basic single-gate ansatz circuit acting on one qubit. For a single qubit wavefunction $|\psi\rangle = R_x(\theta)|0\rangle$ measuring the Pauli matrix $Z$ gives us an expectation value of $$E = \brakett{\psi}{R_x^\dagger(\theta) Z R_x(\theta)}{\psi} = \cos(\theta)$$ Note: The `p_one` variable used throughout corresponds to the probability of seeing an *eigenvector* $\ket{1}$ from (simulated) quantum hardware. From our definitions above, `p_one`$\equiv p_{-1} = \left(1-A\right)/2$. Therefore the estimated expectation value `estimate` will be calculated as A = `1 - 2*p_one`. We use `ProjectQ` as a backend for this implementation, but you can try to replace this node by any other quantum circuit simulator. *Let's also save ourseves from repeating the same code twice and decorate this function with `DeltaBlock` right away, as it will by reused in the Deltalanguage implementation in the next section.* ``` from projectq import MainEngine from projectq.ops import Rx, Z, Measure import deltalanguage as dl @dl.DeltaBlock() def circuit(theta: float) -> int: """A simple quantum circuit with 1 parametrized gate.""" engine = MainEngine() qubit = engine.allocate_qubit() Rx(theta) | qubit Z | qubit Measure | qubit engine.flush() return int(qubit) ``` We need to repeat the above circuit a number of times to achieve the required precision $p$ - typically $O(1/p^2)$ times. Let us crack on and see how the QEE implementation will look like: ``` import numpy as np def QEE_py(precision, theta): """Quantum Expectation Estimation in Python.""" print(f"Starting QEE routine with theta={theta}, precision={precision}:") p_one = 0 estimates = [] runs = 0 while True: runs += 1 # Run quantum cirquit and gather statistic measurement_outcome = circuit(theta) p_one += measurement_outcome estimate = 1 - 2*p_one/runs estimates.append(estimate) std = np.std(estimates) # Accumulate statistic of at least 100 runs if runs < 100: continue if runs%500 == 0: print(f' Run {runs:5d}, std={std:.5f}') if std <= precision: break print(f'Exiting after {runs} runs with std={std:.5f}') return np.mean(estimates), std, runs ``` Choose the parameters, by increasing/decreasing `p` we will decrease/increase the simulation time, e.g. $p = 10^{-3}$ will need $O(1/p^2) = 10^6$ circuit executions before exiting. ``` # These constants will be used throughout this example THETA = np.pi/3 PRECISION = 0.1 mean, std, runs = QEE_py(PRECISION, THETA) true_value = np.cos(THETA) print(f'\nTrue value = {true_value:.5f} for theta = {THETA:.5f}') print(f'Final estimation value: {mean:.5f}') print(f'Error: {abs(true_value - mean):.5f}') print(f'Standard Deviation of measurements: {std:.5f}') ``` ### Example 2: QEE in Deltalanguage Given the Python code above, how would we go about implementing this in Deltalanguage? First, take a look at the necessary components that will be used below: - `DeltaBlock`, `Interactive` allow us to create Deltaflow nodes; - `DeltaGraph` is necessary for us to wire up the nodes; - `placeholder_node_factory` is needed to resolve circular dependency in the graph; - `StateSaver` allows us to store and save the result for further comparison; - `DeltaPySimulator` and `DeltaRuntimeExit` allow us to simulate and exit graphs respectively. The original QEE function used in the example above can be used in this implementation as well, however it will require rethinking the data flow in the graph and will be just unnecessarily complicated. Instead we give a new definition via the `Interactive` Deltaflow wrapper, which is perfectly suited for such tasks. This new node will be sending data to two other nodes, thus initially we define the return `type` and `class` pair: - the first output will contain the final result bundled together in a tuple: - mean, `float` - standard deviation, `float` - the total number of runs, `int` - the second output is used for the circuit parametrization, `float` Now let's define the function itself, note that each time data is received we should use the node.receive method and when it's sent out, node.send. Everything else is the same as in the original QEE! ``` @dl.Interactive(inputs=[("precision", float), ("theta", float), ("measurement", int)], outputs=[('result', dl.Tuple([float, float, int])), ('theta', float)]) def QEE_df(node): """Quantum Expectation Estimation in Deltalanguage.""" # NEW: receive QEE parameters in the infinite loop while True: precision = node.receive("precision") theta = node.receive("theta") print(f"Starting QEE: theta/pi={theta/np.pi:.5f}, precision={precision}:") p_one = 0 estimates = [] runs = 0 while True: runs += 1 # NEW: send angle to the quantum circuit and wait for results node.send(result=None, theta=theta) measurement = node.receive("measurement") p_one += measurement estimate = 1 - 2*p_one/runs estimates.append(estimate) std = np.std(estimates) # Accumulate statistic of at least 100 runs if runs < 100: continue if runs%500 == 0: print(f' Run {runs:5d}, std={std:.5f}') if std <= precision: break print(f'Exiting after {runs} runs with std={std:.5f}','\n') # NEW: send out the final result for this QEE parameters node.send(result=(np.mean(estimates), std, runs)) ``` We can now build `DeltaGraph`, making use of `placeholder_factory_node` to ensure proper connectivity. The `StateSaver` instance will record the result in a list that can be accessed once the Deltaflow program has completed; it also ensures that the graph exits once a result has been found. ``` s = dl.lib.StateSaver(object, verbose=False) with dl.DeltaGraph() as graph: qee_ph = dl.placeholder_node_factory() circuit_out = circuit(theta=qee_ph.theta) s.save_and_exit(qee_ph.result) qee_ph.specify_by_node(QEE_df.call(precision=PRECISION, theta=THETA, measurement=circuit_out)) print(graph) ``` It might be more intuitive to look at the graph's visualisation: ``` graph.draw(seed=1) ``` Now that the graph has been compiled, we're ready to evaluate it. We use a simple python simulator that does not provide high performance but ensures the correctness on the execution: ``` rt = dl.DeltaPySimulator(graph) rt.run() # Extract the latest saved results: mean, std, runs = s.saved[-1] assert std < PRECISION ``` ### Example 3: VQE in Deltalanguage Now we have a Deltaflow graph that implements QEE (a subroutine of VQE), we can complete the entire algorithm. All that remains is to wire up an optimisation outer-loop to pass various ansatz parameters to the subroutine. To illustrate how this could be done, we will forgo the optimisation step and instead simply pass individual ansatz parameters, finding the expectation value at each. This new node takes the results of QEE as an input, aggregate the mean and standard deviation values, and save them for future plotting. Besides that we provide a set of parameters to define the range of `theta` as global constants. ``` THETA_MIN = 0 THETA_MAX = 2*np.pi THETA_NUM = 15 @dl.Interactive(inputs=[("result", dl.Tuple([float, float, int]))], outputs=[('results', dl.Tuple([dl.Array(float, dl.Size(THETA_NUM)), dl.Array(float, dl.Size(THETA_NUM))])), ('theta', float)]) def ansatz(node): maen_list, std_list, runs_list = [], [], [] for theta in np.linspace(THETA_MIN, THETA_MAX, THETA_NUM): node.send(results=None, theta=theta) mean, std, runs = node.receive("result") maen_list.append(mean) std_list.append(std) runs_list.append(runs) node.send(results=(maen_list, std_list)) ``` Let's connect the graph: ``` with dl.DeltaGraph() as graph: qee_ph = dl.placeholder_node_factory() circuit_out = circuit(theta=qee_ph.theta) anzatz_out = ansatz.call(result=qee_ph.result) qee_ph.specify_by_node(QEE_df.call(precision=PRECISION, theta=anzatz_out.theta, measurement=circuit_out)) s.save_and_exit(anzatz_out.results) print(graph) ``` The flow of data becomes ever slightly more complicated: ``` graph.draw(seed=9) ``` Now we can simulate the graph with provided parameters: ``` rt = dl.DeltaPySimulator(graph) rt.run() ``` The slow performance around $\theta = \pi/2$ and $3\pi/2$ is expected as this corresponds to an expectation value of $\langle A\rangle=0$. The algorithm struggles to converge on these points as the probability is exactly $1/2$ for both outcomes from the quantum simulator. Conversely, when $\theta = 0$ ($\pi$) the outcome will always be 1 (0) result in zero variance. Let's assert some basic tests: ``` mean_list, std_list = s.saved[-1] assert len(mean_list) == THETA_NUM and len(std_list) == THETA_NUM assert all(map(lambda x: x < PRECISION, std_list)) ``` And compare the resul t with the true values: ``` import matplotlib.pyplot as plt theta_list = np.linspace(THETA_MIN, THETA_MAX, THETA_NUM) plt.plot(theta_list/np.pi, np.cos(theta_list), linewidth = 2, label='Exact') plt.errorbar(theta_list/np.pi, mean_list, std_list, linestyle=':', label='VQE') plt.ylabel('Expectation') plt.xlabel(r'$\theta/\pi$') plt.grid(True) plt.legend() plt.show() ``` ## Conclusions The above examples showed how a basic VQE code can be adopted to the shape of a distributed graph of asyncronous processes, which makes it portable to real experimental setups via Deltaruntime or advanced simulation platforms via Deltasimulator. The crucial point to get from this is that by going from a procedural language, such as Python, to a dataflow language, such as Deltalanguage, requires redesign of the algorithm structure. We hope that by providing this implementation we have helped with these initial steps. Implementing a more complicated ansatz with more parameters, an optimisation routine, and a more complex Hamiltonian with multiple Pauli terms are all minor modifications. A more complex algorithm, such as Accelerated VQE, would require another function to perform intermediate calculations in between circuit executions, adding another node to our graph.
github_jupyter
Windy Grid World --- <img style="float:left" src="board.png" alt="drawing" width="600"/> --- Column specifies wind strength and it always blows the agent up. ``` import numpy as np class State: def __init__(self, state=(3, 0), rows=7, cols=10): self.END_STATE = (3, 7) self.WIND = [0, 0, 0, 1, 1, 1, 2, 2, 1, 0] self.ROWS = 7 self.COLS = 10 self.state = state # starting point self.isEnd = True if self.state == self.END_STATE else False def giveReward(self): if self.state == self.END_STATE: return 1 else: return 0 def nxtPosition(self, action): """ action: up, down, left, right ------------------ 0 | 1 | 2| 3| ... 1 | 2 | ...| return next position on board based on wind strength of that column (according to the book, the number of steps shifted upward is based on the current state) """ currentWindy = self.WIND[self.state[1]] if action == "up": nxtState = (self.state[0]-1-currentWindy, self.state[1]) elif action == "down": nxtState = (self.state[0]+1-currentWindy, self.state[1]) elif action == "left": nxtState = (self.state[0]-currentWindy, self.state[1]-1) else: nxtState = (self.state[0]-currentWindy, self.state[1]+1) # if next state is legal positionRow, positionCol = 0, 0 if (nxtState[0] >= 0) and (nxtState[0] <= (self.ROWS - 1)): positionRow = nxtState[0] else: positonRow = self.state[0] if (nxtState[1] >= 0) and (nxtState[1] <= (self.COLS - 1)): positionCol = nxtState[1] else: positionCol = self.state[1] # if bash into walls return (positionRow, positionCol) def showBoard(self): self.board = np.zeros([self.ROWS, self.COLS]) self.board[self.state] = 1 self.board[self.END_STATE] = -1 for i in range(self.ROWS): print('-----------------------------------------') out = '| ' for j in range(self.COLS): if self.board[i, j] == 1: token = 'S' if self.board[i, j] == -1: token = 'G' if self.board[i, j] == 0: token = '0' out += token + ' | ' print(out) print('-----------------------------------------') s = State() s.showBoard() s.state = s.nxtPosition("right") s.showBoard() class Agent: def __init__(self, lr=0.2, exp_rate=0.3): self.END_STATE = (3, 7) self.START_STATE = (3, 0) self.ROWS = 7 self.COLS = 10 self.states = [] # record position and action taken at the position self.actions = ["up", "down", "left", "right"] self.State = State() self.lr = lr self.exp_rate = exp_rate # initial Q values self.Q_values = {} for i in range(self.ROWS): for j in range(self.COLS): self.Q_values[(i, j)] = {} for a in self.actions: self.Q_values[(i, j)][a] = 0 # Q value is a dict of dict def chooseAction(self): # choose action with most expected value mx_nxt_reward = 0 action = "" if np.random.uniform(0, 1) <= self.exp_rate: action = np.random.choice(self.actions) else: # greedy action for a in self.actions: current_position = self.State.state nxt_reward = self.Q_values[current_position][a] if nxt_reward >= mx_nxt_reward: action = a mx_nxt_reward = nxt_reward # print("current pos: {}, greedy aciton: {}".format(self.State.state, action)) return action def takeAction(self, action): position = self.State.nxtPosition(action) # update State return State(state=position) def reset(self): self.states = [] self.State = State() def play(self, rounds=10): i = 0 while i < rounds: # to the end of game back propagate reward if self.State.isEnd: if i % 5 == 0: print("round", i) # back propagate reward = self.State.giveReward() for a in self.actions: self.Q_values[self.State.state][a] = reward print("Game End Reward", reward) for s in reversed(self.states): current_q_value = self.Q_values[s[0]][s[1]] reward = current_q_value + self.lr*(reward - current_q_value) self.Q_values[s[0]][s[1]] = round(reward, 3) self.reset() i += 1 else: action = self.chooseAction() # append trace self.states.append([(self.State.state), action]) # print("current position {} action {}".format(self.State.state, action)) # by taking the action, it reaches the next state self.State = self.takeAction(action) # print("nxt state", self.State.state) # print("---------------------") ag = Agent(exp_rate=0.3) ag.play(50) ``` #### Find the best route <img style="float:left" src="board.png" alt="drawing" width="600"/> ``` ag_op = Agent(exp_rate=0) ag_op.Q_values = ag.Q_values while not ag_op.State.isEnd: action = ag_op.chooseAction() print("current state {}, action {}".format(ag_op.State.state, action)) ag_op.State = ag_op.takeAction(action) ```
github_jupyter
# M&P2 EM Python Activity 2022: Applying electrostatics to build a computational model In this activity, you'll build up a simple model and then apply it to two chemistry-relevant situations. The structure of the activity is: 1. Write functions to calculate E field and potential due to a point charge, and to evaluate the E field, potential and potential energy due to a system of charges. - Test that these functions work by comparing to two systems for which you know the analytic result: the finite line charge and the symmetric ring of charge (both considered in problem sheets). 2. Apply these functions to consider simplified scenarios relating to electrochemistry: - the hydration shell around a sodium ion dissolved in water - behaviour of water molecules close to an electrode The functions you define in part 1 will be useful in part 2, and you'll need the functions from part 1 in part 2. To help you concentrate on the physics elements of the coding, a large proportion of the code you need will be provided for you. Note that the Jupyter notebook already released (look in Content and Resources on Blackboard) shows you how to make fieldline plots, 3D vector-field plots and contour plots in 2D and 2D, and you should make free use of the code there, supplemented by online searches for any further modifications you would like to make. ### Coding tips Some of you will have more experience with coding than others. Here are some tips, particularly for those of you with less experience: When writing code, an important element of good practice is to check at different stages that the code is generating the results you expect, so you should always try to compare code output to known cases. This is what you will be doing with the ring of charge and also by plotting the water molecule orientations in part 2 to help visualise what the code is doing. When you are generating arrays, when first running the code, check each one contains something like what you are expecting by printing the values of some elements. Your code might run without crashing and reporting an error (no "compile-time" errors), but this does not necessarily mean that it is free from ("run-time") errors (where it does something different from what you intended), so it's important to do other checks so you can be confident it is working as intended. A second element of sensible coding practice is to make use of online searches to find out how to use code syntax etc. Of course you can also refer back to the Python course and coding you have already done. With online search results, be selective about which results to pay attention to. Although you can find dictionary/manual-style definitions in the official documentation, to start off with it is usually more helpful to see an example piece of code where the syntax is used. Well put-together tutorial-style sources are usually good for this; beware random un-thought-out posts on say stackexchange - the solutions these present are not always the easiest approach. If you're not very familiar with Jupyter notebooks, here are a few short tips: - There are two types of cell, 'code' and 'markdown' (this text is in a markdown cell). To change between the two, select the cell by clicking to its left (bar turns blue) and then choose Y (for code) or (M for markdown). - Use shift-Enter to finish editing a markdown cell or to run a code cell. Double-click a markdown cell to start editing it. - Look at the Help menu to find more keyboard shortcuts and other help. - The kernel 'remembers' between different code cells, so, for example, once you have run a code cell which defines a function, it remembers the definition for any other cell you execute and similarly, once you have imported a package, it remembers this. If you re-open a notebook or restart the kernel, it's a new kernel so you'll need to run the code again. - To avoid confusion, write your code so that it will work if run in sequence from the top - e.g. by using the Cell menu, i.e., avoid writing your code so it only works if you run particular cells in a particular order. (You can quickly end up getting confused if you do this!) If you find yourself copying and pasting code from one part of the program to re-use it in another, consider defining a function. This can often make your life much easier and simpler. Debugging code to remove errors is perhaps the key skill of coding in practice. It will be much easier to track down errors in your code if you do the following: - Include comments in your code (so you remember what it does later) - be specific and note the units of quantities used - Use appropriate/descriptive variable names which correspond to what the variable is - Print the values of variables / check the dimensions of arrays (e.g. with len or np.shape) at different points in the code to check their values are what they should be / plot graphs of arrays (label axes and include titles). This can be particularly useful to trace a problem back to its source. - When modifying code during debugging, it is often a good idea to keep the previous version of the line you are changing rather than deleting it: duplicate the line you are changing on the line above and comment this out, just in case you decide later you want to change back to the earlier version ## Part 1: Building a simple molecular model to calculate E fields and potentials One useful feature of numerical methods is that they enable you to find solutions where analytic solutions either do not exist or are complicated, often with very little additional coding. This is what we will try to take advantage of through this code. ### 1A. Modelling a set of charges: Potential, electric field and potential energy Consider a situation where there is a charge q1 at position r1 (that is, $\vec{r_1}$): Write a Python function ```potl``` taking r0, r1 and q1 as inputs. The function should return the potential at position r0 (that is, $\vec{r_0}$). The first step is to import numpy, since you will need this in several places, and also a plotting package: ``` #%matplotlib notebook #you might need to uncomment this in order to rotate the 3d plots we will use later #%matplotlib widget #%matplotlib inline import numpy as np #import the numpy package since this will be useful import matplotlib.pyplot as plt #plotting package from ipywidgets import interact N=5 e=1.602e-19 #in Coulombs eps0=8.85e-12 K=1/(4*np.pi*eps0) def dist(r0,r1): if len(r0)==3: return np.sqrt(abs((r0[0]-r1[0])**2+(r0[1]-r1[1])**2+(r0[2]-r1[2])**2)) elif len(r0)==2: return np.sqrt(abs((r0[0]-r1[0])**2+(r0[1]-r1[1])**2)) else: print('Dimension error') def unit_v(r0,r1): return (-r1+r0) / np.linalg.norm((r1-r0)) #Here is a reminder of the syntax for declaring a function in Python: def potl(r0,r1,q1): #print(dist(r0,r1)) V=K * (q1*e/dist(r0,r1)) #edit this equation! return V #test here and compare with the result you get if you use pen and paper and a calculator ra=np.array([0,0,0]) rb=np.array([1,1,1]) qb=1 rc=np.array([2,1,1]) qc=1.5 #decide what units you are going to use for r and q and make sure you are consistent with these print('For charges of 1 [qC] at (0,0,0) and (1,1,1) [meter] and 1.5 [meter] at (2,1,1) [meter] \ \nPotential I find by calculating manually using a calculator is 8.32e-10 [Volt] and 8.82e-10 [Volt]') #As illustrated above, testing and printing values of variables at particular points in your code to compare # to what you expect to find is a good way to debug # #Including relevant precise description will often help you a lot with debugging # #Including comments in your code & naming variables with relevant names (rather than a,b,c etc) is also very helpful when debugging #Note the syntax below for how to format text strings in Python using % # and also how to use the \ character to break input over additional lines potl1=potl(ra,rb,qb) print('potl() function result: Potential is %.2e [Volt] at (%.1e,%.1e,%.1e) A for charge %d [qC]\ at (%.1e,%.1e,%.1e) [meter]' % (potl1,ra[0],ra[1],ra[2],rb[0],rb[1],rb[2],qb)) potl2=potl(ra,rc,qc) print('potl() function result: Potential is %.2e [Volt] at (%.1e,%.1e,%.1e) A for charge %d [qC]\ at (%.1e,%.1e,%.1e) [meter]' % (potl2,ra[0],ra[1],ra[2],rc[0],rc[1],rc[2],qc)) ``` Write another Python function ```Efield``` taking r0, r1 and q1 as inputs, which calculates the electric field (vector) at position r0. ``` def Efield(r0,r1,q1): #r0 and r1 should be numpy arrays with 3 elements E=K* (q1*e/dist(r0,r1)**2)*unit_v(r0,r1) #add calculation of E here return E #test out that your function works Ea=Efield(ra,rb,qb) print('E field is (%.2e,%.2e,%.2e) [N/C] at (%.1e,%.1e,%.1e) [meter] for charge %d [qC] \ at (%.1e,%.1e,%.1e) [meter]' % (Ea[0],Ea[1],Ea[2],ra[0],ra[1],ra[2],rb[0],rb[1],rb[2],qb)) ``` For lists of charges $q_i$ at positions $r_i$, use the functions you have already written to create two new functions, taking r0, ri and qi as inputs: Write a function to calculate the total potential at r0 ``` def potl_sum(r0,ri,qi): V_sum = 0 for i in range(len(ri)): V = potl(r0,ri[i],qi[i]) V_sum += V return V_sum #return the total potential ``` Write a function to calculate the net electric field at r0 ``` def Efield_sum(r0,ri,qi): E_sum = 0 for i in range(len(ri)): E = Efield(r0,ri[i],qi[i]) E_sum += E return E_sum #return the total potential ``` Now write a function which calculates the energy of a whole system of charges $q_i$ with positions $\vec{r_i}$. One way to work this out would be to think about adding all the charges to they system by bringing each in from infinity, one at a time. The code below does this for you! Remember that when adding a new charge you need to account for the effect of *each* of the charges which are already present. ``` #adapt the function below or write your own! def potl_energy_sum(ri,qi): U=0 #initialise the total potential energy chargeadded=[] #list of indices of charges already added for loop in range(len(qi)): #loop over each charge in turn #....incomplete code below which you can use as a starting point for j in chargeadded: #loop over charges already added (bringing charge loop towards charge j) Uij=e*qi[j] * potl(ri[j],ri[loop],qi[loop]) #print('Adding PE %.2e [units] of bringing charge %d towards charge %d' % (Uij,loop,j)) U=U+Uij chargeadded.append(loop) #add the index of the added charge to the list #print('Total PE is %.2e [units]' % U) return U #return the total potential energy ``` **In your report**, include a *brief* description of the physics (the equations and any other important details) you have used. ##### Code check: Charges round a ring In problem sheet 2 question 4, you calculated the total energy for charges uniformly spaced round a ring. The code below is provided so that you can check that the function you have written above gives the same answer as you know from that. If it does, your code is probably correct. If it doesn't, your code is not fully correct, so go back and look for the error... ``` N=5 e=1.602e-19 #in Coulombs eps0=8.85e-12 q=1 #charge round outside in qC units q0=-1 #charge at centre in qC units a=1e-10 #radius of ring in m U5ps=1.88*(q*e*-q0*e)/(4*np.pi*eps0*a) #answer for N=5 charges print('Problem sheet 2 Q4, N=5, q=e, q0=-e, a=0.1nm: U=1.88q.q0/(4pi.eps0r)=%.2e J' % (U5ps)) #generate positions of charges round the ring ind=np.arange(N) #charge indices: 0,1,2,...,N-1 x=np.cos(2*np.pi*ind/N) #charge x-coordinates y=np.sin(2*np.pi*ind/N) #charge y-coordinates z=np.zeros(N) #all charges in plane z=0 qi=q*np.ones(N) #all charges have same value, q x=np.concatenate(([0.0],x)) #add a first element at the origin, which is the coordinate of the central charge y=np.concatenate(([0.0],y)) z=np.concatenate(([0.0],z)) ri=np.transpose([x,y,z])*1e-10 qi=np.concatenate(([q0],qi)) #central charge has charge q0 #r0=[0,0,0] #print(ri) #print(qi) U=potl_energy_sum(ri,qi) print('Python code calculated value for N=%d, a=%.2e [m]: U=%.2e [J]' % (N,a,U)) ``` ### Electric field and potential around a finite line charge Now consider a line charge of *finite length* between $(-L,0,0)$ and $(+L,0,0)$. Write some code to find numerically the potential and electric field and any coordinate $(x,y,z)$. Hint: you can build this up by breaking down the line charge into many equal discrete charges equally spaced along the line. To check your code is working, compare your answer with $E(z)$ as given by the analytic expression for electric field along the line $(0,0,z)$ for an *infinite* line charge with the same charge per unit length $\lambda$. You should have obtained this on the problem sheet. Include the following things in your report: - Plot your computed $E(z)$ along the line $(0,0,z)$ for the *finite* line charge. On the same plot, also plot $E(z)$ as given by the analytic expression for electric field along the line $(0,0,z)$ for an *infinite* line charge with the same charge per unit length $\lambda$. Discuss whether you would expect them to agree. - Plot your computed $V(z)$ along the line $(0,0,z)$ for the *finite* line charge. On the same plot, also plot $V(z)$ as given by the analytic expression for electric field along the line $(0,0,z)$ for an *infinite* line charge with the same charge per unit length $\lambda$. Discuss whether you would expect them to agree. - Use your code to numerically find the E field in the region $(x,0,z)$, for $x>0$ and $z>0$ and plot out the electric field in this plane. - Use your code to numerically find the potential in the same region and make a contour plot of this. - Make some comments about the form of these plots (how the form relates to the physics etc) ``` L = 4 # in [m] nq = 200 #total number of charge from -L to +L lambda_q = 30 # parameter, charge per unit length, in [qC/m] #Thus, qi (charge per discrete charge) can be calculated, in [qC] qi = np.full(nq,lambda_q*2*L/nq) #charge per discrete charge, in [qC] #Generate the ri, for all the charges ri = [] for i in np.linspace(-L,+L,nq): ri.append([i,0,0]) ri = np.array(ri) Efield_z = [] num_samp = 50 # number of sampling along z axis z = 2 #sampling length along z axis for i in np.linspace(-z,+z,num_samp): Efield_z.append([0,0,i]) Efield_z_only = np.linspace(-z,+z,num_samp) #calculate the sampling point under the effect of the finite line and infinite one E_finiteLine = [] E_infiniteLine = [] for i in Efield_z: E_finiteLine.append(Efield_sum(i,ri,qi)[2]) #minus sign means the vector should point from the charge along the line to the test point sampling along z axis #print(-Efield_sum(i,ri,qi)) E_infiniteLine.append(2*K*lambda_q*e/i[2]) #seperate the positive and negtive parts E_finL_posi = [] E_finL_neg = [] E_infinL_posi = [] E_infinL_neg = [] z_finL_posi = [] z_finL_neg = [] z_infinL_posi = [] z_infinL_neg = [] for i in range(len(Efield_z_only)): if E_finiteLine[i]>0: E_finL_posi.append(E_finiteLine[i]) z_finL_posi.append(Efield_z_only[i]) else: E_finL_neg.append(E_finiteLine[i]) z_finL_neg.append(Efield_z_only[i]) if E_infiniteLine[i]>0: E_infinL_posi.append(E_infiniteLine[i]) z_infinL_posi.append(Efield_z_only[i]) else: E_infinL_neg.append(E_infiniteLine[i]) z_infinL_neg.append(Efield_z_only[i]) #Plot the electric field strength against z axis plt.figure(figsize=(8,5.5)) plt.title('Electric field strength along z axis',fontsize=14) plt.plot(z_finL_posi, E_finL_posi, marker='', label='Efield in finite line',color='r') plt.plot(z_finL_neg, E_finL_neg, marker='',color='r') plt.plot(z_infinL_posi, E_infinL_posi, marker='', label='Efield in infinite line',color='b') plt.plot(z_infinL_neg, E_infinL_neg, marker='',color='b') plt.xlabel('z') plt.ylabel('Efield (N/C)') plt.axvline(0,linewidth=0.7,color='black') plt.axhline(0,linewidth=0.7,color='black') plt.legend() plt.show() #calculate the sampling point under the effect of the finite line and infinite one V_finiteLine = [] V_infiniteLine = [] for i in Efield_z: V_finiteLine.append(potl_sum(i,ri,qi)) V_infiniteLine.append(2*K*lambda_q*e*np.log(1/abs(i[2]))) #print(np.log(abs(i[2]))) plt.figure(figsize=(8,5.5)) plt.title('Electric field potential along z axis',fontsize=14) plt.plot(Efield_z_only, V_finiteLine, marker='', label='potl in finite line') plt.plot(Efield_z_only, V_infiniteLine, marker='', label='potl in infinite line') plt.xlabel('z') plt.ylabel('Potential (V)') plt.legend() plt.show() #Set all the test points and vectors for each points ri_2D = [] for i in np.linspace(-L,+L,nq): ri_2D.append([i,0]) ri_2D = np.array(ri_2D) #generate the mesh points for Efield x_list = [] z_list = [] num_samp_x = 80 # number of sampling along x,z axis len_x =5 #sampling length along x axis num_samp_z = 80 # number of sampling along z axis len_z = 5 #sampling length along z axis x_samp=np.linspace(-len_x,+len_x,num_samp_x) z_samp=np.linspace(-len_z,+len_z,num_samp_z) xx_samp, zz_samp = np.meshgrid(x_samp, z_samp) coord_samp =np.stack((xx_samp, zz_samp),axis=-1) #polt contour for Efield z_Efield = coord_samp z_Efield_magni = [] for i in range(len(coord_samp)): aList=[] for j in range(len(coord_samp[i])): aList.append(np.linalg.norm(-Efield_sum(coord_samp[i][j],ri_2D,qi))) z_Efield_magni.append(aList) #print(z_Efield_magni) plt.contourf(xx_samp, zz_samp, z_Efield_magni) plt.axis('scaled') plt.colorbar() plt.show() #polt potl for Efield z_potl_magni = [] for i in range(len(coord_samp)): aList=[] for j in range(len(coord_samp[i])): aList.append(potl_sum(coord_samp[i][j],ri,qi)) z_potl_magni.append(aList) #print(z_Efield_magni) plt.contourf(xx_samp, zz_samp, z_potl_magni) plt.axis('scaled') plt.colorbar() plt.show() ``` ## Part 2: Building a model for water molecules Now you'll apply the functions you've written to deal specifically with water molecules, ready to use these in the final part to consider some chemistry-relevant applications. In chemistry, often you are dealing with molecules rather than isolated charges. Since you know the relative positions of the atoms in the molecules, rather than describing a configuration of molecules in space by specifying the position of every single atom, it's sufficient to specify the location of one atom in each molecule and also the *orientation* of that molecule. Since it is an orientation in 3D, two angles are required for this (representing angles of rotation away from a starting or base orientation), and a convenient way is to make use of spherical polar coordinates. You have seen these before in chemistry (e.g., check pages 9 and 10 of the Introduction to Spectroscopy lecture-6 slides) or look (for example) [here](https://www.quora.com/How-can-I-change-spherical-coordinates-of-a-point-into-Cartesian-coordinates) for a page online which gives a brief summary which should be all the information you need. Below is some code you can use to visualise the positions and orientations of one or several molecules. It could be used for any molecule. The code is provided for a linear molecule, CO$_2$. You can run it without making any changes, to see how it works. Further details of the functions are provided later. ``` #need to import numpy and pyplot packages before running this cell #coordinates (r,phi,theta) in spherical polars are, in Cartesians: #(x,y,z)=(r*sin(phi)cos(theta),r*cos(phi)*sin(theta),r*cos(theta)) #using physics conventions for theta and phi (beware that mathematicians sometimes define these the other way round) #theta is the angle measured rotating away from the +z axis #phi is the angle measured rotating around the z axis, in the xy plane, measured anticlockwise from the +x axis. Natomslist=[3,3,1] #list numbers of atoms in each type of molecule#. type 0 is CO2, type 1 is water totalatomtypes=sum(Natomslist) atomindices=[[0,1,2],[3,4,5],[6,]] #give each distinct atom in each molecule a different index. The three atoms in type 0 are numbered 0,1,2 etc #this will be useful so we can plot the different atoms with different colours cols='krrryyb' #define the colour of each atom type #Python allows some shorthand syntax which you can use to save space if you are setting lots of variables to a value: # #a,b,c=1.0,2.0=3.0 # #is the same as #a=1.0 #b=2.0 #c=3.0 # #this is used in the next function def atom_base_positions(type): #return the coordinates of each atom of the molecule of this type #in spherical polar coordinates, relative to an 'anchor' at the origin if type==0: #definitions for CO2 d=160e-3 #C-O length in nm p=0.1*3.3356e-30 #C-O dipole moment magnitude in SI units Cm (using value for CO; C-O bond dipole in CO2 hard to measure as the two dipoles cancel out and the molecule has no net dipole) qeff=(p/(d*1e-9))/1.602e-19 #effective charge in units of e Na=3 r,phi,theta,q=np.empty(Na),np.empty(Na),np.empty(Na),np.empty(Na) r[0],phi[0],theta[0],q[0]=0.0,0.0 ,0.0 ,+2*qeff #place the C atom at the origin r[1],phi[1],theta[1],q[1]=d ,0.0 ,0.0 ,-qeff #O atom 1, a distance d from the origin at angle theta=0, phi=0 (so along the +z axis) r[2],phi[2],theta[2],q[2]=d ,0.0 ,np.pi ,-qeff #O atom 2, a distance d from the origin at angle theta=pi, phi=0 (so along the -z axis) elif type==1: #definitions for water (incomplete) d=0.09578 #O-H bond length in nm p=1.85*3.3356e-30 #O-H dipole moment magnitude in SI units Cm qeff=(p/(d*1e-9))/1.602e-19 #effective charge in units of e Na=3 r,phi,theta,q=np.empty(Na),np.empty(Na),np.empty(Na),np.empty(Na) r[0],phi[0],theta[0],q[0]=0.0,0.0 ,0.0 ,-2*qeff #place the O atom at the origin r[1],phi[1],theta[1],q[1]=d ,0.0 ,151/720*np.pi ,+qeff #O atom 1, a distance d from the origin at angle theta=0, phi=0 (so along the +z axis) r[2],phi[2],theta[2],q[2]=d ,0.0 ,((1-151/720)*np.pi) ,+qeff #O atom 2, a distance d from the origin at angle theta=pi, phi=0 (so along the -z axis) #your code here elif type==2: #definitions for Na+ Na=1 r,phi,theta,q=np.empty(Na),np.empty(Na),np.empty(Na),np.empty(Na) r[0],phi[0],theta[0],q[0]=0.0,0.0 ,0.0 ,+1 #place the Na atom at the origin else: r,phi,theta,q=np.array([]),np.array([]),np.array([]),np.array([]) #if type is not set then return empty lists return r,phi,theta,q ``` Further details of what the function above does are given in the next paragraph. You can skip these until you have tried out using the function. The atom_base_positions function returns arrays of position vectors and charges describing the positions of every atom in the molecule and its charge, relative to an anchor position (such as the position of one of the atoms in the molecules). The input is the molecule type. The code provided for a CO$_2$ molecule (type 0) is included. Later, you will need to complete the code so that it returns the positions of the atoms in a water molecule for type 1. The atom_base_positions function above is used in the functions **provided for you** by the code below. You should not need to modify the code below. Feel free not to even read the code below but just use it. It defines: mol_plot(moltypes,xa,ya,za,phia,thetaa). This makes a 3D plot of the positions of molecules with the type in the array moltypes (0 for CO2 or 1 for water etc) and with anchors at locations stored in xa, ya and za and orientiations store in phia and thetaa. It returns handles to the plot so that you can modify things like the axis labels. There is an example of how to use it in the cell below. allatomposns(moltypes,xa,ya,za,phir,thetar) has the same input arguments. It returns the coordinates of all the atoms in the molecules and their charges. You should be able to use these outputs with the functions you already have in order to find the electric field and potential. There is an example of how to use it in the cell below. The code below also defines a function rotmol_atomposns(moltype,rmtranslation,phimrot,thetamrot). This is called by each of the other two functions. You probably won't need to call it directly yourself. It returns the positions of atoms in a particular molecule according to the position of the anchor and orientation of the molecule. **PROVIDED CODE FOR YOU TO USE - NO NEED TO MODIFY** (it is not important to understand the code) ``` def rotmol_atomposns(moltype,rmtranslation,phimrot,thetamrot): #get base coordinates of atoms in molecule (relative to anchor, prior to rotation around anchor point) rb,phib,thetab,qb=atom_base_positions(moltype) #type 1 for water outr=[] #list of all vector positions of atoms in molecule outq=[] #list of atom charges for batom in range(len(rb)): #cycle through atoms in molecule and generate rotated coordinates rcarts=rb[batom]*np.array([np.sin(thetab[batom]-thetamrot)*np.cos(phib[batom]+phimrot), np.sin(thetab[batom]-thetamrot)*np.sin(phib[batom]+phimrot), np.cos(thetab[batom]-thetamrot)]) rvector=rmtranslation+rcarts outr.append(rvector) outq.append(qb[batom]) return outr,qb def mol_plot(moltypes,xa,ya,za,phia,thetaa): #plot out molecule positions and create associated plots. returns total potential #create the figure f2=plt.figure() ax2=plt.subplot(1,1,1,projection='3d') # ploth=[] #create list to store plot handles #initialise one x array, one y array and one z array for each type of atom xplot=[] yplot=[] zplot=[] for loop in range(totalatomtypes): xplot.append([]) yplot.append([]) zplot.append([]) #the Python syntax in the next line using zip() allows you to cycle through several arrays (of the same size) at once for x,y,z,moltype,phi,theta in zip(xa,ya,za,moltypes,phia,thetaa): #loop through molecules rpositions,qvalues=rotmol_atomposns(moltype,[x,y,z],phi,theta) #generate arrays of all atom positions #the Python syntax in the next line using enumerate() gives you a counter starting at zero as well as cycling through the elements in an array like a normal Python loop does for counter, atomindex in enumerate(atomindices[moltype]): #loop through the atoms in this molecule rthisatom=rpositions[counter] #add the atom coordinates to the relevant dataseries for the plot xplot[atomindex].append(rthisatom[0]) yplot[atomindex].append(rthisatom[1]) zplot[atomindex].append(rthisatom[2]) #now draw all the atom dataseries onto the plot for atomindex in range(totalatomtypes): #loop over all the types of atom #ploth.append(ax2.scatter3D(xplot[atomindex],yplot[atomindex],zplot[atomindex],c=cols[atomindex])) #plot atom and store plot handle #print(atomindex) #print(xplot[atomindex]) ax2.scatter3D(xplot[atomindex],yplot[atomindex],zplot[atomindex],c=cols[atomindex]) #plot atom and store plot handle #add lines to represent the bonds between atoms in the molecule #plot these as just one data series #the code below interleaves NaN values to ensure no line between different molecules padar=np.empty(len(xplot[0])) padar[:]=np.NaN #interleave values into a 1D array listx = [xplot[0], xplot[1], padar, xplot[0], xplot[2], padar] xi=[val for tup in zip(*listx) for val in tup] #look at the results of this line if you want to work out what it is listy = [yplot[0], yplot[1], padar, yplot[0], yplot[2], padar] yi=[val for tup in zip(*listy) for val in tup] listz = [zplot[0], zplot[1], padar, zplot[0], zplot[2], padar] zi=[val for tup in zip(*listz) for val in tup] ax2.plot3D(xi,yi,zi,'k') #add the lines joining the atoms of type 0 #repeat for type 1 #this assume that there are bonds between the 1st and 2nd atoms in the coordinates list #and between the 1st and 3rd #(edit the indices in the below code if you set up your atom_base_posns functions differently) padar=np.empty(len(xplot[3])) padar[:]=np.NaN #interleave values into a 1D array listx = [xplot[3], xplot[4], padar, xplot[3], xplot[5], padar] xi=[val for tup in zip(*listx) for val in tup] listy = [yplot[3], yplot[4], padar, yplot[3], yplot[5], padar] yi=[val for tup in zip(*listy) for val in tup] listz = [zplot[3], zplot[4], padar, zplot[3], zplot[5], padar] zi=[val for tup in zip(*listz) for val in tup] ax2.plot3D(xi,yi,zi,'r') #add the lines joining the atoms of type 1 ##add labels #ax2.set_xlabel('x (add unit)') #ax2.set_ylabel('y (add unit)') #ax2.set_zlabel('z (add unit)') #ax2.set_title('molecule positions') #ax2.set_aspect('auto') ##ax2.view_init(elev=10., azim=30.) #adjust 'camera angle' with this command if desired - angles are in degrees #plt.show() return f2,ax2 #returns the figure axis handle. This could be useful if you want to edit the figure outside of the function def allatomposns(moltypes,xa,ya,za,phia,thetaa): rlist=[] qlist=[] #the Python syntax in the next line using zip() allows you to cycle through several arrays (of the same size) at once for x,y,z,moltype,phi,theta in zip(xa,ya,za,moltypes,phia,thetaa): #loop through molecules rpositions,qvalues=rotmol_atomposns(moltype,[x,y,z],phi,theta) #generate arrays of all atom positions for rp,qv in zip(rpositions,qvalues): rlist.append(rp) qlist.append(qv) return rlist,qlist ``` [END OF PROVIDED CODE] Just to repeat: you can use the functions defined by the code above without modifying the code. Feel free not to read the code but just use it. You pass details to the functions as follows... mol_plot(moltypes,xa,ya,za,phia,thetaa). This makes a 3D plot of the positions of molecules with the type in the array moltypes (0 for CO2 or 1 for water etc) and with anchors at locations stored in xa, ya and za and orientiations store in phia and thetaa. It returns handles to the plot so that you can modify things like the axis labels. There is an example of how to use it in the cell below. allatomposns(moltypes,xa,ya,za,phir,thetar) has the same input arguments. It returns the coordinates of all the atoms in the molecules and their charges. You should be able to use these outputs with the functions you already have in order to find the electric field and potential. There is an example of how to use it in the cell below. The code above also defines a function rotmol_atomposns(moltype,rmtranslation,phimrot,thetamrot). This is called by each of the other two functions. You probably won't need to call it directly yourself. It returns the positions of atoms in a particular molecule according to the position of the anchor and orientation of the molecule. **Use the code in the next two cells to understand how to use the functions mol_plot() and allatomposns()**: ``` #define molecule anchor positions and orientations here #below values are for just two molecules; to add more molecules to the list, add new elements to each array #this example defines three CO2 molecules, positioned at (0,0,0), (0.2,0.2,0.2), (0.4,0.4,0.4) in the units used and with no rotation xa=[0.0,0.2,0.4] #x coordinates of molecule anchors ya=[0.0,0.2,0.4] #y coordinates of molecule anchors za=[0.0,0.2,0.4] #z coordinates of molecule anchors thetar=[0.0,0.0,0.0] #no theta rotation for any of the molecules phir=[0.0,0.0,0.0] #no phi rotation for any of the molecules moltypes=[1,0,2] #code to indicate what type of molecule each one is. type=0 for CO2, 1 for water #%matplotlib notebook f1,ax1=mol_plot(moltypes,xa,ya,za,phir,thetar) #add labels ax1.set_xlabel('x (nm)') ax1.set_ylabel('y (nm)') ax1.set_zlabel('z (nm)') ax1.set_title('molecule positions') ax1.set_aspect('auto') #ax1.view_init(elev=10., azim=30.) #adjust 'camera angle' with this command if desired - angles are in degrees f1.show() #Example of calling function allatomposns() to get the coordinates and charges of all atoms in a list rall,qall=allatomposns(moltypes,xa,ya,za,phir,thetar) #print('Positions: ',rall) #print('Charges: ',qall) #print('Position of atom 0:',rall[0]) #print('Charge of atom 0:',qall[0]) ``` You can control the camera angle for the 3d plot either using the command provided in the code or (most conveniently) interactively by dragging with the mouse on the figure. If this does not work interactively in the Jupyter notebook, try uncommenting the 'magic command' at the very top of the notebook and restarting the kernel and re-running the code, or you could also download the code as a .py and run it in Spyder or another Python interface. To understand how the molecule rotation angles work, make the following changes to the molecule orientations and check that the resulting positions are as you expect: - molecule at the origin, rotation of theta=pi/8; phi=0. - molecule at the origin, rotation of phi=pi/8; theta=0. - The previous check should give the same result as the initial code: since the orientation of the molecule is along the z-axis, it looks the same after a rotation in the xy plane, around the z-axis (which is what a phi rotation is). Try changing the definition of the base molecule so that the molecule is instead along the x-axis and repeat the last two checks. - molecule at [1,1,1], rotation of theta=pi/8; phi=0. - more than one molecule at different positions; try different rotations for different molecules. You don't need to include these in your report; they are just to help you check you understand how the code works. ``` # checked ``` Now modify the code by editing the incomplete atom_base_posns function so that it works for water as well, as follows: The water molecule is quite polar. Each O-H bond has a dipole moment and its overall dipole moment is 1.85 Debyes. To model the water molecule, consider the effective charge on each atom to be such as to give the dipole moment of each O-H bond - i.e., the magnitude of the charge on the O atom will be less than 2e! This is a simplified way of describing the molecule, but we will use it here. Take the angle between the two O-H bonds to be fixed at 104.5$^\circ$ and take each O-H bond to have a fixed length of 96pm. ``` # Done! ``` Now distribute 6 water molecules so that the position of the O atom in each is completely random within a cube with side length 1nm. Make the orientation of each molecule random as well. Hint: make use of numpy's random function (see below). Use the mol_plot function above to display the positions of the water molecules on a 3D plot. Try re-running your code several times to check it's working as you expect. ``` #define molecule anchor positions and orientations here #below values are for just two molecules; to add more molecules to the list, add new elements to each array #this example defines three CO2 molecules, positioned at (0,0,0), (0.2,0.2,0.2), (0.4,0.4,0.4) in the units used and with no rotation xa=np.random.rand(6) #x coordinates of molecule anchors ya=np.random.rand(6) #y coordinates of molecule anchors za=np.random.rand(6) #z coordinates of molecule anchors thetar=np.random.rand(6)*np.pi*2 #no theta rotation for any of the molecules phir=np.random.rand(6)*np.pi*2 #no phi rotation for any of the molecules moltypes=[1,1,1,1,1,1] #code to indicate what type of molecule each one is. type=0 for CO2, 1 for water #%matplotlib notebook f1,ax1=mol_plot(moltypes,xa,ya,za,phir,thetar) #add labels ax1.set_xlabel('x (nm)') ax1.set_ylabel('y (nm)') ax1.set_zlabel('z (nm)') ax1.set_title('molecule positions') ax1.set_aspect('auto') #ax1.view_init(elev=10., azim=30.) #adjust 'camera angle' with this command if desired - angles are in degrees f1.show() #Example of calling function allatomposns() to get the coordinates and charges of all atoms in a list rall,qall=allatomposns(moltypes,xa,ya,za,phir,thetar) #print('Positions: ',rall) #print('Charges: ',qall) #print('Position of atom 0:',rall[0]) #print('Charge of atom 0:',qall[0]) # Use the function provided above to get the position coordinates of all 18 atoms and the corresponding charge r_randMolecules = rall q_randMolecules = qall ``` Use the function provided above to get the position coordinates of all 18 atoms and the corresponding charge ``` # Check the above code cell, functionlised as below '''r_randMolecules = rall q_randMolecules = qall''' ``` Using the functions you already have, write some code which calculates the potential at a set of points along a line parallel to the x-axis, i.e. along (x,y$_1$,z$_1$) where y$_1$ and z$_1$ are constants and plot this out. ``` # Generate the test point align with x-axis @interact def make_plot(samp_y=(0, 1, 0.02), samp_z=(0, 1, 0.02)): num_samp = 100 # number of sampling along z axis samp_x = np.linspace(0,1,num_samp) #sampling length along z axis coord_samp = np.stack((samp_x, np.full(num_samp,samp_y),np.full(num_samp,samp_z)),axis=-1) #calculate potl for each test points potl_molecules = [] for i in coord_samp: potl_molecules.append(potl_sum(i,r_randMolecules,q_randMolecules)) #plot potl for each test point plt.figure(figsize=(8,5.5)) plt.title('Electric field potential along x axis',fontsize=14) plt.plot(samp_x, potl_molecules, marker='') plt.xlabel('x') plt.ylabel('Potential (V)') plt.show(); ``` Comment in your report how the form of the potential variation along the x-axis depends on the locations of the molecules and relate this to physics. Include in your report a 1D plot showing the variation of the potential through the centre of the cube parallel to the x-axis to illustrate what you are describing. For each of these plots, include also a plot of the corresponding configuration of molecules (as mol_plot generates) ## Investigations Investigate (as described below) each of the following arrangements of molecules (and ions) in a 1nm-cube with sides parallel to the x,y and z axes: (a) Six water molecules (b) Six water molecules and a Na+ ion. (c) Six water molecules in a uniform electric field in the +x direction (such as they might experience close to a planar charged sheet like an electrode in the yz plane). For each case: Investigate how the potential energy of the collection depends on their location and orientation. Find the lowest potential energy arrangement you can. For this arrangement: - find the energy of this arrangement in SI units, relative to the energy with all of the water molecules/ions separated by a large distance from one another. - plot the configuration which leads to this lowest energy (and include it in your report) Hint: a crude way to try to find the lowest energy would be to could repeatedly evaluate the potential energy for many randomly chosen configurations and see which one has the lowest energy. This is not the only or best way though. For (b), in reality, the electron clouds around the oxygen and sodium nuclei would limit how close the oxygen atom and Na+ ion are allowed to approach (a quantum effect related to the Pauli exclusion principle). Since here you are making a classical model, you might need to introduce a sensible minimum distance of approach of the two nuclei. For (c), one way to approach the problem would be to consider the dipoles. ``` '''[0.5, 0.5, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0] [0.164609, 0.835391, 0.833444, 0.166556, 0.664609, 0.335391, 0.333444, 0.666556] [0.439751, 0.939751, 0.565409, 0.065409, 0.439751, 0.939751, 0.565409, 0.065409] -1.7237695774930018e-63 -1.7493207101374834e-63''' #Degugging Part 1/3 #this example defines three CO2 molecules, positioned at (0,0,0), (0.2,0.2,0.2), (0.4,0.4,0.4) in the units used and with no rotation xa=[0.5] #x coordinates of molecule anchors ya=[0.5] #y coordinates of molecule anchors za=[0.5] #z coordinates of molecule anchors thetar=np.full(len(xa),0) #no theta rotation for any of the molecules phir=[0] #no phi rotation for any of the molecules moltypes=np.full(len(xa),1) #code to indicate what type of molecule each one is. type=0 for CO2, 1 for water rall,qall=allatomposns(moltypes,xa,ya,za,phir,thetar) r_randMolecules = rall q_randMolecules = qall print(potl_energy_sum(rall,qall)) def yieldPotlEfield(num_samp,start_coord,end_coord): samp_x= np.linspace(start_coord,end_coord,num_samp) #sampling length along z axis samp_y= np.linspace(start_coord,end_coord,num_samp) samp_z= np.linspace(start_coord,end_coord,num_samp) samp_yy, samp_zz, samp_xx = np.meshgrid(samp_y, samp_z,samp_x) #calculate potl, Efield for each test points potl_molecules = [] Efield_molecule2D = [] for i in range(len(samp_xx)): pinlistV1=[] pinlistE1_2D=[] for j in range(len(samp_xx[i])): pinlistV2=[] pinlistE2_2D=[] for k in range(len(samp_xx[i][j])): EfieldVectorL2D=[] r0 = np.array([samp_xx[i][j][k],samp_yy[i][j][k],samp_zz[i][j][k]]) #Calcluate 1D potl for each in 3D pinlistV2.append(potl_sum(r0,r_randMolecules,q_randMolecules)) #Calcluate 2D Efield (x,z) for each in 3D EfieldVector = Efield_sum(r0,r_randMolecules,q_randMolecules) EfieldVectorL2D.append(EfieldVector[0]) EfieldVectorL2D.append(EfieldVector[1]) pinlistE2_2D.append(EfieldVectorL2D) pinlistV1.append(pinlistV2) pinlistE1_2D.append(pinlistE2_2D) potl_molecules.append(pinlistV1) Efield_molecule2D.append(pinlistE1_2D) return samp_x, samp_y, samp_z, samp_yy, samp_zz, samp_xx, np.array(Efield_molecule2D), np.array(potl_molecules) print(rall,qall) #Debugging Part 2/3 samp_x, samp_y, samp_z, samp_yy, samp_zz, samp_xx, Efield_molecule2D,potl_molecules = yieldPotlEfield(13,-0.5,1.5) @interact def make_plot_2(cmap=(0, 31, 1),elevpara=(-27,90,1),azimpara=(-117,0,1)): f1,ax1=mol_plot(moltypes,xa,ya,za,phir,thetar) ax1.set_xlabel('x (nm)') ax1.set_ylabel('y (nm)') ax1.set_zlabel('z (nm)') ax1.set_title('molecule positions') ax1.set_aspect('auto') ax1.view_init(elev=elevpara, azim=azimpara) #adjust 'camera angle' with this command if desired - angles are in degrees f1.show(); cmapAll = [ 'prism', 'ocean', 'gist_earth', 'terrain', 'gist_stern', 'gnuplot', 'gnuplot2', 'CMRmap', 'cubehelix', 'brg', 'gist_rainbow', 'rainbow', 'jet', 'turbo','nipy_spectral','flag', 'gist_ncar','Pastel1', 'Pastel2', 'Paired', 'Accent', 'Dark2', 'Set1', 'Set2', 'Set3', 'tab10', 'tab20', 'tab20b', 'tab20c','twilight', 'twilight_shifted', 'hsv'] cmapHere = cmapAll[cmap] # Creating figure fig = plt.figure() ax = plt.axes(projection="3d") # Creating plot img = ax.scatter3D(samp_xx, samp_yy, samp_zz, c=potl_molecules, alpha=0.3, marker='.',cmap=cmapHere) ax.set_xlabel('x (nm)') ax.set_ylabel('y (nm)') ax.set_zlabel('z (nm)') ax.set_title('Molecule Potential') ax.view_init(elev=elevpara, azim=azimpara) fig.colorbar(img) plt.show(); #Debugging Part 3/3 samp_x, samp_y, samp_z, samp_yy, samp_zz, samp_xx, Efield_molecule2D,potl_molecules = yieldPotlEfield(50,-0.5,1.5) @interact def plot_2(Z_Index=(0, len(potl_molecules), 1),Line_Width=(1,15,1)): #Slice the 3D value matrix to 2D Efield_molecule2D_2D = Efield_molecule2D[Z_Index] potl_molecules_2D = potl_molecules[Z_Index] #plot the first FACTOR = 5e-1 levels = np.linspace(np.nanmin(potl_molecules_2D) * FACTOR, np.nanmax(potl_molecules_2D) * FACTOR, 50) fig, ax = plt.subplots(figsize=(5,5)) ax.contour(samp_x, samp_y, potl_molecules_2D, levels) lw = np.linalg.norm(Efield_molecule2D_2D, axis=2) lw /= lw.max() ax.streamplot(samp_x, samp_y, Efield_molecule2D_2D[:, :, 0], Efield_molecule2D_2D[:, :, 1], linewidth=Line_Width*lw, density=2,color='sienna') ax.set_xlabel('x (nm)') ax.set_ylabel('y (nm)') ax.set_title('Molecule Potential') ax.axis('equal') #plot the second fig, ax = plt.subplots() mappable = ax.pcolormesh(samp_x, samp_y, potl_molecules_2D) ax.set_xlabel('x (nm)') ax.set_ylabel('y (nm)') ax.set_title('Molecule Potential') plt.colorbar(mappable); ```
github_jupyter
``` import os import re import pandas as pd import numpy as np import matplotlib.pyplot as plt import lightgbm as lgbm labels = pd.read_csv('../data/processed/labels_matched.csv') clin = pd.read_csv("../data/raw/clinical.csv", parse_dates=['Case Start', 'Case End']) clin = clin[clin.Operation.notnull()] df = labels[['CaseID']].merge(clin, how='left', on='CaseID') clin['Age'].isnull().sum() features = ['Age', 'Height', 'Weight', 'BMI', 'PreopHb', 'PreopBUN', 'PreopCr', 'PreopPT', 'PreopPTT', 'PreopGlu'] ff = [] for f in features: if f in df.columns: ff.append(f) df = df[['CaseID'] + ff].merge(labels[['CaseID', 'stroke']]) target = df['stroke'] df = df.drop(['stroke'], axis=1) from sklearn.model_selection import train_test_split aa = pd.read_csv("/Users/suzinyou/Downloads/clinial_0922.csv") qq = [] for col in aa.columns: any_ = aa[col].apply(is_numeric).any() if any_: qq.append(col) aa[qq[10:-2]].head() aa = aa.merge(labels, how='inner', on='CaseID') aa[aa.stroke == 1].shape ``` ## Which columns have invalid samples? ``` aa = aa.set_index('CaseID') features_problematic = qq[10:-2] pd.set_option('display.max_columns', 999) overlap = [] od = dict() for col in features_problematic: mask_ = aa[col].apply(is_not_numeric) intersection = np.intersect1d(aa.index[mask_].values, labels.CaseID.values) if len(intersection) > 0: display(Markdown("### {}".format(col))) display(Markdown("#### Dirty samples")) display(aa[mask_]) display(Markdown("#### Normal samples")) display(aa[col][~mask_].sample(6)) overlap.append(col) od[col] = intersection ``` ### Processing rules 1. Don't use anesthesia duration. Highly correlated with surgery duration 2. PreopPlt: take the first present number 3. PreopGPT: '<1' --> '1' ``` from IPython.display import display, HTML, Markdown overlap od['Age'] def process_age(x): if x.endswith('개월'): return float(x[:-2]) / 12 else: return float(x) def process_preop_plt(x): if isinstance(x, (float, int)): return x else: try: return int(x.split()[0]) except: try: return int(x.split('p')[0]) except: return int(x.split('c')[0]) def process_preop_gpt(x): if x == '<1': return 1 else: return x clin = aa.copy() clin = clin.merge(labels, how='left', on='CaseID') clin.loc[:, 'stroke'] = clin.stroke.fillna(0) clin.loc[:, 'brain_event'] = clin.brain_event.fillna(0) clin.loc[:, 'Age'] = clin['Age'].apply(process_age) clin.loc[:, 'PreopPlt'] = clin['PreopPlt'].apply(process_preop_plt) clin.loc[:, 'PreopGPT'] = clin['PreopGPT'].apply(process_preop_gpt) clin = clin.drop(['source_CaseID', 'brain_event'], axis=1) clin.shape clin.isnull().sum() clin.to_csv("../data/interim/clinical_0935.csv", index=False) clin.columns def is_not_numeric(el): try: float(str(el)) except ValueError as e: return True return False ```
github_jupyter
# Image Scene Classification using ViT-S/32 Medium Augmentation ## Setup ``` !nvidia-smi ``` ## Data Gathering ``` !wget -q http://data.vision.ee.ethz.ch/ihnatova/camera_scene_detection_train.zip !unzip -qq camera_scene_detection_train.zip ``` ## Imports ``` import numpy as np import matplotlib.pyplot as plt from imutils import paths from pprint import pprint from collections import Counter from sklearn.preprocessing import LabelEncoder import tensorflow as tf from tensorflow import keras import tensorflow_hub as hub SEEDS = 42 tf.random.set_seed(SEEDS) np.random.seed(SEEDS) ``` ## Data Parsing ``` image_paths = list(paths.list_images("training")) np.random.shuffle(image_paths) image_paths[:5] ``` ## Counting number of images for each classes ``` labels = [] for image_path in image_paths: label = image_path.split("/")[1] labels.append(label) class_count = Counter(labels) pprint(class_count) ``` ## Splitting the dataset ``` TRAIN_SPLIT = 0.9 i = int(len(image_paths) * TRAIN_SPLIT) train_paths = image_paths[:i] train_labels = labels[:i] validation_paths = image_paths[i:] validation_labels = labels[i:] print(len(train_paths), len(validation_paths)) ``` ## Define Hyperparameters ``` BATCH_SIZE = 128 AUTO = tf.data.AUTOTUNE EPOCHS = 10 IMG_SIZE = 224 RESIZE_TO = 260 NUM_CLASSES = 30 TOTAL_STEPS = int((len(train_paths) / BATCH_SIZE) * EPOCHS) WARMUP_STEPS = 10 INIT_LR = 0.03 WAMRUP_LR = 0.006 ``` ## Encoding labels ``` label_encoder = LabelEncoder() train_labels_le = label_encoder.fit_transform(train_labels) validation_labels_le = label_encoder.transform(validation_labels) print(train_labels_le[:5]) ``` ## Determine the class-weights ``` trainLabels = keras.utils.to_categorical(train_labels_le) classTotals = trainLabels.sum(axis=0) classWeight = dict() # loop over all classes and calculate the class weight for i in range(0, len(classTotals)): classWeight[i] = classTotals.max() / classTotals[i] ``` ## Convert the data into TensorFlow `Dataset` objects ``` train_ds = tf.data.Dataset.from_tensor_slices((train_paths, train_labels_le)) val_ds = tf.data.Dataset.from_tensor_slices((validation_paths, validation_labels_le)) ``` ## Define the preprocessing function ``` @tf.function def preprocess_train(image_path, label): image = tf.io.read_file(image_path) image = tf.image.decode_jpeg(image, channels=3) image = tf.image.resize(image, (RESIZE_TO, RESIZE_TO)) image = tf.image.random_crop(image, [IMG_SIZE, IMG_SIZE, 3]) image = tf.cast(image, tf.float32) / 255.0 return (image, label) @tf.function def preprocess_test(image_path, label): image = tf.io.read_file(image_path) image = tf.image.decode_jpeg(image, channels=3) image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE)) image = tf.cast(image, tf.float32) / 255.0 return (image, label) ``` ## Data Augmentation ``` data_augmentation = tf.keras.Sequential( [ tf.keras.layers.experimental.preprocessing.RandomFlip("horizontal_and_vertical"), tf.keras.layers.experimental.preprocessing.RandomRotation(factor=0.02), tf.keras.layers.experimental.preprocessing.RandomZoom( height_factor=0.2, width_factor=0.2 ), ], name="data_augmentation", ) ``` ## Create the Data Pipeline ``` pipeline_train = ( train_ds .shuffle(BATCH_SIZE * 100) .map(preprocess_train, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .map(lambda x, y: (data_augmentation(x), y), num_parallel_calls=AUTO) .prefetch(AUTO) ) pipeline_validation = ( val_ds .map(preprocess_test, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) ``` ## Visualise the training images ``` image_batch, label_batch = next(iter(pipeline_train)) plt.figure(figsize=(10, 10)) for i in range(9): ax = plt.subplot(3, 3, i + 1) plt.imshow(image_batch[i].numpy()) label = label_batch[i] plt.title(label_encoder.inverse_transform([label.numpy()])[0]) plt.axis("off") ``` ## Load model into KerasLayer ``` def training_model(vit_model_url): model = keras.Sequential( [ keras.layers.InputLayer((IMG_SIZE, IMG_SIZE, 3)), hub.KerasLayer(vit_model_url, trainable=True), keras.layers.Dense(NUM_CLASSES, activation="softmax"), ] ) return model model = training_model("https://tfhub.dev/sayakpaul/vit_r26_s32_medaug_fe/1") model.summary() ``` ## Learning Rate Scheduling For fine-tuning ``` # Reference: # https://www.kaggle.com/ashusma/training-rfcx-tensorflow-tpu-effnet-b2 class WarmUpCosine(keras.optimizers.schedules.LearningRateSchedule): def __init__( self, learning_rate_base, total_steps, warmup_learning_rate, warmup_steps ): super(WarmUpCosine, self).__init__() self.learning_rate_base = learning_rate_base self.total_steps = total_steps self.warmup_learning_rate = warmup_learning_rate self.warmup_steps = warmup_steps self.pi = tf.constant(np.pi) def __call__(self, step): if self.total_steps < self.warmup_steps: raise ValueError("Total_steps must be larger or equal to warmup_steps.") learning_rate = ( 0.5 * self.learning_rate_base * ( 1 + tf.cos( self.pi * (tf.cast(step, tf.float32) - self.warmup_steps) / float(self.total_steps - self.warmup_steps) ) ) ) if self.warmup_steps > 0: if self.learning_rate_base < self.warmup_learning_rate: raise ValueError( "Learning_rate_base must be larger or equal to " "warmup_learning_rate." ) slope = ( self.learning_rate_base - self.warmup_learning_rate ) / self.warmup_steps warmup_rate = slope * tf.cast(step, tf.float32) + self.warmup_learning_rate learning_rate = tf.where( step < self.warmup_steps, warmup_rate, learning_rate ) return tf.where( step > self.total_steps, 0.0, learning_rate, name="learning_rate" ) scheduled_lrs = WarmUpCosine( learning_rate_base=INIT_LR, total_steps=TOTAL_STEPS, warmup_learning_rate=WAMRUP_LR, warmup_steps=WARMUP_STEPS, ) lrs = [scheduled_lrs(step) for step in range(TOTAL_STEPS)] plt.plot(lrs) plt.xlabel("Step", fontsize=14) plt.ylabel("LR", fontsize=14) plt.show() ``` ## Define optimizer and loss ``` optimizer = keras.optimizers.SGD(scheduled_lrs, clipnorm=1.0) loss_fn = keras.losses.SparseCategoricalCrossentropy() ``` ## Compile the model ``` model.compile(optimizer=optimizer, loss=loss_fn, metrics=['accuracy']) ``` ## Setup Callbacks ``` train_callbacks = [ keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=3, restore_best_weights=True), keras.callbacks.CSVLogger('./train-logs.csv'), keras.callbacks.TensorBoard(histogram_freq=1) ] ``` ## Train the model ``` history = model.fit( pipeline_train, batch_size=BATCH_SIZE, epochs= EPOCHS, validation_data=pipeline_validation, class_weight=classWeight, callbacks=train_callbacks) ``` ## Plot the Metrics ``` def plot_hist(hist): plt.plot(hist.history["accuracy"]) plt.plot(hist.history["val_accuracy"]) plt.plot(hist.history["loss"]) plt.plot(hist.history["val_loss"]) plt.title("Training Progress") plt.ylabel("Accuracy/Loss") plt.xlabel("Epochs") plt.legend(["train_acc", "val_acc", "train_loss", "val_loss"], loc="upper left") plt.show() ``` ## Evaluate the model ``` accuracy = model.evaluate(pipeline_validation)[1] * 100 print("Accuracy: {:.2f}%".format(accuracy)) plot_hist(history) ``` ## Upload the TensorBoard logs ``` !tensorboard dev upload --logdir logs --name "ViT-S32-Medium-Augmentation Model" --description "ViT SS32-Medium-Augmentation Model trained on Image-Scene-Dataset" ``` **Link:** https://tensorboard.dev/experiment/35bwOLWxQLqO0E11sdveDQ/
github_jupyter
## Setup ``` import ipywidgets as widgets from ipywidgets import Button, Layout, IntSlider, HBox, VBox, interact, interact_manual import os import csv from IPython.display import display, clear_output, Markdown, Image import rpy2.rinterface import copy %load_ext rpy2.ipython import pandas as pd ## Initializing all relevant variables as global variables global sample_column_id global graph_output_dir global stats_output_dir global amr_count_matrix_filepath global amr_metadata_filepath global megares_annotation_filename global biom_file global tre_file global tax_fasta global taxa_file global microbiome_temp_metadata_file global list_vals_a global list_vals_m ``` ### View metadata file and select column ID ``` data_f = os.listdir(str(os.getcwd()) + "/data") data_f_csv = [x for x in data_f if ".csv" in x] meta_filename = widgets.Dropdown( options=data_f_csv, description='Select CSV File', disabled=False, layout=Layout(width='50%'), style = {'description_width': 'initial'} ) # Show metadata head button col_names_meta = [] def open_meta(b): global meta meta = pd.read_csv(str("data/" + meta_filename.value)) display(meta.head()) col_names_meta = list(meta.columns) show_meta_button = widgets.Button( description="Print head of metadata file", icon = 'open', layout=Layout(width='70%')) show_meta_button.on_click(open_meta) def select_col(): global sample_column_id sample_column_id = widgets.Dropdown( options=list(meta.columns), description='Column Names of Selected File', disabled=False, layout=Layout(width='50%'), style = {'description_width': 'initial'} ) display(sample_column_id) ``` ### Text Entry Widgets ``` graph_output_dir = widgets.Text( value='graphs', placeholder='Name the output directory for where the graphs should be saved', description='Directory Name for Graphs: ', disabled=False, layout=Layout(width='70%'), style = {'description_width': 'initial'} ) stats_output_dir = widgets.Text( value='stats', placeholder='Name the output directory for where the statistics should be saved', description='Directory Name for Stats: ', disabled=False, layout=Layout(width='70%'), style = {'description_width': 'initial'} ) # direct_name_data = VBox([sample_column_id, graph_output_dir, stats_output_dir]) ``` ## File Paths Creating elements to enter in file names in text boxes ``` amr_count_file = widgets.Dropdown( options=os.listdir(str(os.getcwd()) + "/data"), value = "test_AMR_analytic_matrix.csv", description='AMR Count Matrix File:', disabled=False, layout=Layout(width='50%'), style = {'description_width': 'initial'} ) amr_metadata_file = widgets.Dropdown( options=os.listdir(str(os.getcwd()) + "/data"), value = "test_AMR_metadata.csv", description='Metadata File:', disabled=False, layout=Layout(width='50%'), style = {'description_width': 'initial'} ) megares_annot = widgets.Dropdown( options=os.listdir(str(os.getcwd()) + "/data"), value = "test_AMR_megares_annotations_v1.03.csv", description='MEGARes Annotation File:', disabled=False, layout=Layout(width='50%'), style = {'description_width': 'initial'} ) resistome_filenames = VBox([amr_count_file, amr_metadata_file, megares_annot]) biom = widgets.Dropdown( options=os.listdir(str(os.getcwd()) + "/data"), value = "test_16S_otu_table_json.biom", description='Biom File:', disabled=False, layout=Layout(width='50%'), style = {'description_width': 'initial'} ) tre = widgets.Dropdown( options=os.listdir(str(os.getcwd()) + "/data"), value = "test_16S_tree.nwk", description='tree file:', disabled=False, layout=Layout(width='50%'), style = {'description_width': 'initial'} ) tax = widgets.Dropdown( options=os.listdir(str(os.getcwd()) + "/data"), value = "test_16S_dna-sequences.fasta", description='fasta file:', disabled=False, layout=Layout(width='50%'), style = {'description_width': 'initial'} ) taxa = widgets.Dropdown( options=os.listdir(str(os.getcwd()) + "/data"), value = "test_16S_taxonomy.tsv", description='taxonomy file:', disabled=False, layout=Layout(width='50%'), style = {'description_width': 'initial'} ) microbiome_temp = widgets.Dropdown( options=os.listdir(str(os.getcwd()) + "/data"), value = "test_16S_metadata.csv", description='microbiome temp metadata file:', disabled=False, layout=Layout(width='50%'), style = {'description_width': 'initial'} ) microbiome_filenames = VBox([biom, tre, tax, taxa, microbiome_temp]) ## AMR Text Boxes microbiome_filenames = VBox([biom, tre, tax, taxa, microbiome_temp]) ## Function to save all the filepaths to the normal variable expected by the rest of the code def save_filepaths(b): global amr_count_matrix_filepath global amr_metadata_filepath global megares_annotation_filename global biom_file global tre_file global tax_fasta global taxa_file global microbiome_temp_metadata_file amr_count_matrix_filepath = "data/" + str(amr_count_file.value) amr_metadata_filepath = "data/" + str(amr_metadata_file.value) megares_annotation_filename = "data/" + str(megares_annot.value) biom_file = "data/" + str(biom.value) tre_file = "data/" + str(tre.value) tax_fasta = "data/" + str(tax.value) taxa_file = "data/" + str(taxa.value) microbiome_temp_metadata_file = "data/" + str(microbiome_temp.value) print("AMR Count Matrix Filepath: " + str(amr_count_file.value)) print("AMR Metadata Filepath: " + str(amr_metadata_file.value)) print("MEGARes Annotation Filepath: " + str(megares_annot.value)) print("Biom Filepath: " + str(biom.value)) print("Fasta Filepath: " + str(tre.value)) print("Taxonomy Filepath: " + str(taxa.value)) print("Microbiome Temp Metadata Filepath: " + str(microbiome_temp.value)) print() print("All filepaths saved") save_filepath_button = widgets.Button( description="Save the filepaths for analysis", icon = 'save', layout=Layout(width='70%')) save_filepath_button.on_click(save_filepaths) ``` ## AMR Exploratory variables Multiple text input boxes backend code ``` ### Making the function for the slider to choose number of accordions exp_graph_var_amr = widgets.IntSlider( value=5, min=0, max=10, step=1, description='AMR', continuous_update=False, orientation='horizontal', readout=True, readout_format='d' ) exp_graph_var_microbiome = widgets.IntSlider( value=5, min=0, max=10, step=1, description= 'Microbiome', continuous_update=False, orientation='horizontal', readout=True, readout_format='d' ) ### Making the different text boxes for the 4 variables name = widgets.Text( value='', placeholder='Name', disabled=False, layout=Layout(width='70%'), style = {'description_width': 'initial'} ) #widgets.Text(value='', placeholder='Name', disabled=False, layout=Layout(width='70%'), style = {'description_width': 'initial'}) subsets = widgets.Text( value='', placeholder='Subsets', disabled=False, layout=Layout(width='70%'), style = {'description_width': 'initial'} ) exploratory_var = widgets.Text( value='', placeholder='Exploratory Variable', disabled=False, layout=Layout(width='70%'), style = {'description_width': 'initial'} ) order = widgets.Text( value='', placeholder='Order', disabled=False, layout=Layout(width='70%'), style = {'description_width': 'initial'} ) ### Creating a vertical box to store all of the text boxes into a single object menu1 = widgets.VBox([name, subsets, exploratory_var, order]) ### Updating the number of accordion pages based on the value selected in the slider above, then prints it def graph_vars_amr(exp_graph_var): list_widgets_a = [] # This creates all the new variables to store the text boxes in for i in range(exp_graph_var): name_box = "global name_a{}; name_a{} = widgets.Text(placeholder='name',layout=Layout(width='70%'))".format(i,i) subset_box = "global subset_a{}; subset_a{} = widgets.Text(placeholder='Subset',layout=Layout(width='70%'))".format(i,i) exp_box = "global exploratory_a{}; exploratory_a{} = widgets.Text(placeholder='Exploratory Variable',layout=Layout(width='70%'))".format(i,i) order_box = "global order_a{}; order_a{} = widgets.Text(placeholder='Order',layout=Layout(width='70%'))".format(i,i) exec(name_box) exec(subset_box) exec(exp_box) exec(order_box) # This will assign a new menu_ variable for each accordion that needs to be printed for i in range(exp_graph_var): string = "menu_a{} = widgets.VBox([name_a{}, subset_a{}, exploratory_a{}, order_a{}])".format(i, i, i, i, i) exec(string) # This should append all the menu variables into the list_widgest list and pass it into the accordion widget for i in range(0, exp_graph_var): string = "list_widgets_a.append(menu_a{})".format(i) exec(string) # Creates and displays the final accordion widget accordion_a = widgets.Accordion(children=list_widgets_a) return accordion_a def graph_vars_mic(exp_graph_var): list_widgets_m = [] # This creates all the new variables to store the text boxes in for i in range(exp_graph_var): name_box_m = "global name_m{}; name_m{} = widgets.Text(placeholder='name',layout=Layout(width='70%'))".format(i,i) subset_box_m = "global subset_m{}; subset_m{} = widgets.Text(placeholder='Subset',layout=Layout(width='70%'))".format(i,i) exp_box_m = "global exploratory_m{}; exploratory_m{} = widgets.Text(placeholder='Exploratory Variable',layout=Layout(width='70%'))".format(i,i) order_box_m = "global order_m{}; order_m{} = widgets.Text(placeholder='Order',layout=Layout(width='70%'))".format(i,i) exec(name_box_m) exec(subset_box_m) exec(exp_box_m) exec(order_box_m) # This will assign a new menu_ variable for each accordion that needs to be printed for i in range(exp_graph_var): string = "menu_m{} = widgets.VBox([name_m{}, subset_m{}, exploratory_m{}, order_m{}])".format(i, i, i, i, i) exec(string) # This should append all the menu variables into the list_widgest list and pass it into the accordion widget for i in range(0, exp_graph_var): string = "list_widgets_m.append(menu_m{})".format(i) exec(string) # Creates and displays the final accordion widget accordion_m = widgets.Accordion(children=list_widgets_m) return accordion_m ## Store all the variables in amr_exp in order to be written out to a csv file global list_vals_a global list_vals_m list_vals_a = [] list_vals_m = [] def save_print_variables(amr, mic): list_vals_a = [] list_vals_m = [] exp = ["_a", "_m"] num = [amr, mic] for i in range(2): for j in range(num[i]): analysis = exp[i] exec("order_new{}{} = order_format(order{}{}.value)".format(analysis, j, analysis, j)) exec("subset_new{}{} = subset_format(subset{}{}.value)".format(analysis, j, analysis, j)) string = 'list_vals{}.append([name{}{}.value, subset_new{}{}, exploratory{}{}.value, order_new{}{}])'.format(analysis, analysis, j, analysis, j, analysis, j, analysis, j) exec(string) #print(list_vals_a) #print("") #print(list_vals_m) return list_vals_a, list_vals_m # Create button to save variables from exploratory variables # list_vals_a, list_vals_m = save_print_variables(exp_graph_var_amr.value, exp_graph_var_microbiome.value) def save_explore_vars(b): list_vals_a, list_vals_m = save_print_variables(exp_graph_var_amr.value, exp_graph_var_microbiome.value) print("Exploratory variables saved") save_print_vars_button = widgets.Button( description="Save the exploratory variables for analysis", icon = 'save', layout=Layout(width='70%')) save_print_vars_button.on_click(save_explore_vars) # display(save_print_vars_button) ## Create the tabs to enter in variable data. def var_info(amr, mic): tab = widgets.Tab() if mic == 0: tab_contents = ["AMR"] children = [graph_vars_amr(amr)] tab.set_title(0, "AMR") elif amr == 0: tab_contents = ["Microbiome"] children = [graph_vars_mic(mic)] tab.set_title(0, "Microbiome") else: tab_contents = ["AMR", "Microbiome"] children = [graph_vars_amr(amr), graph_vars_mic(mic)] tab.set_title(0, "AMR") tab.set_title(1, "Microbiome") tab.children = children tab.titles = tab_contents return tab # Reformats the order variable def order_format(order_og): # Splits the text into the different items in the list order_list = order_og.split(",") # Removes the white spaces before and after each item order_list = [item.strip() for item in order_list] # Adds the quotation marks around each item order_list = ['"' + item + '"' for item in order_list] # Removing unnecessary characters from string order_list = 'c({})'.format(order_list) order_list = order_list.replace("[", "") order_list = order_list.replace("]", "") order_list = order_list.replace("'", "") order_list = order_list.replace(", ", ",") if order_og == "": order_list = "" return order_list # Reformats the subset variable def subset_format(subset_og): # Splits the text into the different items in the list order_list = subset_og.split(",") # Removes the white spaces before and after each item order_list = [item.strip() for item in order_list] # Adds the quotation marks around each item order_list = ["'" + item + "'" for item in order_list] # Removing unnecessary characters from string order_list = 'list({})'.format(order_list) order_list = order_list.replace("[", "") order_list = order_list.replace("]", "") order_list = order_list.replace('"', "") order_list = order_list.replace(", ", ",") if subset_og == "": order_list = "" return order_list ## USER ENTERED GUI # Function to save all variables into .csv file def vars_to_csv(b): with open('metagenome_analysis_vars.csv','w', newline='') as f: writer = csv.writer(f) # Column names writer.writerow(['sample_column_id', sample_column_id.value]) writer.writerow(['graph_output_dir', graph_output_dir.value]) writer.writerow(['stats_output_dir', stats_output_dir.value]) # Filepaths writer.writerow(['amr_count_matrix_filepath', amr_count_matrix_filepath]) writer.writerow(['amr_metadata_filepath', amr_metadata_filepath]) writer.writerow(['megares_annotation_filename', megares_annotation_filename]) writer.writerow(['biom_file', biom_file]) writer.writerow(['tre_file', tre_file]) writer.writerow(['tax_fasta', tax_fasta]) writer.writerow(['taxa_file', taxa_file]) writer.writerow(['microbiome_temp_metadata_file', microbiome_temp_metadata_file]) # AMR exploratory variables writer.writerow(["AMR_exploratory_analyses"]) writer.writerows(list_vals_a) # Microbiome exploratory variables writer.writerow(["microbiome_exploratory_analyses"]) writer.writerows(list_vals_m) print("CUSTOM variables Exported. Check directory for .csv file") ## DEFAULT INFORMATION def vars_to_csv_default(b): os.popen('cp metagenome_analysis_vars_DEFAULT.csv metagenome_analysis_vars.csv') print("DEFAULT variables Exported. Check directory for .csv file") vars_save_button = widgets.Button( description="Save CUSTOM variables for analysis script", icon = 'save', layout=Layout(width='50%')) vars_save_button_default = widgets.Button( description="Save DEFAULT variables for analysis script", icon = 'save', layout=Layout(width='50%')) vars_save_button.on_click(vars_to_csv) vars_save_button_default.on_click(vars_to_csv_default) ``` ## R Code ``` def stage_script(b): %R -o amr_melted_analytic -o microbiome_melted_analytic -o amr_melted_raw_analytic -o microbiome_melted_raw_analytic %R source("staging_script.R") stage_script_button = widgets.Button( description="Run the Staging Script to initialize objects", icon = 'gear', layout=Layout(width='50%')) stage_script_button.on_click(stage_script) # display(stage_script_button) ## For viewing the various metadata files generated from the script out = widgets.Output(layout={'border': '1px solid black'}) file_open = widgets.Dropdown( options=['amr_melted_analytic', 'amr_melted_raw_analytic', 'microbiome_melted_analytic', 'microbiome_melted_raw_analytic'], description='file:', disabled=False, ) def open_file(b): global df if file_open.value == "amr_melted_analytic": print("amr_melted_analytic") df = amr_melted_analytic display(df) display(out) elif file_open.value == "amr_melted_raw_analytic": print("amr_melted_raw_analytic") display(amr_melted_raw_analytic) df = amr_melted_raw_analytic display(out) elif file_open.value == "microbiome_melted_analytic": print("microbiome_melted_analytic") display(microbiome_melted_analytic) df = microbiome_melted_analytic display(out) elif file_open.value == "microbiome_melted_raw_analytic": print("microbiome_melted_raw_analytic") display(microbiome_melted_raw_analytic) df = microbiome_melted_raw_analytic display(out) open_file_button = widgets.Button( description="Open File", icon = 'open', layout=Layout(width='70%')) open_file_button.on_click(open_file) ``` ## Filter metadata files based on level_ID ``` def filter_level_ID_fun(b): global class_df global a_select global df if file_open.value == "amr_melted_analytic": df = amr_melted_analytic elif file_open.value == "amr_melted_raw_analytic": df = amr_melted_raw_analytic elif file_open.value == "microbiome_melted_analytic": df = microbiome_melted_analytic elif file_open.value == "microbiome_melted_raw_analytic": df = microbiome_melted_raw_analytic items = ['All']+sorted(df['Level_ID'].unique().tolist()) a_slider = widgets.IntSlider(min=0, max=len(df), step=1, value=10000) a_select = widgets.Select(options=items) def update_select_range(*args): if a_select.value=='All': a_slider.max = len(df) else: a_slider.max = len(df[df['Level_ID']==a_select.value]) a_select.observe(update_select_range, 'value') def view1(Level_ID='',Count=3): if Level_ID=='All': return df.head(Count) return df[df['Level_ID']==Level_ID].head(Count) print(file_open.value) widgets.interact(view1,Count=a_slider,Level_ID=a_select) open_file_filter_button = widgets.Button( description="Open File", icon = 'open', layout=Layout(width='50%')) open_file_filter_button.on_click(filter_level_ID_fun) ``` ## Print Figure ``` def print_fig(b): global class_df print("Working...") norm = "" if "raw" not in file_open.value: norm = "Normalized " if a_select.value == "All": print("Level_ID cannot be 'All'. Please select a different Level_ID.") else: class_df = df[df['Level_ID']== str(a_select.value)] import plotly.express as px #df.plot(kind='bar', x='ID', y='Normalized_Count',fill='Name', mode='stack') display(px.bar(x=class_df['ID'], y=class_df['Normalized_Count'] , color=class_df['Name'], labels = {"x" : "Sample ID", "y":str(norm + "Read Count"), "color":"Species"})) print_fig_button = widgets.Button( description="Print Plot", icon = 'open', layout=Layout(width='50%')) print_fig_button.on_click(print_fig) # Create microbiome widgets directory = widgets.Dropdown(options=os.listdir('graphs/Microbiome/')) images = widgets.Dropdown(options=os.listdir('graphs/Microbiome/' + directory.value)) # Updates the image options based on directory value def update_images(*args): images.options = os.listdir(directory.value) # Tie the image options to directory value directory.observe(update_images, 'value') # Show the images def show_images(fdir, file): display(Image(f'{fdir}/{file}')) ``` # Fresh start ``` # Creating a class to select the file for viewing class figures: def __init__(self): self.file_view = widgets.Dropdown( options=['amr_melted_analytic', 'amr_melted_raw_analytic', 'microbiome_melted_analytic', 'microbiome_melted_raw_analytic'], description='file:', disabled=False, ) relative = figures() total = figures() ``` ### Relative Abundance ``` # Relative Abundance Plot # Button function to create the file from the dropdown. Essentially save the dropdown selection and print the table def print_table_rel(b): global relative_df global a_select # exec(f"relative_df = {relative.file_view.value}") if relative.file_view.value == "amr_melted_analytic": relative_df = amr_melted_analytic elif relative.file_view.value == "amr_melted_raw_analytic": relative_df = amr_melted_raw_analytic elif relative.file_view.value == "microbiome_melted_analytic": relative_df = microbiome_melted_analytic elif relative.file_view.value == "microbiome_melted_raw_analytic": relative_df = microbiome_melted_raw_analytic # items is the list of all the different Level_ID's in the given data items = ['All']+sorted(relative_df['Level_ID'].unique().tolist()) # prints a select widget to select the level ID of interest a_select = widgets.Select(options=items) # Prints the final table. the functioin allows for the table to be updated dynamically def view1(Level_ID=''): if Level_ID=='All': return relative_df return relative_df[relative_df['Level_ID']==Level_ID] # actually printing the table widgets.interact(view1,Level_ID=a_select) # Button info open_table_rel = widgets.Button( description="Open File for Relative Abundance Plot", icon = 'open', layout=Layout(width='50%')) open_table_rel.on_click(print_table_rel) # Create functions to graph the relative abundance plot def print_rabundance_fig(b): global r_class_df # Check to make sure that the user has selected a taxonomic level to view before running try: print("Working. . . ") # Updates the dataframe when new Level_ID is selected r_class_df = relative_df[relative_df['Level_ID']== str(a_select.value)] new_df = copy.deepcopy(r_class_df) # copies the dataframe # These 2 lines normalize the values by ID and outputs it into a new column "normalized_weights" group_weights = new_df.groupby('ID').aggregate(sum) new_df['normalized_weights'] = new_df.apply(lambda row: row['Normalized_Count']/group_weights.loc[row['ID']][0],axis=1) # Plots the results into relative abundance plot import plotly.express as px display(px.bar(x=new_df['ID'], y=new_df['normalized_weights'], color=new_df['Name'], title = str("Relative Abundance - " + str(relative.file_view.value) + ": " + str(new_df["Level_ID"][0])), labels = {"x" : "Sample ID", "y":str("Normalized Read Count"), "color":"Species"})) except Exception: print("You likely have not selected a taxonomic level to view.") print("If you have selected a Level_ID that is not 'ALL', then another error has occured.") print("Contact Akhil Gupta (gupta305@umn.edu)") # Code for how the button looks print_rabun_fig_button = widgets.Button( description="Print Relative Abundance Plot", icon = 'open', layout=Layout(width='70%')) print_rabun_fig_button.on_click(print_rabundance_fig) ``` ##### Total Abundance Plot ``` # Total Abundance Plot # Button function to create the file from the dropdown. Essentially save the dropdown selection and print the table def print_table_tot(b): global total_df global t_select if total.file_view.value == "amr_melted_analytic": total_df = amr_melted_analytic elif total.file_view.value == "amr_melted_raw_analytic": total_df = amr_melted_raw_analytic elif total.file_view.value == "microbiome_melted_analytic": total_df = microbiome_melted_analytic elif total.file_view.value == "microbiome_melted_raw_analytic": total_df = microbiome_melted_raw_analytic # items is the list of all the different Level_ID's in the given data items_tot = ['All']+sorted(total_df['Level_ID'].unique().tolist()) # prints a select widget to select the level ID of interest t_select = widgets.Select(options=items_tot) # Prints the final table. the functioin allows for the table to be updated dynamically def view2(Level_ID=''): if Level_ID=='All': return total_df return total_df[total_df['Level_ID']==Level_ID] # actually printing the table widgets.interact(view2,Level_ID=t_select) # Button info open_table_tot = widgets.Button( description="Open File for Total Abundance Plot", icon = 'open', layout=Layout(width='50%')) open_table_tot.on_click(print_table_tot) # Create functions to graph the relative abundance plot def print_tabundance_fig(b): global t_class_df global new_df_tot # Check to make sure that the user has selected a taxonomic level to view before running try: print("Working. . . ") # Updates the dataframe when new Level_ID is selected t_class_df = total_df[total_df['Level_ID']== str(t_select.value)] new_df_tot = copy.deepcopy(t_class_df) # copies the dataframe # Plots the results into Total abundance plot import plotly.express as px display(px.bar(x=new_df_tot['ID'], y=new_df_tot['Normalized_Count'], color=new_df_tot['Name'], title = str("Total Abundance - " + str(relative.file_view.value) + ": " + str(new_df_tot["Level_ID"][0])), labels = {"x" : "Sample ID", "y":str("Read Count"), "color":"Species"})) except Exception: print("You likely have not selected a taxonomic level to view.") print("If you have selected a Level_ID that is not 'ALL', then another error has occured.") print("Contact Akhil Gupta (gupta305@umn.edu)") # Code for how the button looks print_tabun_fig_button = widgets.Button( description="Print Total Abundance Plot", icon = 'open', layout=Layout(width='70%')) print_tabun_fig_button.on_click(print_tabundance_fig) ```
github_jupyter
## The Grid layout The `GridBox` class is a special case of the `Box` widget. The `Box` widget enables the entire CSS flexbox spec, enabling rich reactive layouts in the Jupyter notebook. It aims at providing an efficient way to lay out, align and distribute space among items in a container. Again, the whole grid layout spec is exposed via the `layout` attribute of the container widget (`Box`) and the contained items. One may share the same `layout` attribute among all the contained items. The following flexbox tutorial on the flexbox layout follows the lines of the article [A Complete Guide to Grid](https://css-tricks.com/snippets/css/complete-guide-grid/) by Chris House, and uses text and various images from the article [with permission](https://css-tricks.com/license/). ### Basics and browser support To get started you have to define a container element as a grid with display: grid, set the column and row sizes with grid-template-rows, grid-template-columns, and grid_template_areas, and then place its child elements into the grid with grid-column and grid-row. Similarly to flexbox, the source order of the grid items doesn't matter. Your CSS can place them in any order, which makes it super easy to rearrange your grid with media queries. Imagine defining the layout of your entire page, and then completely rearranging it to accommodate a different screen width all with only a couple lines of CSS. Grid is one of the most powerful CSS modules ever introduced. As of March 2017, most browsers shipped native, unprefixed support for CSS Grid: Chrome (including on Android), Firefox, Safari (including on iOS), and Opera. Internet Explorer 10 and 11 on the other hand support it, but it's an old implementation with an outdated syntax. The time to build with grid is now! ### Important terminology Before diving into the concepts of Grid it's important to understand the terminology. Since the terms involved here are all kinda conceptually similar, it's easy to confuse them with one another if you don't first memorize their meanings defined by the Grid specification. But don't worry, there aren't many of them. **Grid Container** The element on which `display: grid` is applied. It's the direct parent of all the grid items. In this example container is the grid container. ```html <div class="container"> <div class="item item-1"></div> <div class="item item-2"></div> <div class="item item-3"></div> </div> ``` **Grid Item** The children (e.g. direct descendants) of the grid container. Here the item elements are grid items, but sub-item isn't. ```html <div class="container"> <div class="item"></div> <div class="item"> <p class="sub-item"></p> </div> <div class="item"></div> </div> ``` **Grid Line** The dividing lines that make up the structure of the grid. They can be either vertical ("column grid lines") or horizontal ("row grid lines") and reside on either side of a row or column. Here the yellow line is an example of a column grid line. ![grid-line](images/grid-line.png) **Grid Track** The space between two adjacent grid lines. You can think of them like the columns or rows of the grid. Here's the grid track between the second and third row grid lines. ![grid-track](images/grid-track.png) **Grid Cell** The space between two adjacent row and two adjacent column grid lines. It's a single "unit" of the grid. Here's the grid cell between row grid lines 1 and 2, and column grid lines 2 and 3. ![grid-cell](images/grid-cell.png) **Grid Area** The total space surrounded by four grid lines. A grid area may be comprised of any number of grid cells. Here's the grid area between row grid lines 1 and 3, and column grid lines 1 and 3. ![grid-area](images/grid-area.png) ### Properties of the parent **grid-template-rows, grid-template-colums** Defines the columns and rows of the grid with a space-separated list of values. The values represent the track size, and the space between them represents the grid line. Values: - `<track-size>` - can be a length, a percentage, or a fraction of the free space in the grid (using the `fr` unit) - `<line-name>` - an arbitrary name of your choosing **grid-template-areas** Defines a grid template by referencing the names of the grid areas which are specified with the grid-area property. Repeating the name of a grid area causes the content to span those cells. A period signifies an empty cell. The syntax itself provides a visualization of the structure of the grid. Values: - `<grid-area-name>` - the name of a grid area specified with `grid-area` - `.` - a period signifies an empty grid cell - `none` - no grid areas are defined **grid-gap** A shorthand for `grid-row-gap` and `grid-column-gap` Values: - `<grid-row-gap>`, `<grid-column-gap>` - length values where `grid-row-gap` and `grid-column-gap` specify the sizes of the grid lines. You can think of it like setting the width of the gutters between the columns / rows. - `<line-size>` - a length value *Note: The `grid-` prefix will be removed and `grid-gap` renamed to `gap`. The unprefixed property is already supported in Chrome 68+, Safari 11.2 Release 50+ and Opera 54+.* **align-items** Aligns grid items along the block (column) axis (as opposed to justify-items which aligns along the inline (row) axis). This value applies to all grid items inside the container. Values: - `start` - aligns items to be flush with the start edge of their cell - `end` - aligns items to be flush with the end edge of their cell - `center` - aligns items in the center of their cell - `stretch` - fills the whole height of the cell (this is the default) **justify-items** Aligns grid items along the inline (row) axis (as opposed to `align-items` which aligns along the block (column) axis). This value applies to all grid items inside the container. Values: - `start` - aligns items to be flush with the start edge of their cell - `end` - aligns items to be flush with the end edge of their cell - `center` - aligns items in the center of their cell - `stretch` - fills the whole width of the cell (this is the default) **align-content** Sometimes the total size of your grid might be less than the size of its grid container. This could happen if all of your grid items are sized with non-flexible units like `px`. In this case you can set the alignment of the grid within the grid container. This property aligns the grid along the block (column) axis (as opposed to justify-content which aligns the grid along the inline (row) axis). Values: - `start` - aligns the grid to be flush with the start edge of the grid container - `end` - aligns the grid to be flush with the end edge of the grid container - `center` - aligns the grid in the center of the grid container - `stretch` - resizes the grid items to allow the grid to fill the full height of the grid container - `space-around` - places an even amount of space between each grid item, with half-sized spaces on the far ends - `space-between` - places an even amount of space between each grid item, with no space at the far ends - `space-evenly` - places an even amount of space between each grid item, including the far ends **justify-content** Sometimes the total size of your grid might be less than the size of its grid container. This could happen if all of your grid items are sized with non-flexible units like `px`. In this case you can set the alignment of the grid within the grid container. This property aligns the grid along the inline (row) axis (as opposed to align-content which aligns the grid along the block (column) axis). Values: - `start` - aligns the grid to be flush with the start edge of the grid container - `end` - aligns the grid to be flush with the end edge of the grid container - `center` - aligns the grid in the center of the grid container - `stretch` - resizes the grid items to allow the grid to fill the full width of the grid container - `space-around` - places an even amount of space between each grid item, with half-sized spaces on the far ends - `space-between` - places an even amount of space between each grid item, with no space at the far ends - `space-evenly` - places an even amount of space between each grid item, including the far ends **grid-auto-columns, grid-auto-rows** Specifies the size of any auto-generated grid tracks (aka implicit grid tracks). Implicit tracks get created when there are more grid items than cells in the grid or when a grid item is placed outside of the explicit grid. (see The Difference Between Explicit and Implicit Grids) Values: - `<track-size>` - can be a length, a percentage, or a fraction of the free space in the grid (using the `fr` unit) ### Properties of the items *Note: `float`, `display: inline-block`, `display: table-cell`, `vertical-align` and `column-??` properties have no effect on a grid item.* **grid-column, grid-row** Determines a grid item's location within the grid by referring to specific grid lines. `grid-column-start`/`grid-row-start` is the line where the item begins, and `grid-column-end`/`grid-row-end` is the line where the item ends. Values: - `<line>` - can be a number to refer to a numbered grid line, or a name to refer to a named grid line - `span <number>` - the item will span across the provided number of grid tracks - `span <name>` - the item will span across until it hits the next line with the provided name - `auto` - indicates auto-placement, an automatic span, or a default span of one ```css .item { grid-column: <number> | <name> | span <number> | span <name> | auto / <number> | <name> | span <number> | span <name> | auto grid-row: <number> | <name> | span <number> | span <name> | auto / <number> | <name> | span <number> | span <name> | auto } ``` Examples: ```css .item-a { grid-column: 2 / five; grid-row: row1-start / 3; } ``` ![grid-start-end-a](images/grid-start-end-a.png) ```css .item-b { grid-column: 1 / span col4-start; grid-row: 2 / span 2; } ``` ![grid-start-end-b](images/grid-start-end-b.png) If no `grid-column` / `grid-row` is declared, the item will span 1 track by default. Items can overlap each other. You can use `z-index` to control their stacking order. **grid-area** Gives an item a name so that it can be referenced by a template created with the `grid-template-areas` property. Alternatively, this property can be used as an even shorter shorthand for `grid-row-start` + `grid-column-start` + `grid-row-end` + `grid-column-end`. Values: - `<name>` - a name of your choosing - `<row-start> / <column-start> / <row-end> / <column-end>` - can be numbers or named lines ```css .item { grid-area: <name> | <row-start> / <column-start> / <row-end> / <column-end>; } ``` Examples: As a way to assign a name to the item: ```css .item-d { grid-area: header } ``` As the short-shorthand for `grid-row-start` + `grid-column-start` + `grid-row-end` + `grid-column-end`: ```css .item-d { grid-area: 1 / col4-start / last-line / 6 } ``` ![grid-start-end-d](images/grid-start-end-d.png) **justify-self** Aligns a grid item inside a cell along the inline (row) axis (as opposed to `align-self` which aligns along the block (column) axis). This value applies to a grid item inside a single cell. Values: - `start` - aligns the grid item to be flush with the start edge of the cell - `end` - aligns the grid item to be flush with the end edge of the cell - `center` - aligns the grid item in the center of the cell - `stretch` - fills the whole width of the cell (this is the default) ```css .item { justify-self: start | end | center | stretch; } ``` Examples: ```css .item-a { justify-self: start; } ``` ![Example of `justify-self` set to start](images/grid-justify-self-start.png) ```css .item-a { justify-self: end; } ``` ![Example of `justify-self` set to end](images/grid-justify-self-end.png) ```css .item-a { justify-self: center; } ``` ![Example of `justify-self` set to center](images/grid-justify-self-center.png) ```css .item-a { justify-self: stretch; } ``` ![Example of `justify-self` set to stretch](images/grid-justify-self-stretch.png) To set alignment for *all* the items in a grid, this behavior can also be set on the grid container via the `justify-items` property. ``` from ipywidgets import Button, GridBox, Layout, ButtonStyle ``` Placing items by name: ``` header = Button(description='Header', layout=Layout(width='auto', grid_area='header'), style=ButtonStyle(button_color='lightblue')) main = Button(description='Main', layout=Layout(width='auto', grid_area='main'), style=ButtonStyle(button_color='moccasin')) sidebar = Button(description='Sidebar', layout=Layout(width='auto', grid_area='sidebar'), style=ButtonStyle(button_color='salmon')) footer = Button(description='Footer', layout=Layout(width='auto', grid_area='footer'), style=ButtonStyle(button_color='olive')) GridBox(children=[header, main, sidebar, footer], layout=Layout( width='50%', grid_template_rows='auto auto auto', grid_template_columns='25% 25% 25% 25%', grid_template_areas=''' "header header header header" "main main . sidebar " "footer footer footer footer" ''') ) ``` Setting up row and column template and gap ``` GridBox(children=[Button(layout=Layout(width='auto', height='auto'), style=ButtonStyle(button_color='darkseagreen')) for i in range(9) ], layout=Layout( width='50%', grid_template_columns='100px 50px 100px', grid_template_rows='80px auto 80px', grid_gap='5px 10px') ) ```
github_jupyter
<a href="https://colab.research.google.com/github/Santosh-Gupta/NaturalLanguageRecommendations/blob/master/notebooks/inference/DemoNaturalLanguageRecommendationsCPU_Manualfeedback.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> This is our simple Colab demo notebook, which can run on a CPU instance, though it may crash a regular colab CPU instance since it's memory intensive. If that happens, a message should appear on the bottom left asking if you would like to switch to a 25 gb RAM instance, which will be more than enough memory If you would like play with using a GPU or TPU for inference, please see our advanced demo notebook here https://colab.research.google.com/github/Santosh-Gupta/NaturalLanguageRecommendations/blob/master/notebooks/inference/build_index_and_search.ipynb ``` #@title Download and load model, embeddings, and data, will take a several minutes. Double click on this to pop open the hood and checkout the code. !gdown --id "10LV9QbZOkUyOzR4nh8hxesoKJhpmvpM9" # citation vectors # !gdown --id "1-8gmT9cQpOUoZ_HzEaT9Xz6qfeVooAFn" !gdown --id "1-23aNm7j0bnycvyd_OaQfofVYPTewgOI" # abstract vectors !gdown --id "1NyUQwgUNj9bFsiCnZ2TfKmWn5r-Y6wav" # TitlesIdAbstractsEmbedIds !gdown --id "1wIRsAApaE2L7E1fjnDOSSVBG1fY-LT9i" # Model !wget 'https://s3-us-west-2.amazonaws.com/ai2-s2-research/scibert/huggingface_pytorch/scibert_scivocab_uncased.tar' !tar -xvf 'scibert_scivocab_uncased.tar' import zipfile with zipfile.ZipFile('tfworld.zip', 'r') as zip_ref: zip_ref.extractall('') !pip install transformers --quiet %tensorflow_version 2.x import numpy as np import tensorflow as tf from time import time from tqdm import tqdm_notebook as tqdm from transformers import BertTokenizer import pandas as pd from pprint import pprint print('TensorFlow:', tf.__version__) !gdown --id "1owiHXcDyTYecOq0Y27bOk0s4jgxmukTs" !pip install --upgrade --quiet gspread import gspread from oauth2client.service_account import ServiceAccountCredentials scope = ['https://www.googleapis.com/auth/spreadsheets'] credentials = ServiceAccountCredentials.from_json_keyfile_name('worksheet1.worksheet2.worksheet3.json', scope) gc = gspread.authorize(credentials) worksheet2 = gc.open_by_key('1AU37NTxsafd9GNhum2yR3iCux9nT9GAN5Bn4HaWcyU4').sheet1 worksheet3 = gc.open_by_key('1Vaxn8rWz0CufCeDF_Ip9lzErZjK3AUA3g02fYMBe5P4').sheet1 print('Loading Embeddings') citations_embeddings = np.load('CitationSimilarityVectors106Epochs.npy') abstract_embeddings = np.load('AbstractSimVectors.npy') assert citations_embeddings.shape == abstract_embeddings.shape normalizedC = tf.nn.l2_normalize(citations_embeddings, axis=1) normalizedA = tf.nn.l2_normalize(abstract_embeddings, axis=1) print('Loading Model') model = tf.saved_model.load('tfworld/inference_model/') print('laoding Tokenizer') tokenizer = BertTokenizer(vocab_file='scibert_scivocab_uncased/vocab.txt') print('Loading Semantic Scholar CS data, almost done . . .') df = pd.read_json('/content/TitlesIdsAbstractsEmbedIdsCOMPLETE_12-30-19.json.gzip', compression = 'gzip') embed2Title = pd.Series(df['title'].values,index=df['EmbeddingID']).to_dict() embed2Abstract = pd.Series(df['paperAbstract'].values,index=df['EmbeddingID']).to_dict() embed2Paper = pd.Series(df['id'].values,index=df['EmbeddingID']).to_dict() import sys, os # Disable def blockPrint(): sys.stdout = open(os.devnull, 'w') # Restore def enablePrint(): sys.stdout = sys.__stdout__ ``` Use cell below to search for papers. Our model was trained on using full abstracts using the 'query', so the model performs better with longer queries, but the model works surprisingly well with short queries as well. Give it a try. The average input (abstract) was about 150-200 words. So a combination of rephrasing your query and copy and pasting it too 150-200 words may return better results. Our model was trained to use a citation emedding as a label, but we found out running similarity on our abstract embeddings results in surprisingly robust results as well, so we included both. The first half of the results are from the citation embeddings, the second half are from the abstract embeddings. ``` query = "The effect of negative sampling on embedding quality. noise contrastive sampling in vector representation. " #@param {type:"string"} top_k_results = 50 #@param {type:"integer"} if top_k_results%2 == 0: halfA = halfC = int(top_k_results/2) else: halfC = int(top_k_results/2) + 1 halfA = int(top_k_results/2) abstract_encoded = tokenizer.encode(query, max_length=512, pad_to_max_length=True) abstract_encoded = tf.constant(abstract_encoded, dtype=tf.int32)[None, :] print('\nQuery : ') pprint(query) s = time() bert_output = model(abstract_encoded) xq = tf.nn.l2_normalize(bert_output, axis=1) prediction_time = time() - s simNumpyC = np.matmul(normalizedC, tf.transpose(xq)) simNumpyCTopK = (-simNumpyC[:,0]).argsort()[:halfC] simNumpyC_oTopK = -np.sort(-simNumpyC[:,0])[:halfC] allCit = np.vstack((simNumpyCTopK , simNumpyC_oTopK) ) del simNumpyC simNumpyA = np.matmul(normalizedA, tf.transpose(xq)) simNumpyATopK = (-simNumpyA[:,0]).argsort()[:halfA] simNumpyA_oTopK = -np.sort(-simNumpyA[:,0])[:halfA] allAbs = np.vstack((simNumpyATopK , simNumpyA_oTopK) ) del simNumpyA allResults = np.concatenate((allAbs, allCit), axis = 1) print('\n') print('------ Nearest papers -----------------------------------------------------------') print('\n') for embed in allResults[0]: print('---------------') print('-------') print('---') title = embed2Title[int(embed)] abstractR = embed2Abstract[int(embed)] paperId = embed2Paper[int(embed)] print('Title: ', title) print('\nAbstract : ') pprint(abstractR) # print('\n') print('\nLink: https://www.semanticscholar.org/paper/'+paperId) print('---') print('-------') ``` ## If you have any additional feedback about a query, or just feedback in general, we would very much appreciate it. The feedback will help in the qualitative analysis of our models ``` #@title Feedback about a particular query %%capture query = "The effect of negative sampling on embedding quality. noise contrastive sampling in vector representation. " #@param {type:"string"} feedback = "First result didn't seem to say anything about negative sampling" #@param {type:"string"} values_list = worksheet2.col_values(3) values_list2 = worksheet2.col_values(4) rowV = max(len(values_list) , len(values_list2) ) worksheet2.update_cell(rowV+1, 3, query) worksheet2.update_cell(rowV+1, 4, feedback) print('Submitted') print('Query recorded, ', query) print('Feedback recorded, ', feedback) #@title Genderal feedback %%capture feedback = "UI could use some work" #@param {type:"string"} values_list = worksheet3.col_values(3) values_list2 = worksheet3.col_values(4) rowV = max(len(values_list) , len(values_list2) ) worksheet3.update_cell(rowV+1, 3, feedback) print('Submitted') print('Feedback recorded, ', feedback) ```
github_jupyter
``` %matplotlib inline import os import cv2 import time import pickle import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from IPython.display import clear_output from datetime import datetime from lib.utils import make_seed, make_circle_masks from lib.utils import get_living_mask, get_sobel, softmax, to_rgb class VAE_encoder(nn.Module): def __init__(self, dim_out): super(VAE_encoder, self).__init__() self.conv_mu = nn.Sequential(nn.Conv2d(3,64,7,padding=3,stride=1), nn.ReLU(), nn.MaxPool2d(2,stride=2), nn.Conv2d(64,64,3,padding=1), nn.ReLU(), nn.Conv2d(64,64,3,padding=1), nn.ReLU(), nn.Conv2d(64,64,3,padding=1), nn.ReLU(), nn.MaxPool2d(2,stride=2), nn.Conv2d(64,128,3,padding=1), nn.ReLU(), nn.Conv2d(128,128,3,padding=1), nn.ReLU(), nn.Conv2d(128,128,3,padding=1), nn.ReLU(), nn.MaxPool2d(2,stride=2), nn.Conv2d(128,256,3,padding=1), nn.ReLU(), nn.Conv2d(256,256,3,stride=1,padding=1), nn.ReLU(), nn.Conv2d(256,512,5,stride=2,padding=2), nn.ReLU(), nn.Conv2d(512,1024,3)) self.lin_mu = nn.Linear(1024,dim_out) def forward(self, x): c_mu = self.conv_mu(x) c_mu = torch.reshape(c_mu, [-1,1024]) return self.lin_mu(c_mu) class VAE_decoder(nn.Module): def __init__(self, dim_in, dim_out): super(VAE_decoder, self).__init__() self.lin = nn.Sequential(nn.Linear(dim_in,dim_in*2), nn.ReLU(), nn.Linear(dim_in*2,dim_in*2), nn.ReLU(), nn.Linear(dim_in*2,dim_in*2), nn.ReLU(), nn.Linear(dim_in*2,dim_in*2), nn.ReLU(), nn.Linear(dim_in*2,dim_out)) def forward(self, z): r = self.lin(z) return r class GNCAModel(nn.Module): def __init__(self, sobels, channel_n, alpha_channel, fire_rate=0.5, calibration=1.0, device=torch.device("cpu")): super(GNCAModel, self).__init__() self.sobels = sobels self.device = device self.channel_n = channel_n self.alpha_channel = alpha_channel self.pool = torch.nn.MaxPool2d(kernel_size=3, padding=1, stride=1) self.fire_rate = fire_rate self.calibration = calibration self.to(self.device) def perceive(self, x, angle): def _perceive_with(x, weight): size = weight.shape[0] padding = (size-1)/2 conv_weights = torch.from_numpy(weight.astype(np.float32)).to(self.device) conv_weights = conv_weights.view(1,1,size,size).repeat(self.channel_n, 1, 1, 1) return F.conv2d(x, conv_weights, padding=int(padding), groups=self.channel_n) ys = [x,self.pool(x)] for sobel in self.sobels: wa_1, wa_2 = get_sobel(sobel) wa_1/=np.sum(np.abs(wa_1)) wa_2/=np.sum(np.abs(wa_2)) y1 = _perceive_with(x, wa_1) y2 = _perceive_with(x, wa_2) ys.append(y1) ys.append(y2) y = torch.cat(ys,1) return y def linear(self, x, w, b=None): original_shape = x.size() batch = x.size(0) y = torch.reshape(x, [batch,-1,original_shape[-1]]).to(self.device) if b is None: y = torch.bmm(y, w) else: y = torch.bmm(y, w)+b y = torch.reshape(y, list(original_shape[:-1])+[y.size(-1)]) return y def update(self, x, params, fire_rate, angle): w0, b0, w1 = params x = x.transpose(1,3) pre_life_mask = get_living_mask(x, self.alpha_channel, 3) dx = self.perceive(x, angle) dx = dx.transpose(1,3) dx = self.linear(dx, w0, b0) dx = F.relu(dx) dx = self.linear(dx, w1) if fire_rate is None: fire_rate=self.fire_rate stochastic = torch.rand([dx.size(0),dx.size(1),dx.size(2),1])>fire_rate stochastic = stochastic.float().to(self.device) dx = dx * stochastic dx = dx.transpose(1,3) x = x+dx post_life_mask = get_living_mask(x, self.alpha_channel, 3) life_mask = (pre_life_mask & post_life_mask).float() x = x * life_mask x = x.transpose(1,3) return x def forward(self, x, params, steps, calibration_map=None, fire_rate=None, angle=0.0): history = [x.detach().cpu().clamp(0.0, 1.0).numpy(),] for step in range(steps): x = self.update(x, params, fire_rate, angle) if calibration_map is not None: h = x[..., :(self.alpha_channel+1)] t = calibration_map[..., :(self.alpha_channel+1)] _delta = t*(h-1) delta = _delta * self.calibration * (calibration_map!=0).float() _x = x[..., :(self.alpha_channel+1)]-delta x = torch.cat((_x,x[..., (self.alpha_channel+1):]), -1) history.append(x.detach().cpu().clamp(0.0, 1.0).numpy()) return x, history class model_VAE(nn.Module): def __init__(self, sobels, hidden_encoder, hidden_channel_n, channel_n, size, alpha_channel, device=torch.device("cpu")): super(model_VAE, self).__init__() self.sobels = sobels self.channel_n = channel_n self.hidden_channel_n = hidden_channel_n self.size = size self.alpha_channel = alpha_channel self.eps = 1e-3 self.encoder = VAE_encoder(hidden_encoder) self.decoder_w0 = VAE_decoder(hidden_encoder, channel_n*(len(self.sobels)+1)*2*self.hidden_channel_n) self.decoder_b0 = VAE_decoder(hidden_encoder, self.hidden_channel_n) self.decoder_w1 = VAE_decoder(hidden_encoder, self.hidden_channel_n*channel_n) self.GNCA = GNCAModel(self.sobels, self.channel_n, self.alpha_channel, device=device) self.device = device self.to(self.device) def encode(self, x): z = self.encoder(x) return z def decode(self, z): w0 = self.decoder_w0(z) b0 = self.decoder_b0(z) w1 = self.decoder_w1(z) params = (torch.reshape(w0,[-1,self.channel_n*(len(self.sobels)+1)*2,self.hidden_channel_n]), torch.reshape(b0,[-1,1,self.hidden_channel_n]), torch.reshape(w1,[-1,self.hidden_channel_n,self.channel_n])) return params def infer(self, x0, x, steps, calibration_map=None): with torch.no_grad(): x_ = torch.reshape(x, [-1,self.alpha_channel,self.size,self.size]) z = self.encode(x_) params = self.decode(z) y, history = self.GNCA(x0, params, steps, calibration_map=calibration_map) y = y[..., :(self.alpha_channel+1)].clamp(self.eps, 1.0-self.eps) return y, history def multi_infer(self, x0, xs, steps, gamma=0.99, calibration_map=None): with torch.no_grad(): params_list = [] zs = [] for x in xs: x_ = torch.reshape(x, [-1,self.alpha_channel,self.size,self.size]) z = self.encode(x_) zs.append(z.detach().cpu().numpy()) params = self.decode(z) params_list.append(params) y = x0 history = [x0.detach().cpu().numpy()] for i in range(steps): y, _ = self.GNCA(y, params_list[i%len(params_list)], 1, calibration_map=calibration_map) his = y[..., :(self.alpha_channel+1)].clamp(self.eps, 1.0-self.eps).detach().cpu().numpy() history.append(his) y = y[..., :(self.alpha_channel+1)].clamp(self.eps, 1.0-self.eps) return y, history, zs def train(self, x0, x, target, steps, beta, calibration_map=None): x_ = torch.reshape(x, [-1,self.alpha_channel,self.size,self.size]) z = self.encode(x_) params = self.decode(z) y_raw, _ = self.GNCA(x0, params, steps, calibration_map=calibration_map) y_raw = y_raw.clamp(self.eps, 1.0-self.eps) y = y_raw[..., :(self.alpha_channel+1)] mse = F.mse_loss(y, target) l2 = torch.sum(torch.pow(z, 2)) loss = mse + beta*l2 return y_raw, loss, (mse.item(), l2.item()) def read_and_resize(path, size): raw=mpimg.imread(path) scale = size/min(raw.shape[0], raw.shape[1]) new_shape = (max(int(raw.shape[1]*scale),64), max(int(raw.shape[0]*scale),64)) img = cv2.resize(raw, new_shape) img = img[(img.shape[0]-size)//2:(img.shape[0]-size)//2+size, (img.shape[1]-size)//2:(img.shape[1]-size)//2+size, :] return img def plot_loss(loss_log): plt.figure(figsize=(10, 4)) plt.title('Loss history (log10)') plt.plot(np.log10(loss_log), '.', alpha=0.1) plt.show() return ROOT = "/disk2/mingxiang_workDir/vggface2/test/" SIZE = 40 DEVICE = torch.device("cuda:0") model_path = "models/gen_AE_vgg2.pth" init_coord = (SIZE//2, SIZE//2) SOBEL_SIZES = [3,5,9] ALPHA_CHANNEL = 3 HIDDEN_ENCODER = 1024 CHANNEL_N = 24 HIDDEN_CHANNEL_N = 256 BATCH_SIZE = 8 N_STEPS = 160 names = [x for x in os.listdir(ROOT) if x[0]!='.'] paths = {} for name in names: paths[name] = [x for x in os.listdir(ROOT+name) if x[0]!='.'] print("num_images", np.sum([len(paths[name]) for name in names])) my_model = model_VAE(SOBEL_SIZES, HIDDEN_ENCODER, HIDDEN_CHANNEL_N, CHANNEL_N, SIZE, ALPHA_CHANNEL, device=DEVICE) my_model.load_state_dict(torch.load(model_path)) loss_log = [] n_batch = 8 for index in range(n_batch): name_batch = np.random.choice(len(names), BATCH_SIZE, replace=False) path_is = [np.random.randint(len(paths[names[names_i]])) for names_i in name_batch] x_np = [] for i in range(len(name_batch)): name = names[name_batch[i]] path = ROOT+name+"/"+paths[name][path_is[i]] x_np.append(read_and_resize(path, SIZE)) x_np = np.array(x_np).transpose([0,3,1,2])/255.0 x_np = x_np.astype(np.float32) x = torch.from_numpy(x_np).to(DEVICE) target_np = x_np.reshape([-1, ALPHA_CHANNEL, SIZE, SIZE]).transpose([0,2,3,1]) alpha_values = np.expand_dims(np.ones(target_np.shape[:-1]), -1) target_np = np.concatenate([target_np, alpha_values], -1) seed = make_seed((SIZE,SIZE), CHANNEL_N, np.arange(CHANNEL_N-ALPHA_CHANNEL)+ALPHA_CHANNEL, init_coord) x0_np = np.repeat(seed[None, ...], len(name_batch), 0) x0 = torch.from_numpy(x0_np.astype(np.float32)).to(DEVICE) y, history = my_model.infer(x0, x, N_STEPS) y = y.detach().cpu().numpy() i_shows = [i for i in range(BATCH_SIZE)] plt.figure(figsize=(18,5)) for i,ii in enumerate(i_shows): plt.subplot(2,len(i_shows),i+1) plt.imshow(to_rgb(target_np[ii])) plt.axis('off') for i,ii in enumerate(i_shows): plt.subplot(2,len(i_shows),i+len(i_shows)+1) plt.imshow(to_rgb(y[ii,...,:(ALPHA_CHANNEL+1)])) plt.axis('off') plt.show() print("----------") n_batch = 1 for index in range(n_batch): name_batch = np.random.choice(len(names), BATCH_SIZE, replace=False) path_is = [np.random.randint(len(paths[names[names_i]])) for names_i in name_batch] x_np = [] for i in range(len(name_batch)): name = names[name_batch[i]] path = ROOT+name+"/"+paths[name][path_is[i]] x_np.append(read_and_resize(path, SIZE)) x_np = np.array(x_np).transpose([0,3,1,2])/255.0 x_np_raw = x_np.astype(np.float32) damages = [] for _ in range(BATCH_SIZE): n_damage = 8 damage = 1.0-make_circle_masks(n_damage, SIZE, SIZE, rmin=0.02, rmax=0.05) damage = np.sum(damage, 0)>=n_damage damages.append(damage) damages = np.array(damages)[:, None, ...] x_np = x_np.astype(np.float32)*damages x = torch.from_numpy(x_np).to(DEVICE) target_np_raw = x_np_raw.reshape([-1, ALPHA_CHANNEL, SIZE, SIZE]).transpose([0,2,3,1]) alpha_values = np.expand_dims(np.ones(target_np_raw.shape[:-1]), -1) target_np_raw = np.concatenate([target_np_raw, alpha_values], -1) target_np = x_np.reshape([-1, ALPHA_CHANNEL, SIZE, SIZE]).transpose([0,2,3,1]) alpha_values = np.expand_dims(np.ones(target_np.shape[:-1]), -1) target_np = np.concatenate([target_np, alpha_values], -1) seed = make_seed((SIZE,SIZE), CHANNEL_N, np.arange(CHANNEL_N-ALPHA_CHANNEL)+ALPHA_CHANNEL, init_coord) x0_np = np.repeat(seed[None, ...], len(name_batch), 0) x0 = torch.from_numpy(x0_np.astype(np.float32)).to(DEVICE) y, history = my_model.infer(x0, x, N_STEPS) y = y.detach().cpu().numpy() i_shows = [i for i in range(BATCH_SIZE)] plt.figure(figsize=(18,5)) for i,ii in enumerate(i_shows): plt.subplot(3,len(i_shows),i+1) plt.imshow(to_rgb(target_np_raw[ii])) plt.axis('off') for i,ii in enumerate(i_shows): plt.subplot(3,len(i_shows),i+len(i_shows)+1) plt.imshow(to_rgb(target_np[ii])) plt.axis('off') for i,ii in enumerate(i_shows): plt.subplot(3,len(i_shows),i+len(i_shows)*2+1) plt.imshow(to_rgb(y[ii,...,:(ALPHA_CHANNEL+1)])) plt.axis('off') plt.show() print("----------") ```
github_jupyter
# Chapter 10: Inequalities and limit theorems This Jupyter notebook is the Python equivalent of the R code in section 10.6 R, pp. 447 - 450, [Introduction to Probability, 2nd Edition](https://www.crcpress.com/Introduction-to-Probability-Second-Edition/Blitzstein-Hwang/p/book/9781138369917), Blitzstein & Hwang. ---- ``` import matplotlib.pyplot as plt import numpy as np %matplotlib inline ``` ## Jensen's inequality Python/NumPy/SciPy make it easy to compare the expectations of $X$ and $g(X)$ for a given choice of $g$, and this allows us to verify some special cases of Jensen's inequality. For example, suppose we simulate 10<sup>4</sup> times from the $Expo(1)$ distribution: ``` np.random.seed(24157817) from scipy.stats import expon x = expon.rvs(size=10**4) ``` According to Jensen's inequality, $\mathbb{E}(log \, X) \leq log \, \mathbb{E} \, X$. The former can be approximated by `numpy.mean(numpy.log(x))` and the latter can be approximated by `numpy.log(numpy.mean(x))`, so compute both ``` meanlog = np.mean(np.log(x)) print('numpy.mean(numpy.log(x)) = {}'.format(meanlog)) logmean = np.log(np.mean(x)) print('numpy.log(numpy.mean(x)) = {}'.format(logmean)) ``` For the $Expo(1)$ distribution, we find that `numpy.mean(numpy.log(x))` is approximately −0.56 (the true value is around −0.577), while `numpy.log(numpy.mean(x))` is approximately 0 (the true value is 0). This indeed suggests $\mathbb{E}(log \, X) \leq log \, \mathbb{E} \, X$. We could also compare `numpy.mean(x**3)` to `numpy.mean(x)**3`, or `numpy.mean(numpy.sqrt(x))` to `numpy.sqrt(numpy.mean(x))` - the possibilities are endless. ## Visualization of the law of large numbers To plot the running proportion of Heads in a sequence of independent fair coin tosses, we first generate the coin tosses themselves: ``` np.random.seed(39088169) from scipy.stats import binom nsim = 300 p = 1/2 x = binom.rvs(1, p, size=nsim) ``` Then we compute $\bar{X}_n$ for each value of $n$ and store the results in `xbar`: ``` # divide by sequence from 1 to nsim, inclusive xbar = np.cumsum(x) / np.arange(1, nsim+1) ``` The above line of code performs element-wise division of the two arrays `numpy.cumsum(x)` and `np.arange(1, nsim+1)`. Finally, we plot `xbar` against the number of coin tosses: ``` x = np.arange(1, nsim+1) y = xbar plt.figure(figsize=(12, 4)) plt.plot(x, y, '-', label=r'$\bar{x}$') plt.hlines(p, 0, nsim, linestyle='dotted', lw=1.1, alpha=0.5, label=r'$p={}$'.format(p)) plt.xlim([0.0, nsim]) plt.xlabel(r'$n$: number of coin tosses') plt.yticks([0.0, 0.5, 1.0]) plt.ylabel(r'$\bar{x}_{n}$: proportion of H to $n$') plt.title('Visualizing the Law of Large Numbers') plt.legend() plt.show() ``` You should see that the values of `xbar` approach `p`, by the law of large numbers. ## Monte Carlo estimate of $\pi$ A famous example of Monte Carlo integration is the Monte Carlo estimate of $\pi$. The unit disk ${(x, y): x^2 +y^2 ≤ 1}$ is inscribed in the square $[−1, 1] \times [−1, 1]$, which has area 4. If we generate a large number of points that are Uniform on the square, the proportion of points falling inside the disk is approximately equal to the ratio of the disk's area to the square’s area, which is $\pi/4$. Thus, to estimate $\pi$ we can take the proportion of points inside the circle and multiply by 4. In `matplotlib.pyplot`, to generate Uniform points on the 2D square, we can independently generate the $x$-coordinate and the $y$-coordinate as $Unif(−1, 1)$ r.v.s, using the results of Example 7.1.22: ``` np.random.seed(63245986) from scipy.stats import uniform nsim = 10**6 a = -1 b = 1 x = uniform.rvs(loc=a, scale=b-a, size=nsim) y = uniform.rvs(loc=a, scale=b-a, size=nsim) ``` Let's try graphing a small portion of those $x$- and $y$-coordinates. ``` inside = x**2 + y**2 < 1.0 outside = ~inside _, ax = plt.subplots(figsize=(8,8)) # we'll graph the first n co-ordinate pairs # for points inside and points outside of # x**2 + y**2 = 1.0 n = 5000 ax.plot(x[inside][0:n], y[inside][0:n], '.', color='#fc8d59') ax.plot(x[outside][0:n], y[outside][0:n], '.', color='#91bfdb') ax.set_xlim([-1.0, 1.0]) ax.set_xlabel('x') ax.set_ylim([-1.0, 1.0]) ax.set_ylabel('y') ax.set_title(r'Monte Carlo estimate of $\pi$: {} / {} points'.format(n, nsim)) plt.show() ``` To count the number of points _in_ the disk, we use `numpy.sum(x**2 + y**2 < 1.0)`. The array `x**2 + y**2 < 1.0` is effectively an indicator vector whose i<sup>th</sup> element is `True` (equivalent to 1) if the i<sup>th</sup> point falls _inside_ the disk, and `False` (equivalent to 0) otherwise, so the sum of the boolean elements is the number of points in the disk. To get our estimate of $\pi$, we convert the sum into a proportion and multiply by 4. Altogether, we have ``` num_points_inside = np.sum(x**2 + y**2 < 1.0) est_pi = 4.0 * num_points_inside / nsim print('estimated value for pi: {}'.format(est_pi)) ``` How close was your estimate to the actual value of $\pi$? ## Visualizations of the central limit theorem One way to visualize the central limit theorem for a distribution of interest is to plot the distribution of $\bar{X}_{n}$ for various values of $n$, as in Figure 10.5. To do this, we first have to generate i.i.d. $X_{1}, \, \ldots, \, X_{n}$ a bunch of times from our distribution of interest. For example, suppose that our distribution of interest is $Unif(0, 1)$, and we are interested in the distribution of $\bar{X}_{12}$, i.e., we set $n = 12$. In the following code, we create a matrix of i.i.d. standard Uniforms. The matrix has 12 columns, corresponding to $X_{1}$ through $X_{12}$. Each row of the matrix is a different realization of $X_{1}$ through $X_{12}$. ``` np.random.seed(102334155) nsim = 10**4 n = 12 x = uniform.rvs(size=n*nsim).reshape((nsim, n)) print('matrix x has shape: {}'.format(x.shape)) ``` Now, to obtain realizations of $\bar{X}_{12}$, we simply take the average of each row of the matrix `x`; we can do this by calling the [`numpy.ndarray.mean`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.mean.html) method on `numpy.array` object `x`, specifying `axis=1` to take the average of each row of matrix `x`: ``` xbar = x.mean(axis=1) ``` Finally, we create a histogram: ``` plt.figure(figsize=(10, 4)) plt.hist(xbar, bins=20) plt.title(r'Histogram of $\bar{X}_{12}$, $X_i \sim Unif(0,1)$') plt.xlabel(r'$\bar{x}$') plt.ylabel(r'Frequency') plt.show() ``` You should see a histogram that looks Normal. Because the $Unif(0, 1)$ distribution is symmetric, the CLT kicks in quickly and the Normal approximation for $\bar{X}_{n}$ works quite well, even for $n = 12$. Changing `scipy.stats.uniform` to `scipy.stats.expon`, we see that for $X_j$ generated from the $Expo(1)$ distribution, the distribution of $\bar{X}_{n}$ remains skewed when $n = 12$, so a larger value of $n$ is required before the Normal approximation is adequate. ``` np.random.seed(165580141) from scipy.stats import expon n1, n2, n3 = 12, 32, 256 x1 = expon.rvs(size=n1*nsim).reshape((nsim, n1)) x2 = expon.rvs(size=n2*nsim).reshape((nsim, n2)) x3 = expon.rvs(size=n3*nsim).reshape((nsim, n3)) xbar1 = x1.mean(axis=1) xbar2 = x2.mean(axis=1) xbar3 = x3.mean(axis=1) _, (ax1, ax2, ax3) = plt.subplots(1, 3, sharey=True, figsize=(14, 5)) ax1.hist(xbar1, bins=40, color='#1b9e77') ax1.set_ylabel('Frequency') ax1.set_title(r'Histogram of $\bar{X}_{12}$, $X_j \sim \, Expo(1)$') ax2.hist(xbar2, bins=40, color='#d95f02') ax2.set_title(r'Histogram of $\bar{X}_{32}$, $X_j \sim \, Expo(1)$') ax3.hist(xbar3, bins=40, color='#7570b3') ax3.set_title(r'Histogram of $\bar{X}_{256}$, $X_j \sim \, Expo(1)$') plt.show() ``` Have a look at Appendix A below for another neat visualization of the CLT. ## Chi-Square and Student-$t$ distributions Although the Chi-Square is just a special case of the Gamma (refer to [`scipy.stats.gamma`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html)), it still has its own functions in [`scipy.stats.chi2`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chi2.html): `chi2.pdf(x, n)` and `chi2.cdf(x, n)` return the values of the $\chi^2_{n}$ PDF and CDF at `x`; and `chi2.rvs(n, size=nsim)` generates `nsim` $\chi^2_{n}$ r.v.s. The graph below illustrates Theorem 10.4.2: ``` from scipy.stats import chi2, gamma _, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(14, 6)) x = np.linspace(0, 20, 1000) n_vals = [1, 2, 4, 8, 16] alphas = [1.0, .7, .5, .3, .2] # graph for Chi-Square for i,n in enumerate(n_vals): ax1.plot(x, chi2.pdf(x, n), lw=3.2, alpha=alphas[i], color='#33aaff', label='n={}'.format(n)) ax1.set_title(r'$X \sim \chi^2_{n}$') ax1.set_xlim((0, 20.0)) ax1.set_ylim((0.0, 0.5)) ax1.legend() # graph for Gamma lambd = 0.5 for i,n in enumerate(n_vals): ax2.plot(x, gamma.pdf(x, n/2, scale=1/lambd), lw=3.2, alpha=alphas[i], color='#ff9933', label='n={}'.format(n)) ax2.set_title(r'$X \sim Gamma(\frac{n}{2}, \frac{1}{2})$') ax2.set_xlim((0, 20.0)) ax2.set_ylim((0.0, 0.5)) ax2.legend() plt.suptitle((r'Theorem 10.4.2: $\chi^2_{n}$ distribution is the $Gamma(\frac{n}{2},' r' \, \frac{1}{2})$ distribution')) plt.show() ``` The Student-$t$ distribution is supported in [`scipy.stats.t`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.t.html). To evaluate the PDF or CDF of the $t_n$ distribution at $x$, we use `t.pdf(x, n)` or `t.cdf(x, n)`. To generate `nsim` r.v.s from the $t_{n}$ distribution, we use `t.rvs(n, size=nsim)`. Of course, `t.pdf(x, 1)` is the same as [`scipy.stats.cauchy`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.cauchy.html)'s `cauchy.pdf(x)`. ``` from scipy.stats import t, cauchy _, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(14, 6)) x = np.linspace(0, 20, 1000) # graph for Student-t y1 = t.pdf(x,1) ax1.plot(x, y1, lw=3.2, alpha=0.8, color='#33aaff') ax1.set_title(r'Student-$t$: $\tt{t.pdf(x, 1)}$') # graph for Cauchy y2 = cauchy.pdf(x) ax2.plot(x, y2, lw=3.2, alpha=0.8, color='#ff9933') ax2.set_title(r'Cauchy: $\tt{cauchy.pdf(x)}$') plt.suptitle(r'Student-$t$ vs. Cauchy') plt.show() ``` ---- ## Appendix A: Quincunx - an interactive visualization Statistician and geneticist [Francis Galton](https://en.wikipedia.org/wiki/Francis_Galton) invented the _quincunx_, or _bean machine_, to illustrate the Normal distribution. Here is an implementation built with IPython and [D3.js](https://d3js.org/) (version 5.7.0), embedded right in this notebook. In this interactive visualization of the CLR, you can: * alter the animation by controlling the time between redraws with the `delay` control (from 50 to 1000 milliseconds, changes are immediately effective without restarting the animation) * change the number of bins with the `num. bins` control (from 1 to 25, board is redrawn upon completion of change) * change the probability $p$ that the ball will go to the right (slider moves left to 0.00, right to 1.00, board is redrawn upon completion of change) * as the balls drop into the bins below, the heights of the bins will increase to form a histogram * the running percentage of balls per bin is displayed on each bin and updated in real-time (`Math.floor` is used, so some accuracy is lost) First, we import `display`, `Javascript` and `HTML` from the [`IPython.display`](https://ipython.readthedocs.io/en/stable/api/generated/IPython.display.html) module. Next we use `display` to load the D3.js file from the [Google Hosted Libraries content delivery network](https://developers.google.com/speed/libraries/#d3js). We also load the main JavaScript code in `assets/quincunx.js` and Cascading Style Sheet definitions in `assets/quincunx.html`. Lastly, we embed a small JavaScript snippet into the code cell below to provide an entry point to the `quincunx` module in `assets/quincunx.js`. ``` from IPython.display import display, Javascript, HTML display(Javascript(""" require.config({ paths: { d3: 'https://ajax.googleapis.com/ajax/libs/d3js/5.7.0/d3.min' } }); """)) display(Javascript(filename="./assets/quincunx.js")) display(HTML(filename="./assets/quincunx.html")) Javascript(""" (function(element){ require(['quincunx'], function(quincunx) { quincunx(element.get(0)) }); })(element); """) ``` _Can you use the central limit theorem to explain why the distribution of particles at the bottom is approximately Normal?_ #### References This interactive visualization extends ideas from the following blogposts: * [Central Limit Theorem Visualized in D3](http://blog.vctr.me/posts/central-limit-theorem.html) * [Custom D3.js Visualization in a Jupyter Notebook](https://www.stefaanlippens.net/jupyter-custom-d3-visualization.html) ---- Joseph K. Blitzstein and Jessica Hwang, Harvard University and Stanford University, &copy; 2019 by Taylor and Francis Group, LLC
github_jupyter
``` class LinearRegressionGD(object): def __init__(self, eta=0.001, n_iter=20, random_state=1): self.eta = eta self.n_iter = n_iter self.random_state = random_state def fit(self, X, y): rgen = np.random.RandomState(self.random_state) print(self.random_state) self.w_ = rgen.normal(loc=0.0, scale=0.01, size=1 + X.shape[1]) self.cost_ = [] for i in range(self.n_iter): output = self.net_input(X) errors = (y - output) self.w_[1:] += self.eta * X.T.dot(errors) self.w_[0] += self.eta * errors.sum() cost = (errors**2).sum() / 2.0 self.cost_.append(cost) return self def net_input(self, X): return np.dot(X, self.w_[1:]) + self.w_[0] def predict(self, X): return self.net_input(X) # Do not modify import pandas as pd import numpy as np import matplotlib.pyplot as plt from IPython.display import Image # inline plotting instead of popping out %matplotlib inline df = pd.read_csv( 'http://archive.ics.uci.edu/ml/machine-learning-databases/00381/PRSA_data_2010.1.1-2014.12.31.csv', sep=',') df.head() # Do not modify df = df.drop(['cbwd'], axis=1) # drop non-scalar feature df = df.dropna(axis=0, how='any') # drop samples who has nan feature df.head() # Do not modify idx = np.logical_or( np.logical_and(df['year'].values == 2014, df['month'].values < 3), np.logical_and(df['year'].values == 2013, df['month'].values == 12)) X = df.loc[idx].drop('pm2.5', axis=1) y = df.loc[idx]['pm2.5'].values X.head() # define a function for residual plot def residual_plot(y_train, y_train_pred, y_test, y_test_pred): # Residual plot plt.scatter( y_train_pred, y_train_pred - y_train, c='blue', marker='o', label='Training data') plt.scatter( y_test_pred, y_test_pred - y_test, c='green', marker='s', label='Test data') plt.xlabel('Predicted values') plt.ylabel('Residuals') plt.legend(loc='upper left') xmin = min(y_train_pred.min(), y_test_pred.min()) xmax = max(y_train_pred.max(), y_test_pred.max()) plt.hlines(y=0, xmin=xmin, xmax=xmax, lw=2, color='red') plt.xlim([xmin, xmax]) plt.tight_layout() plt.show() # select features and train model by yourself from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.metrics import r2_score from sklearn.metrics import mean_squared_error from sklearn.linear_model import LinearRegression from sklearn.preprocessing import PolynomialFeatures from sklearn.ensemble import RandomForestRegressor import itertools X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.3, random_state=0) #print('#Training data points: %d' % X_train.shape[0]) #print('#Testing data points: %d' % X_test.shape[0]) # Standardization sc = StandardScaler() X_train_std = sc.fit_transform(X_train) X_test_std = sc.transform(X_test) penta = PolynomialFeatures(degree=5) regr = LinearRegressionGD(eta=0.0000005, n_iter=5000, random_state=3333) X_quad_train = penta.fit_transform(X_train_std) X_quad_test = penta.transform(X_test_std) Z = sc.fit_transform(X_quad_train) R = np.dot(Z.T, Z) / X_quad_train.shape[0] eigen_vals, eigen_vecs = np.linalg.eigh(R) eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:,i]) for i in range(len(eigen_vals))] eigen_pairs.sort(reverse=True) W_2D = eigen_pairs[0][1][:, np.newaxis] for i in range(1, 250): W_2D = np.hstack((W_2D, eigen_pairs[i][1][:, np.newaxis])) Z_pca = Z.dot(W_2D) Z_pca_test = sc.transform(X_quad_test).dot(W_2D) #print('#Features: %d' % X_quad_train.shape[1]) regr = regr.fit(Z_pca, y_train) y_train_pred = regr.predict(Z_pca) y_test_pred = regr.predict(Z_pca_test) print('MSE train: %.2f, test: %.2f' % (mean_squared_error(y_train, y_train_pred), mean_squared_error(y_test, y_test_pred))) print('R^2 train: %.2f, test: %.2f' % (r2_score(y_train, y_train_pred), r2_score(y_test, y_test_pred))) #residual_plot(y_train, y_train_pred, y_test, y_test_pred) from sklearn.ensemble import RandomForestRegressor forest = RandomForestRegressor( n_estimators=1000, criterion='mse', random_state=1, n_jobs=-1) forest.fit(X_train, y_train) y_train_pred = forest.predict(X_train) y_test_pred = forest.predict(X_test) print('MSE train: %.2f, test: %.2f' % (mean_squared_error(y_train, y_train_pred), mean_squared_error(y_test, y_test_pred))) print('R^2 train: %.2f, test: %.2f' % (r2_score(y_train, y_train_pred), r2_score(y_test, y_test_pred))) # Residual plot plt.scatter( y_train_pred, y_train_pred - y_train, c='blue', marker='o', label='Training data') plt.scatter( y_test_pred, y_test_pred - y_test, c='green', marker='s', label='Test data') plt.xlabel('Predicted values') plt.ylabel('Residuals') plt.legend(loc='upper left') plt.hlines(y=0, xmin=-10, xmax=50, lw=2, color='red') plt.xlim([-10, 50]) plt.tight_layout() plt.show() ```
github_jupyter
## Problem - Let's say you have a list comments made on a blog post, and you want to know the top 3 Users with the most number of comments. - How to accomplish that in the most pythonic way possible? ``` from collections import namedtuple Comment = namedtuple('Comment', ['author', 'content']) comments = [ Comment(author='Junior', content='Python3 is awesome'), Comment(author='Zak', content='Yeah I agree'), Comment(author='Amy', content='Indeed'), Comment(author='Junior', content='Python3 beats Python2'), Comment(author='Paul', content='Sure'), Comment(author='Ralf', content='Yeah I agree'), Comment(author='Becca', content='Yeah I agree'), Comment(author='Zak', content='Yeah I agree'), Comment(author='Simon', content='Yeah I agree'), Comment(author='Zak', content='Yeah I agree'), Comment(author='Becca', content='Yeah I agree'), Comment(author='Junior', content='Yeah I agree'), Comment(author='Matt', content='Yeah I agree'), Comment(author='Ali', content='Yeah I agree'), Comment(author='Becca', content='Yeah I agree'), Comment(author='Junior', content='Yeah I agree'), Comment(author='Amy', content='Yeah I agree'), Comment(author='Becca', content='We all agree then :)') ] comments_authors = [comment.author for comment in comments] print(f'comments_authors = {comments_authors}') ``` ## Answer - The most Pythonic way is to use collections.Counter ``` # BAD WAY: to much of boilerplate from collections import defaultdict authors_count = defaultdict(int) #<0> for author in comments_authors: authors_count[author] += 1 #<0> print(f'authors_count = {authors_count}') sorted_authors = sorted(authors_count.items(), key=lambda el: el[1], reverse=True) #<1> print(f'sorted_authors = {sorted_authors}') top3 = sorted_authors[:3] print(f'top3 = {top3}') # GOOD WAY: using collections.Counter from collections import Counter authors_count = Counter(comments_authors) #<2> print(f'authors_count = {authors_count}') top3 = authors_count.most_common(3) #<3> print(f'top3 = {top3}') ``` ## Discussion - <0> using defaultdict allows us to auto-initialize keys to an integer value of 0 if missing. (ref: 3.2) - <1> items() return a list of (author, count) tuples that we sort in descending order (via reverse=True) based on the count (via key=lambda el: el[1]) - <2, 3> building the authors_count Lookup Table via Counter instead of dict we can find the top3 comment authors in just 2 steps. ## Problem - You are keeping track of a dict of average daily reviews for each movie, and one per day. - How to know if a movie has ever been reviewed in the most Pythonic way regardless of the review day? ``` daily_avg_reviews = { '01-Jan-2019': {'Python3 beats Python2': 3.5, 'Python2 end game': 4.7}, '02-Jan-2019': {'Python2 end game': 3.9}, '03-Jan-2019': {'Python3 beats Python2': 4.5}, '04-Jan-2019': {'Python3 is the future': 5.0, 'Python2 end game': 4.1}, } all_movies = [ 'Python3 beats Python2', 'Python2 end game', 'Python3 is the future Season 2', 'Python3 is the future', 'dummy movie', ] ``` ## Answer - The most Pythonic way is to use collections.ChainMap ``` # BAD WAY: not the most pythonic for movie_name in all_movies: for day, day_reviews_dict in daily_avg_reviews.items(): if movie_name in day_reviews_dict: print(f'[HIT] - [{movie_name}] has been reviewed !') break else: print(f'[MISS] - [{movie_name}] has NEVER been reviewed !') # GOOD WAY: Using ChainMap from collections import ChainMap all_days_reviews_dicts = daily_avg_reviews.values() #<0> chained_lookup = ChainMap(*all_days_reviews_dicts) #<1> print(f'chained_lookup = {chained_lookup}') print() for movie_name in all_movies: if movie_name in chained_lookup: #<2> print(f'[HIT] - [{movie_name}] has been reviewed !') else: print(f'[MISS] - [{movie_name}] has NEVER been reviewed !') ``` ## Discussion - <0> building an iterator made of all the individual daily average reviews dicts - <1> passing all the individual daily average reviews dicts to ChainMap to "emulate" a single "meta" dict from them. - <2> using the in operator on the chained_lookup finds a given key in each of the individual dict sequentially and returns True if one of them contains the key. ## Problem - we have the same dict above mapping days to daily_avg_reviews per movie with the keys sorted by dates and we would like the order to be preserved every time we insert a new day or remove an existing one. ## Answer - Using collections.OrederedDict it is possible to have key-ordered lookup table. ``` # <0> from collections import OrderedDict daily_avg_reviews = OrderedDict() daily_avg_reviews['01-Jan-2019'] = {'Python3 beats Python2': 3.5, 'Python2 end game': 4.7} daily_avg_reviews['02-Jan-2019'] = {'Python2 end game': 3.9} daily_avg_reviews['03-Jan-2019'] = {'Python3 beats Python2': 4.5} daily_avg_reviews['04-Jan-2019'] = {'Python3 is the future': 5.0, 'Python2 end game': 4.1} print(f'keys insertion order preserved during = {list(daily_avg_reviews.keys())}') del daily_avg_reviews['04-Jan-2019'] print(f'keys order preserved after delete = {list(daily_avg_reviews.keys())}') ``` ## Discussion - <0> OrderedDict preserves keys insertion order. However, They consume more than twice the memory space required by a normal dict as they use linked-lists under the hood to maintain the order so they should be use carefully.
github_jupyter
``` import numpy as np import matplotlib.pyplot as plt ``` # Integrais difíceis Vimos em aula que os métodos de "alta ordem" convergem mais rápido quando a função é várias vezes diferenciável. Neste teste, vamos olhar a situação contrária: vamos integrar funções que não são - diferenciáveis - contínuas no intervalo de integração, e ver como os métodos se comportam! ``` # Derivada simétrica, como aproximação numérica def df(f,x,h=2**-17): """17*3 = 54""" return (f(x+h) - f(x-h))/(2*h) # Métodos de Cauchy, ponto médio e simpson. # Nomes: cauchy, midpoint, simpson # Forma: (f,a,b,n=100) # YOUR CODE HERE raise NotImplementedError() methods = [cauchy, midpoint, simpson] ``` # Parte 1: Integrando uma função que não é diferenciável ## Questão 1: A função e sua primitiva Vamos calcular o valor da integral de duas formas: pelo TFC, usando a primitiva, obtemos um valor (quase) exato. Este valor servirá para comparar com as respostas dos métodos de aproximação (Cauchy, Ponto Médio e Simpson). ``` f = np.abs ``` Dê uma primitiva de $f$. ``` def F(x): """Primitiva de x -> |x|. Vetorizada em x.""" # YOUR CODE HERE raise NotImplementedError() ``` #### Testando que a sua primitiva parece uma primitiva ``` assert np.abs(df(F,1) - 1) < 1e-12 assert np.abs(df(F,-2) - 2) < 2e-12 ## Esta caixa depende de F ser vetorizada. np.random.seed(1) xs = np.random.randn(10) relerr = (df(F,xs) - f(xs))/f(xs) assert np.all(np.abs(relerr) < 2e-11) ``` $F$ é diferenciável em zero, mas é uma conta difícil: o erro já é bem maior... ``` df(F,0) ``` ## Questão 2: Integrando em $[0,1]$ A que velocidade decai o erro de integração de $f$ no intervalo $[0,1]$, conforme o número $n$ de divisões aumenta? Faça um gráfico com alguns valores de $n$, para observar a ordem dos três métodos - Dica: `f.__name__` dá o nome de uma função, para você usar na legenda ``` ax = None ans = F(1) - F(0) ns = np.logspace(1,4,dtype=int) for m in methods: # YOUR CODE HERE raise NotImplementedError() plt.legend() ax = plt.gca() plt.show() assert len(ax.lines) == 3 assert len(ax.legend().texts) == 3 ``` Dê a ordem do método de Cauchy para esta função neste intervalo - A ordem de um método é o expoente $d$ tal que o erro de integração $e_n$ (com $n$ subdivisões) decai em função de $n$ como $\text{Const}/n^d$. ``` # Forma da resposta: decay_speed = n # YOUR CODE HERE raise NotImplementedError() for n in np.random.randint(100,10000, size=(6)): I1 = cauchy(f,0,1,n) err1 = np.abs(I1 - ans) I2 = cauchy(f,0,1,2*n) err2 = np.abs(I2 - ans) assert np.abs( err2/err1 - 2**-decay_speed ) < 2*n*1e-14 ``` Agora, explique o que aconteceu nos métodos do ponto médio e de Simpson. YOUR ANSWER HERE ## Questão 3: Mudando o intervalo de integração Vamos fazer vários gráficos, então para evitar fazer muito _copy-paste_ vamos definir uma função que faz sempre "o mesmo tipo de gráfico". ``` # Faça aqui uma função genérica para "fazer gráficos de erro" # Ela pode ser adaptada do código que você fez para a questão 2. def graph_err(f,a,b,ans): """ Gráficos de erro de integração da função $f$ no intervalo $[a,b]$ - em função do número de subdivisões; - para os três métodos cauchy, midpoint e simpson. A resposta teórica (para poder calcular o erro!) é dada em `ans`.""" ns = np.logspace(1,4,dtype=int) for m in methods: # YOUR CODE HERE raise NotImplementedError() plt.legend() ``` ### 3.1: $I = [-1,2]$ Gráfico do erro numérico para a integral de $f$ no intervalo $[-1,2]$. Qual a melhor escala para ver este gráfico? Use o gráfico para escolher entre `plot`, `semilogx`, `semilogy` e `loglog` na sua função `graph_err()` acima! ``` ax = None a,b = -1,2 ans = F(b) - F(a) graph_err(f,a,b,ans) ax = plt.gca() plt.show() assert len(ax.lines) == 3 assert len(ax.legend().texts) == 3 ``` Quais são as velocidades de convergência dos métodos? Você percebe algum comportamento especial? Como explicar isso? YOUR ANSWER HERE ### 3.2 $I = [-1,1]$ Agora, repita o estudo para o intervalo $[-1,1]$. Aqui, será melhor ter um gráfico separado para cada um dos três métodos. ``` ans = F(1) - F(-1) ns = np.logspace(1,4,dtype=int) _, ax = plt.subplots(ncols=3, figsize=(15,4)) for m,a in zip(methods,ax): # YOUR CODE HERE raise NotImplementedError() plt.show() ``` O que aconteceu agora? Porquê? YOUR ANSWER HERE ### 3.3 $I = \text{random}$. Enfim, estude intervalos "aleatórios"! Aqui, usar a `graph_err` vai ser uma boa ideia ;-) ``` _, ax = plt.subplots(ncols=3, figsize=(15,4)) for axi in ax: # Três intervalos aleatórios! (porque pedimos 3 subgráficos acima) a,b = -np.random.rand(), np.random.rand() axi.set_title('$|x|$ in $[{:.2},{:.2}]$'.format(a,b)) plt.sca(axi) # Para a graph_err usar o eixo certo # YOUR CODE HERE raise NotImplementedError() plt.show() ``` Estes resultados são mais fáceis ou mais difíceis de interpretar? Porquê? YOUR ANSWER HERE # Parte 2: Uma função descontínua ``` def f(x): return np.cos(x)*np.sign(x) ts = np.linspace(-1,2) plt.plot(ts, f(ts)) plt.show() ``` ## Questão 4: De novo, dê uma primitiva de $f$ ``` def F(x): # YOUR CODE HERE raise NotImplementedError() assert np.abs(df(F,1) - f(1)) < 1e-12 np.random.seed(1) xs = np.random.randn(10) relerr = (df(F,xs) - f(xs))/f(xs) assert np.all(np.abs(relerr) < 5e-11) ``` ## Questão 5: Gráficos de erro Aqui a graph_err vai ajudar bastante! Comece com o intervalo $[-1,1]$: ``` a,b = -1,1 # YOUR CODE HERE raise NotImplementedError() plt.show() ``` E agora $[-1,2]$: ``` a,b = -1,2 # YOUR CODE HERE raise NotImplementedError() plt.show() ``` E um intervalo aleatório ``` a,b = -np.random.rand(), np.random.rand() plt.title('$|x|$ in $[{:.2},{:.2}]$'.format(a,b)) # YOUR CODE HERE raise NotImplementedError() plt.show() ``` O que você pode concluir sobre o comportamento dos três métodos para funções descontínuas? Quais fenômenos são similares para funções contínuas, mas não diferenciáveis? Porquê? YOUR ANSWER HERE
github_jupyter
# Model Creation for the Penguin Classifier App This notebook shows how I created the machine learning model used for the penguin classifier app shown in the lectures on graphical user interfaces. My approach is very simple -- I'm using a decision tree and skipping cross-validation. The resulting model is OK, but with cross-validation and more careful modeling decisions, I know that you can do much better!! ``` import pandas as pd from matplotlib import pyplot as plt import numpy as np from sklearn.model_selection import train_test_split from sklearn import preprocessing from sklearn.model_selection import cross_val_score from sklearn import tree url = 'https://raw.githubusercontent.com/PhilChodrow/PIC16A/master/datasets/palmer_penguins.csv' penguins = pd.read_csv(url) penguins['Species'] = penguins['Species'].str.split().str.get(0) penguins.groupby(['Island', 'Species'])[['Body Mass (g)', 'Culmen Length (mm)']].aggregate(np.mean) train, test = train_test_split(penguins, test_size = 0.5) def prep_penguins(data_df): """ prepare the penguins data set first, we apply a LabelEncoder to the Species and Island columns second, we remove all columns other than the three we'll use for this exercise. third, we remove rows with na values in any of the required columns. finally, we split into predictor and target variables. data_df: a row-subset of the penguins data frame return: X, y, the cleaned predictor and target variables (both data frames) """ # copy the original df to suppress warnings df = data_df.copy() # apply label encoders to Species and Island columns le = preprocessing.LabelEncoder() df['Species'] = le.fit_transform(df['Species']) le = preprocessing.LabelEncoder() df['Island'] = le.fit_transform(df['Island']) # only need these columns df = df[['Species', 'Island', 'Body Mass (g)', 'Culmen Length (mm)']] # remove rows if they have NA in any of the needed columns df = df.dropna() # separate into predictor and target variables X = df.drop(['Species'], axis = 1) y = df['Species'] return(X, y) X_train, y_train = prep_penguins(train) X_test, y_test = prep_penguins(test) # make the model T = tree.DecisionTreeClassifier(max_depth = 5) T.fit(X_train, y_train) T.score(X_train, y_train), T.score(X_test, y_test) ``` # Pickling Here is the new part: after creating the model, we *pickle* it. This saves its state, allowing us to load it into a new Python session without going through the hassle of downloading the data and training the model every time we want to use the app. ``` import pickle # saves the model pickle.dump(T, open("model.p", "wb")) # loads the model from file T2 = pickle.load(open("model.p", "rb")) # T2 can now do everything T can T2.score(X_test, y_test) # from LabelEncoder, # input 0 means Biscoe Island # output 2 means Gentoo penguin T2.predict([[0, 5000, 47]]) ```
github_jupyter
# Multi Layer Perceptron - Fashion MNIST We will be starting with a classification exercise. And using the **Fashion Mnist** dataset to do so. It involves identifying the 10 types of products that are there in the image. - Train: 60,000 images - Test: 10,000 images - Class: 10 - Labels: - 0: T-shirt/top - 1: Trouser - 2: Pullover - 3: Dress - 4: Coat - 5: Sandal - 6: Shirt - 7: Sneaker - 8: Bag - 9: Ankle boot ### Get Input and Output ``` import numpy as np import pandas as pd import keras import matplotlib.pyplot as plt %matplotlib inline import vis from keras.datasets import fashion_mnist (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data() x_train.shape, y_train.shape, x_test.shape, y_test.shape labels = vis.fashion_mnist_label() labels ``` #### See an Image ``` import vis vis.imshow(x_train[0]) ``` #### See an Image from each class ``` vis.imshow_unique(x_train, y_train, labels) ``` #### See 500 of the Images ``` vis.imshow_sprite(x_train[:500]) ``` ## Multi Layer Perceptron ![](img/single_dl.png) Lets learn both the representation and the classifier together now **Step 1: Prepare the images and labels** Convert from 'uint8' to 'float32' and normalise the data to (0,1) ``` x_train = x_train.astype("float32")/255 x_test = x_test.astype("float32")/255 ``` Flatten the data from (60000, 28, 28) to (60000, 784) ``` x_train_flatten = x_train.reshape(60000, 28 * 28) x_test_flatten = x_test.reshape(10000, 28 * 28) ``` Convert class vectors to binary class matrices ``` from keras.utils import to_categorical y_train_class = to_categorical(y_train, 10) y_test_class = to_categorical(y_test, 10) ``` **Step 2: Craft the feature transfomation and classifier model ** ``` from keras import Sequential from keras.layers import Dense, Activation model_mlp = Sequential() model_mlp.add(Dense(100, activation='relu', input_shape=(28 * 28,))) model_mlp.add(Dense(50, activation='relu')) model_mlp.add(Dense(10, activation='softmax')) model_mlp.summary() ``` **Step 3: Compile and fit the model** ``` model_mlp.compile(loss='categorical_crossentropy', optimizer="sgd", metrics=['accuracy']) %%time mlp_output = model_mlp.fit(x_train_flatten, y_train_class, epochs=10, verbose=0, validation_data=(x_test_flatten, y_test_class)) ``` **Step 4: Check the performance of the model** ``` mlp_metrics = mlp_output.history vis.metrics(mlp_metrics) score = model_mlp.evaluate(x_test_flatten, y_test_class, verbose=1) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ``` **Step 5: Make & Visualise the Prediction** ``` predict_classes = model_mlp.predict_classes(x_test_flatten) pd.crosstab(y_test, predict_classes) proba = model_mlp.predict_proba(x_test_flatten) i = 4 vis.imshow(x_test[i], labels[y_test[i]]) | vis.predict(proba[i], y_test[i], labels) ```
github_jupyter
# Problem 2 In this problem i'm going to: - retrieve a set of images (faces) - compute mean image and perform mean substraction - perform singular value decomposition on them - plot the error function (as frobenius norm) for low rank approximation of images - compute r-dimensional feature matrix for images array - train a logistic regression model and perform classification of faces and compare the results with the test images To show code and it's results I used jupyter notebook and python language to implement this project. ``` # Some initializations %matplotlib inline import numpy as np from PIL import Image from numpy.ma import array X = [] XTag = [] XTest = [] XTestTag = [] shape = [1, 2500] average = np.zeros(shape) averageTest = np.zeros(shape) ``` ### Part A, B In the first part I read `train.txt` and `test.txt` files and their inputs to `X` and `XTest` arrays. ``` with open('faces/train.txt') as trainFile: trainText = trainFile.readlines() for line in trainText: path = line.split(' ') arr = np.reshape(array(Image.open(path[0])), shape) X.append(arr) XTag.append(path[1].replace("\n", "")) average += arr with open('faces/test.txt') as testFile: testText = testFile.readlines() for line in testText: path = line.split(' ') arr = np.reshape(array(Image.open(path[0])), shape) XTest.append(arr) XTestTag.append(path[1].replace("\n", "")) averageTest += arr ``` ### Part C: computing average face Here I computed the average faces for `X` and `XTest` datasets and output the result below. I also defined two helper functions (`showImage` and `showTwoImages`) for showing images inline this jupyter notebook. ``` from matplotlib import pyplot as plt def showImage(data, title): data = data.reshape([50, 50]) fig, [ax1] = plt.subplots(1, 2) ax1.set_title(title) ax1.set_xticks([]) ax1.set_yticks([]) ax1.imshow(data, cmap="gray") def showTwoImages(firstData, firstTitle, secondData, secondTitle): firstData = firstData.reshape([50, 50]) secondData = secondData.reshape([50, 50]) fig, [ax1, ax2] = plt.subplots(1, 2) ax1.set_title(firstTitle) ax1.set_xticks([]) ax1.set_yticks([]) ax1.imshow(firstData, cmap="gray") ax2.set_title(secondTitle) ax2.set_xticks([]) ax2.set_yticks([]) ax2.imshow(secondData, cmap="gray") average /= len(X) averageTest /= len(X) showTwoImages(average, "Average Image", averageTest, "Test Average Image") ``` ### Part D: Mean subtraction Here I subtract the mean matrices from the original matrices to normalize them and preview 53'th index of `X` and `XTest` as some sample results for each case. ``` for i in range(len(X)): X[i] = X[i] - average for i in range(len(XTest)): XTest[i] = XTest[i] - average showTwoImages(X[53], "53'th Normal image", XTest[53], "53'th Normal Test Image") ``` ### Part E: computing SVD I used `linalg` library's `svd` function to perform singular value decomposition for `X` and `XTest` matrices and show images in 53'th index of `V^T` matrices as some results. ``` X = np.reshape(X, [len(X), 2500]) XTest = np.reshape(XTest, [len(XTest), 2500]) U, S, Vt = np.linalg.svd(X, full_matrices=False) UTest, STest, VtTest = np.linalg.svd(XTest, full_matrices=False) showTwoImages(Vt[53], "Vt for real data", VtTest[53], "Vt for test data") ``` ### Part F: Low Rank Approximation Error I performed a low rank approximation of `X` matrix for `r` in `[1,200]` and visualized the result. as one shall see, the approximation is better as we increase `r` because we are prserving more amount of data in our approximation and the result will become much more exact. ``` errorData = [] for r in range(1, 200): error = 0 sigma = np.zeros([len(X), len(X)]) for i in range(r): sigma[i][i] = S[i] lowRankApprox = np.matmul(np.matmul(U, sigma), Vt) errorData.append(np.linalg.norm(lowRankApprox - X, 'fro')) plt.plot(errorData) plt.xlabel('R') plt.ylabel('Approximation error') plt.show() ``` ### Part G: computing feature matrix Here I provide a function to compute feature matrix based on the input matrix and its `V^T` associated matrix in singular value decomposition. ``` from sklearn import linear_model def computeF(dataset, datasetVt, r): f = np.zeros([len(dataset), r]) reshaped_set = np.reshape(dataset, [len(dataset), 2500]) reshaped_v = np.reshape(datasetVt[0:r].T, [2500, r]) np.matmul(reshaped_set, reshaped_v, f) return f ``` ### Part G and H: performing face recognition In the final part I computed the feature matrices for `X` and `XTest` matrices and then used `LogisticRegression()` class from `linear_model` package to predict class (or tag) of each image and perform a recognition of the faces to know which face belongs to which human. I also collected the errors (wrong predicts) in `errorData` array and visualized them; as we can see, the number of errors reduces as we increase the size of feature matrix and again that's because more data is preserved in our feataure model. ``` e = 0 errorData = [] for r in range(1, 200): F = computeF(X, Vt, r) FTest = computeF(XTest, Vt, r) e = 0 regression = linear_model.LogisticRegression() regression.fit(F, XTag) predict = regression.predict(FTest) for j in range(len(XTestTag)): if XTestTag[j] != predict[j]: e += 1 errorData.append(e) print(e) plt.plot(errorData) plt.xlabel("R") plt.ylabel("Wrong predictions") plt.show() ```
github_jupyter
``` %matplotlib inline import pandas as pd import cv2 import numpy as np from matplotlib import pyplot as plt df = pd.read_csv("data/1105573_SELECT_t___FROM_data_data_t.csv",header=None,index_col=0) df = df.rename(columns={0:"no", 1: "CAPTDATA", 2: "CAPTIMAGE",3: "timestamp"}) %%time df.info() df.sample(5) def alpha_to_gray(img): alpha_channel = img[:, :, 3] _, mask = cv2.threshold(alpha_channel, 128, 255, cv2.THRESH_BINARY) # binarize mask color = img[:, :, :3] img = cv2.bitwise_not(cv2.bitwise_not(color, mask=mask)) return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def preprocess(data): data = bytes.fromhex(data[2:]) img = cv2.imdecode( np.asarray(bytearray(data), dtype=np.uint8), cv2.IMREAD_UNCHANGED ) img = alpha_to_gray(img) kernel = np.ones((3, 3), np.uint8) img = cv2.dilate(img, kernel, iterations=1) img = cv2.medianBlur(img, 3) kernel = np.ones((4, 4), np.uint8) img = cv2.erode(img, kernel, iterations=1) # plt.imshow(img) return img df["IMAGE"] = df["CAPTIMAGE"].apply(preprocess) def bounding(gray): # data = bytes.fromhex(df["CAPTIMAGE"][1][2:]) # image = cv2.imdecode( np.asarray(bytearray(data), dtype=np.uint8), cv2.IMREAD_UNCHANGED ) # alpha_channel = image[:, :, 3] # _, mask = cv2.threshold(alpha_channel, 128, 255, cv2.THRESH_BINARY) # binarize mask # color = image[:, :, :3] # src = cv2.bitwise_not(cv2.bitwise_not(color, mask=mask)) ret, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY) binary = cv2.bitwise_not(binary) contours, hierachy = cv2.findContours(binary, cv2.RETR_EXTERNAL , cv2.CHAIN_APPROX_NONE) ans = [] for h, tcnt in enumerate(contours): x,y,w,h = cv2.boundingRect(tcnt) if h < 20: continue if 50 < w < 100: # 2개가 붙어 있는 경우 ans.append([x,y,w//2+5,h]) ans.append([x+(w//2)-5,y,w//2+5,h]) continue # cv2.rectangle(src,(x,y),(x+w,y+h),(255,0,0),1) ans.append([x,y,w,h]) return ans # cv2.destroyAllWindows() df["bounding"] = df["IMAGE"].apply(bounding) def draw_bounding(idx): CAPTIMAGE = df["CAPTIMAGE"][idx] bounding = df["bounding"][idx] data = bytes.fromhex(CAPTIMAGE[2:]) image = cv2.imdecode( np.asarray(bytearray(data), dtype=np.uint8), cv2.IMREAD_UNCHANGED ) alpha_channel = image[:, :, 3] _, mask = cv2.threshold(alpha_channel, 128, 255, cv2.THRESH_BINARY) # binarize mask color = image[:, :, :3] src = cv2.bitwise_not(cv2.bitwise_not(color, mask=mask)) for x,y,w,h in bounding: # print(x,y,w,h) cv2.rectangle(src,(x,y),(x+w,y+h),(255,0,0),1) return src import random nrows = 4 ncols = 4 fig, axes = plt.subplots(nrows=nrows, ncols=ncols) fig.set_size_inches((16, 6)) for i in range(nrows): for j in range(ncols): idx = random.randrange(20,22800) axes[i][j].set_title(str(idx)) axes[i][j].imshow(draw_bounding(idx)) fig.tight_layout() plt.savefig('sample.png') plt.show() charImg = [] for idx in df.index: IMAGE = df["IMAGE"][idx] bounding = df["bounding"][idx] for x,y,w,h in bounding: newImg = IMAGE[y:y+h,x:x+w] newImg = cv2.resize(newImg, dsize=(41, 38), interpolation=cv2.INTER_NEAREST) charImg.append(newImg/255.0) # cast to numpy arrays trainingImages = np.asarray(charImg) # reshape img array to vector def reshape_image(img): return np.reshape(img,len(img)*len(img[0])) img_reshape = np.zeros((len(trainingImages),len(trainingImages[0])*len(trainingImages[0][0]))) for i in range(0,len(trainingImages)): img_reshape[i] = reshape_image(trainingImages[i]) %%time from sklearn.cluster import KMeans import matplotlib.pyplot as plt import seaborn as sns # create model and prediction model = KMeans(n_clusters=300,algorithm='auto') model.fit(img_reshape) predict = pd.DataFrame(model.predict(img_reshape)) predict.columns=['predict'] import pickle pickle.dump(model, open("KMeans_300_1105573.pkl", "wb")) %%time import random r = pd.concat([pd.DataFrame(img_reshape),predict],axis=1) nrows = 4 ncols = 10 fig, axes = plt.subplots(nrows=nrows, ncols=ncols) fig.set_size_inches((16, 6)) for j in range(300): i = 0 for idx in r[r["predict"] == j].sample(nrows * ncols).index: axes[i // ncols][i % ncols].set_title(str(idx)) axes[i // ncols][i % ncols].imshow(trainingImages[idx]) i+=1 fig.tight_layout() plt.savefig('res_1105573/sample_' + str(j) + '.png') ```
github_jupyter
# Реализация логистической регрессии в TensorFlow ``` import numpy as np import tensorflow as tf %matplotlib inline from matplotlib import pyplot as plt ``` ## Генерируем данные для задачи регрессии ``` NUM_FEATURES = 2 NUM_SAMPLES = 1000 from sklearn.datasets import make_classification X, y = make_classification(n_samples = NUM_SAMPLES, n_features = NUM_FEATURES, n_informative = NUM_FEATURES, n_redundant = 0, n_classes = 2, n_clusters_per_class = 1, class_sep = 0.75, random_state = 54312) y = y.reshape(-1, 1) ones = np.where(y == 1) # индексы объектов класса '1' zeros = np.where(y == 0) # индексы объектов класса '0' plt.xlabel('x') plt.ylabel('y') plt.plot(X[ones, 0], X[ones, 1], 'ob', X[zeros, 0], X[zeros, 1], 'or'); ``` ## Вспомогательная функция для создания операций ``` import string def py_func_with_grad(func, inp, Tout, grad, name = None, stateful = False, graph = None): name_prefix = ''.join(np.random.choice(list(string.ascii_letters), size = 10)) name = '%s_%s' % (name_prefix, name or '') grad_func_name = '%s_grad' % name tf.RegisterGradient(grad_func_name)(grad) g = graph or tf.get_default_graph() with g.gradient_override_map({'PyFunc': grad_func_name, 'PyFuncStateless': grad_func_name}): with tf.name_scope(name, 'PyFuncOp', inp): return tf.py_func(func, inp, Tout, stateful = stateful, name = name) ``` ## Реализация линейной опреаций ``` def linear_op_forward(X, W): ''' Реализация линейной операции ''' return np.dot(X, W.T) # аргументы являются numpy-массивами def linear_op_backward(op, grads): ''' Реализация вычисления градиента линейной операции ''' X = op.inputs[0] # тензор входных данных W = op.inputs[1] # тензор параметров модели dX = tf.multiply(grads, W) dW = tf.reduce_sum(tf.multiply(X, grads), axis = 0, keep_dims = True) return dX, dW def sigmoid_op_forward(X): # TODO: реализовать операцию sigmoid return np.zeros_like(X) def sigmoid_op_backward(op, grads): # TODO: реализовать вычисление градиента для sigmoid return tf.zeros([1, 1]) ``` ## Создание графа вычислений и обучение модели ``` BATCH_SIZE = NUM_SAMPLES // 10 weights = None # в этой переменной мы сохраним результат обучения модели learning_curve = [] # значения ошибки на каждой итерации обучения with tf.Session(graph = tf.Graph()) as sess: # инициализируем сессию вычислений # создаем placeholdr'ы, через них мы будем # передавать внешние данные в граф вычислений plh_X = tf.placeholder(dtype = tf.float32, shape = [None, NUM_FEATURES]) plh_labels = tf.placeholder(dtype = tf.float32, shape = [None, 1]) # создаем переменную для хранения весов модели # эти веса будут изменяться в процессе обучения var_W = tf.Variable(tf.random_uniform(shape = [1, NUM_FEATURES], dtype = tf.float32, seed = 54321)) # создаем переменную для результата предсказания модели var_Pred = py_func_with_grad(linear_op_forward, # функция предсказания модели [plh_X, var_W], # аргументы функции [tf.float32], # тип выходных значений name = 'linear_op', # имя операции grad = linear_op_backward, # функция для вычисления градиента graph = sess.graph) # объект графа вчислений # создаем переменную для результата операции sigmoid var_Sigmoid = py_func_with_grad(sigmoid_op_forward, [var_Pred], [tf.float32], name = 'sigmoid_op', grad = sigmoid_op_backward, graph = sess.graph) # кроссэнтропийная функция потерь для бмнарной классификации cost = tf.losses.sigmoid_cross_entropy(plh_labels, var_Sigmoid) # инициализируем оптимизатор и указываем скорость обучения optimizer = tf.train.GradientDescentOptimizer(learning_rate = 0.9).minimize(cost) # инициализируем placeholder'ы и переменные sess.run(tf.global_variables_initializer()) indices = np.arange(len(X)) # массив индексов объектов # выполняем итерации по 10-ти эпохам for epoch in range(10): # вначале каждой эпохи перемешиваем индексы np.random.shuffle(indices) # внутри каждой эпохи данные разбиваются на батчи for batch in range(len(X) // BATCH_SIZE): # выбираем индексы очередного батча batch_indices = indices[batch * BATCH_SIZE:(batch + 1) * BATCH_SIZE] # выполняем шаг обучения: вычисляем ошибку и обновляем веса loss, _ = sess.run([cost, optimizer], # указываем, какие операции необходимо выполнить feed_dict = {plh_X: X[batch_indices], # передаем входные данные для вычисления plh_labels: y[batch_indices]}) # сохраняем занчения ошибки для построения кривой обучения learning_curve.append(loss) # выводим текущее значение ошибки для каждого 10го шага steps = len(learning_curve) - 1 if steps % 10 == 0: print('[%03d] loss=%.3f weights=%s' % (steps, loss, var_W.eval())) # сохраняем обученные веса weights = var_W.eval() ``` ## Визуализируем кривую обучения ``` plt.xlabel('step') plt.ylabel('loss') plt.title('Learning curve') plt.plot(learning_curve); ``` ## Визуализируем разделяющую гиперплоскость ``` y_pred = - X[:, 0] * weights[0, 0] / weights[0, 1] order = np.argsort(X[:, 0]) plt.xlabel('x') plt.ylabel('y') plt.plot(X[ones, 0], X[ones, 1], 'ob', X[zeros, 0], X[zeros, 1], 'or', X[order, 0], y_pred[order], '-g'); ```
github_jupyter
``` import os os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'mesolitica-storage.json' from google.cloud import storage client = storage.Client() bucket = client.bucket('mesolitica-general') best = '508000' !mkdir t5-base-summary model = best # blob = bucket.blob(f't5-base-summary/model.ckpt-{model}.data-00000-of-00002') # blob.download_to_filename(f't5-base-summary/model.ckpt-{model}.data-00000-of-00002') # blob = bucket.blob(f't5-base-summary/model.ckpt-{model}.data-00001-of-00002') # blob.download_to_filename(f't5-base-summary/model.ckpt-{model}.data-00001-of-00002') # blob = bucket.blob(f't5-base-summary/model.ckpt-{model}.index') # blob.download_to_filename(f't5-base-summary/model.ckpt-{model}.index') # blob = bucket.blob(f't5-base-summary/model.ckpt-{model}.meta') # blob.download_to_filename(f't5-base-summary/model.ckpt-{model}.meta') # blob = bucket.blob('t5-base-summary/checkpoint') # blob.download_to_filename('t5-base-summary/checkpoint') # blob = bucket.blob('t5-base-summary/operative_config.gin') # blob.download_to_filename('t5-base-summary/operative_config.gin') # with open('t5-base-summary/checkpoint', 'w') as fopen: # fopen.write(f'model_checkpoint_path: "model.ckpt-{model}"') import os os.environ['CUDA_VISIBLE_DEVICES'] = '0' import tensorflow as tf import tensorflow_datasets as tfds import t5 model = t5.models.MtfModel( model_dir='t5-base-summary', tpu=None, tpu_topology=None, model_parallelism=2, batch_size=5, sequence_length={"inputs": 1024, "targets": 1024}, learning_rate_schedule=0.003, save_checkpoints_steps=5000, keep_checkpoint_max=3, iterations_per_loop=100, mesh_shape="model:1,batch:1", mesh_devices=["gpu:0"] ) !rm -rf output/* saved_model_path = model.export( 'output', checkpoint_step=-1, beam_size=1, temperature=0, sentencepiece_model_path='sp10m.cased.t5.model' ) saved_model_path.decode() import tensorflow_text tf.compat.v1.reset_default_graph() sess = tf.InteractiveSession() meta_graph_def = tf.compat.v1.saved_model.load(sess, ["serve"], 'output/1603121610') signature_def = meta_graph_def.signature_def["serving_default"] pred = lambda x: sess.run( fetches=signature_def.outputs["outputs"].name, feed_dict={signature_def.inputs["input"].name: x} ) string = """ Amanah Kedah berpendapat jika ada Adun Pakatan Harapan atau Bersatu negeri itu mahu berpaling tadah memberikan sokongan kepada kumpulan Muafakat Nasional, mereka perku membuat kenyataan rasmi mengenainya. Pengerusi Amanah Kedah, Phahrolrazi Mohd Zawawi, berkata disebabkan tiada mana-mana Adun membuat kenyataan berhubung isu itu maka kerajaan negeri berpendapat tiada apa-apa yang berlaku. Ditemui media selepas mengadakan pertemuan tertutup lebih sejam dengan Menteri Besar, Mukhriz Mahathir, hari ini Phahrolrazi berkata pihaknya juga mendapati kerajaan negeri masih berfungsi seperti biasa. "Kami bincang keadaan semasa, ada juga kita sentuh (cubaan menukar kerajaan negeri), tetapi kita lihat kerajaan masih berfungsi. "Tidak ada apa-apa kenyataan dari pihak sana (pembangkang) bahawa mereka sudah cukup majoriti setakat ini," katanya seperti dipetik BH Online. Spekulasi mengenai pertukaran kerajaan menjadi kencang sejak semalam ekoran berlaku pertemuan tertutup pemimpin PAS dan Umno Kedah di Alor Setar semalam. Turut hadir Setiausaha Agung PAS yang juga Menteri di Jabatan Perdana Menteri, Takiyuddin Hassan, dan Menteri Besar Terengganu, Dr Ahmad Samsuri Mokhtar. Cuba jatuhkan sejak dulu Perkembangan itu berlaku kesan tindakan PKR memecat dan menggantung sejumlah anggota mereka baru-baru ini dan dipercayai memberi kesan terhadap pendirian wakil rakyat parti itu di Kedah. Turut disebut-sebut akan beralih arah dalam perjalanan politik mereka ialah Adun Bersatu. Untuk rekod berdasarkan pecahan parti PAS menguasai kerusi terbesar dalam DUN dan lazimnya pemimpin parti itu akan menjadi pilihan menjadi menteri besar jika berlaku pertukaran kerajaan. Menurut Phahrolrazi, jika ada mana-mana wakil rakyat Bersatu atau PH mahu melompat, mereka wajar menyatakannya secara rasmi. Tanpa kenyataan begitu, katanya, Amanah beranggapan isu perubahan kerajaan negeri masih bersifat spekulasi. Timbalan Pengerusi Amanah Kedah, Dr Ismail Salleh, pula berkata ada kemungkinan Adun Bersatu, PH atau exco negeri tu yang sudah diumpan untuk membelakangkan mandat rakyat. Beliau yang juga exco Kedah berkata memang sejak dulu lagi PAS cuba menjatuhkan kerajaan negeri dengan memujuk Adun PH serta Bersatu bertindak seperti rakan mereka di Perak, Johor dan Selangor. """ import re # minimum cleaning, just simply to remove newlines. def cleaning(string): string = string.replace('\n', ' ') string = re.sub(r'[ ]+', ' ', string).strip() return string string = cleaning(string) r = pred([f'{string}'] * 5) r r = pred([f'ringkasan: {string}'] * 5) r ```
github_jupyter
``` from __future__ import absolute_import from __future__ import division from __future__ import print_function %pylab %matplotlib inline import os import math import time import tensorflow as tf from datasets import dataset_utils,cifar10 from tensorflow.contrib import slim dropout_keep_prob=0.8 image_size = 32 step=20000 learning_rate=0.0002 train_dir = '/tmp/cifar10/elu2-lrn' cifar10_data_dir='/media/ramdisk/data/cifar10' display_step=2 with tf.Graph().as_default(): dataset = cifar10.get_split('train', cifar10_data_dir) data_provider = slim.dataset_data_provider.DatasetDataProvider( dataset, common_queue_capacity=32, common_queue_min=1) image, label = data_provider.get(['image', 'label']) with tf.Session() as sess: with slim.queues.QueueRunners(sess): for i in range(display_step): np_image, np_label = sess.run([image, label]) height, width, _ = np_image.shape class_name = name = dataset.labels_to_names[np_label] plt.figure() plt.imshow(np_image) plt.title('%s, %d x %d' % (name, height, width)) plt.axis('off') plt.show() def cnn_elu(images, num_classes, is_training): #https://github.com/agrawalnishant/tensorflow-1/tree/master/tensorflow/contrib/slim ##vgg와 cifarndet을 참조함 with slim.arg_scope([slim.max_pool2d], stride=2): net = slim.repeat(images, 2, slim.conv2d, 64, [3, 3], scope='conv1',activation_fn=tf.nn.elu,normalizer_fn=tf.nn.lrn) net = slim.max_pool2d(net, [2, 2], scope='pool1') net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2',activation_fn=tf.nn.elu,normalizer_fn=tf.nn.lrn) net = slim.max_pool2d(net, [2, 2], scope='pool2') net = slim.repeat(net, 4, slim.conv2d, 256, [3, 3], scope='conv3',activation_fn=tf.nn.elu,normalizer_fn=tf.nn.lrn) net = slim.max_pool2d(net, [2, 2], scope='pool3') net = slim.repeat(net, 4, slim.conv2d, 512, [3, 3], scope='conv4',activation_fn=tf.nn.elu,normalizer_fn=tf.nn.lrn) net = slim.max_pool2d(net, [2, 2], scope='pool4') net = slim.conv2d(net, 512, [2, 2], padding="VALID", scope='fc6') net = slim.dropout(net, dropout_keep_prob, is_training=is_training, scope='dropout6') net = slim.conv2d(net, 512, [1, 1], scope='fc8', activation_fn=None) net = slim.dropout(net, dropout_keep_prob, is_training=is_training, scope='dropout7') net = slim.conv2d(net, num_classes, [1, 1], activation_fn=None, normalizer_fn=None, scope='fc9') net = tf.squeeze(net, [1,2],name='fc9/squeezed') return net from preprocessing import cifarnet_preprocessing def load_batch(dataset, batch_size=128, height=image_size, width=image_size, is_training=False): """Loads a single batch of data. Args: dataset: The dataset to load. batch_size: The number of images in the batch. height: The size of each image after preprocessing. width: The size of each image after preprocessing. is_training: Whether or not we're currently training or evaluating. Returns: images: A Tensor of size [batch_size, height, width, 3], image samples that have been preprocessed. images_raw: A Tensor of size [batch_size, height, width, 3], image samples that can be used for visualization. labels: A Tensor of size [batch_size], whose values range between 0 and dataset.num_classes. """ data_provider = slim.dataset_data_provider.DatasetDataProvider( dataset, common_queue_capacity=128, common_queue_min=32) image_raw, label = data_provider.get(['image', 'label']) # Preprocess image for usage by Inception. image = cifarnet_preprocessing.preprocess_image(image_raw, height, width, is_training=is_training) # Preprocess the image for display purposes. image_raw = tf.expand_dims(image_raw, 0) image_raw = tf.image.resize_images(image_raw, [height, width]) image_raw = tf.squeeze(image_raw) # Batch it up. images, images_raw, labels = tf.train.batch( [image, image_raw, label], batch_size=batch_size, num_threads=4, capacity=4 * batch_size) return images, images_raw, labels %%time # This might take a few minutes. print('Will save model to %s' % train_dir) with tf.Graph().as_default(): tf.logging.set_verbosity(tf.logging.INFO) dataset = cifar10.get_split('train', cifar10_data_dir) images, _, labels = load_batch(dataset) # Create the model: logits =cnn_elu(images, num_classes=dataset.num_classes, is_training=True) # Specify the loss function: one_hot_labels = slim.one_hot_encoding(labels, dataset.num_classes) slim.losses.softmax_cross_entropy(logits, one_hot_labels) total_loss = slim.losses.get_total_loss() # Create some summaries to visualize the training process: tf.summary.scalar('losses/Total Loss', total_loss) # Specify the optimizer and create the train op: optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) train_op = slim.learning.create_train_op(total_loss, optimizer) # Run the training: final_loss = slim.learning.train( train_op, logdir=train_dir, number_of_steps=step, log_every_n_steps=10, save_interval_secs=100, save_summaries_secs=100) print('Finished training. Final batch loss %d' % final_loss) %%time # This might take a few minutes. with tf.Graph().as_default(): tf.logging.set_verbosity(tf.logging.DEBUG) dataset = cifar10.get_split('test', cifar10_data_dir) images, _, labels = load_batch(dataset) logits = cnn_elu(images, num_classes=dataset.num_classes, is_training=False) predictions = tf.argmax(logits, 1) # Define the metrics: names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({ 'eval/mse':slim.metrics.streaming_mean_squared_error(predictions, labels), 'eval/Accuracy': slim.metrics.streaming_accuracy(predictions, labels), 'eval/TruePositives': slim.metrics.streaming_true_positives(predictions, labels), 'eval/TrueNegatives': slim.metrics.streaming_true_negatives(predictions, labels), 'eval/FalsePositives': slim.metrics.streaming_false_positives(predictions, labels), 'eval/FalseNegatives': slim.metrics.streaming_false_negatives(predictions, labels), 'eval/Recall5': slim.metrics.streaming_sparse_recall_at_k(logits, labels, 5), }) print('Running evaluation Loop...') checkpoint_path = tf.train.latest_checkpoint(train_dir) metric_values = slim.evaluation.evaluate_once( master='', checkpoint_path=checkpoint_path, logdir=train_dir, eval_op=list(names_to_updates.values()), final_op=list(names_to_values.values()) ) names_to_values = dict(zip(names_to_values.keys(), metric_values)) for name in names_to_values: print('%s: %f' % (name, names_to_values[name])) %%time batch_size = 10 with tf.Graph().as_default(): tf.logging.set_verbosity(tf.logging.INFO) dataset = cifar10.get_split('test',cifar10_data_dir) images, images_raw, labels = load_batch(dataset, height=image_size, width=image_size) # Create the model, use the default arg scope to configure the batch norm parameters. logits = cnn_elu(images, num_classes=dataset.num_classes, is_training=True) probabilities = tf.nn.softmax(logits) checkpoint_path = tf.train.latest_checkpoint(train_dir) init_fn = slim.assign_from_checkpoint_fn( checkpoint_path, slim.get_variables_to_restore()) with tf.Session() as sess: with slim.queues.QueueRunners(sess): sess.run(tf.local_variables_initializer()) init_fn(sess) np_probabilities, np_images_raw, np_labels = sess.run([probabilities, images_raw, labels]) for i in range(batch_size): image = np_images_raw[i, :, :, :] true_label = np_labels[i] predicted_label = np.argmax(np_probabilities[i, :]) predicted_name = dataset.labels_to_names[predicted_label] true_name = dataset.labels_to_names[true_label] plt.figure() plt.imshow(image.astype(np.uint8)) plt.title('Ground Truth: [%s], Prediction [%s]' % (true_name, predicted_name)) plt.axis('off') plt.show() ```
github_jupyter