---

_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-machine-learning/resources/bANLa) course resource._

---

## Assignment 4 - Understanding and Predicting Property Maintenance Fines

This assignment is based on a data challenge from the Michigan Data Science Team ([MDST](http://midas.umich.edu/mdst/)), which, together with the Michigan Student Symposium for Interdisciplinary Statistical Sciences ([MSSISS](https://sites.lsa.umich.edu/mssiss/)), has partnered with the City of Detroit to help solve one of the most pressing problems facing Detroit - blight. [Blight violations](http://www.detroitmi.gov/How-Do-I/Report/Blight-Complaint-FAQs) are issued by the city to individuals who allow their properties to remain in a deteriorated condition. Every year, the city of Detroit issues millions of dollars in fines to residents, and every year many of these fines remain unpaid. Enforcing unpaid blight fines is a costly and tedious process, so the city wants to know: how can we increase blight ticket compliance?

The first step in answering this question is understanding when and why a resident might fail to comply with a blight ticket. This is where predictive modeling comes in. For this assignment, your task is to predict whether a given blight ticket will be paid on time.

All data for this assignment has been provided through the [Detroit Open Data Portal](https://data.detroitmi.gov/). **Only the data already included in your Coursera directory can be used for training the model for this assignment.** Nonetheless, we encourage you to look into data from other Detroit datasets to help inform feature creation and model selection.
We recommend taking a look at the following related datasets:

* [Building Permits](https://data.detroitmi.gov/Property-Parcels/Building-Permits/xw2a-a7tf)
* [Trades Permits](https://data.detroitmi.gov/Property-Parcels/Trades-Permits/635b-dsgv)
* [Improve Detroit: Submitted Issues](https://data.detroitmi.gov/Government/Improve-Detroit-Submitted-Issues/fwz3-w3yn)
* [DPD: Citizen Complaints](https://data.detroitmi.gov/Public-Safety/DPD-Citizen-Complaints-2016/kahe-efs3)
* [Parcel Map](https://data.detroitmi.gov/Property-Parcels/Parcel-Map/fxkw-udwf)

___

We provide you with two data files for use in training and validating your models: train.csv and test.csv. Each row in these two files corresponds to a single blight ticket, and includes information about when, why, and to whom each ticket was issued. The target variable is compliance, which is True if the ticket was paid early, on time, or within one month of the hearing date, False if the ticket was paid after the hearing date or not at all, and Null if the violator was found not responsible. Compliance, as well as a handful of other variables that will not be available at test-time, are only included in train.csv.

Note: All tickets where the violators were found not responsible are not considered during evaluation. They are included in the training set as an additional source of data for visualization, and to enable unsupervised and semi-supervised approaches. However, they are not included in the test set.

<br>

**File descriptions** (Use only this data for training your model!)

    readonly/train.csv - the training set (all tickets issued 2004-2011)
    readonly/test.csv - the test set (all tickets issued 2012-2016)
    readonly/addresses.csv & readonly/latlons.csv - mapping from ticket id to addresses, and from addresses to lat/lon coordinates.
     Note: misspelled addresses may be incorrectly geolocated.
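To attach coordinates to each ticket, the two mapping files can be chained with two merges. A minimal sketch using small made-up stand-in frames (the column names `ticket_id`, `address`, `lat`, `lon` are assumed to match the course files; the rows here are invented for illustration):

```python
import pandas as pd

# Toy stand-ins for readonly/addresses.csv and readonly/latlons.csv
addresses = pd.DataFrame({'ticket_id': [284932, 285362],
                          'address': ['2900 tyler, Detroit MI', '4311 central, Detroit MI']})
latlons = pd.DataFrame({'address': ['2900 tyler, Detroit MI', '4311 central, Detroit MI'],
                        'lat': [42.391, 42.326], 'lon': [-83.124, -83.121]})

# Chain the two mappings: ticket_id -> address -> (lat, lon)
geo = addresses.merge(latlons, on='address', how='left')
print(geo[['ticket_id', 'lat', 'lon']])
```

A left merge keeps every ticket even when an address fails to geolocate (misspelled addresses then simply get NaN coordinates).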
<br>

**Data fields**

train.csv & test.csv

    ticket_id - unique identifier for tickets
    agency_name - Agency that issued the ticket
    inspector_name - Name of inspector that issued the ticket
    violator_name - Name of the person/organization that the ticket was issued to
    violation_street_number, violation_street_name, violation_zip_code - Address where the violation occurred
    mailing_address_str_number, mailing_address_str_name, city, state, zip_code, non_us_str_code, country - Mailing address of the violator
    ticket_issued_date - Date and time the ticket was issued
    hearing_date - Date and time the violator's hearing was scheduled
    violation_code, violation_description - Type of violation
    disposition - Judgment and judgment type
    fine_amount - Violation fine amount, excluding fees
    admin_fee - $20 fee assigned to responsible judgments
    state_fee - $10 fee assigned to responsible judgments
    late_fee - 10% fee assigned to responsible judgments
    discount_amount - discount applied, if any
    clean_up_cost - DPW clean-up or graffiti removal cost
    judgment_amount - Sum of all fines and fees
    grafitti_status - Flag for graffiti violations

train.csv only

    payment_amount - Amount paid, if any
    payment_date - Date payment was made, if it was received
    payment_status - Current payment status as of Feb 1 2017
    balance_due - Fines and fees still owed
    collection_status - Flag for payments in collections
    compliance [target variable for prediction]
     Null = Not responsible
     0 = Responsible, non-compliant
     1 = Responsible, compliant
    compliance_detail - More information on why each ticket was marked compliant or non-compliant

___

## Evaluation

Your predictions will be given as the probability that the corresponding blight ticket will be paid on time.

The evaluation metric for this assignment is the Area Under the ROC Curve (AUC). Your grade will be based on the AUC score computed for your classifier. A model with an AUROC of 0.7 passes this assignment; over 0.75 will receive full points.
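Before submitting, it helps to estimate AUC locally on a held-out slice of train.csv. A minimal sketch with scikit-learn's `roc_auc_score`; the label and score arrays here are made up for illustration, not taken from the assignment data:

```python
from sklearn.metrics import roc_auc_score

# Hypothetical held-out labels and predicted compliance probabilities
y_true = [0, 0, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]

# AUC is the probability that a random positive is ranked above a random negative
auc = roc_auc_score(y_true, y_score)
print(round(auc, 3))
```

Note that AUC is computed from the predicted probabilities, not from hard 0/1 predictions, which is why the function receives scores rather than class labels.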
___

For this assignment, create a function that trains a model to predict blight ticket compliance in Detroit using `readonly/train.csv`. Using this model, return a series of length 61001 with the data being the probability that each corresponding ticket from `readonly/test.csv` will be paid, and the index being the ticket_id.

Example:

    ticket_id
       284932    0.531842
       285362    0.401958
       285361    0.105928
       285338    0.018572
                 ...
       376499    0.208567
       376500    0.818759
       369851    0.018528
       Name: compliance, dtype: float32

### Hints

* Make sure your code is working before submitting it to the autograder.
* Print out your result to see whether there is anything weird (e.g., all probabilities are the same).
* Generally the total runtime should be less than 10 mins. You should NOT use Neural Network related classifiers (e.g., MLPClassifier) in this question.
* Try to avoid global variables. If you have other functions besides blight_model, you should move those functions inside the scope of blight_model.
* Refer to the pinned threads in Week 4's discussion forum when there is something you cannot figure out.
```
import pandas as pd
import numpy as np

def blight_model():
    from sklearn.ensemble import RandomForestClassifier

    # Load the training data; drop rows where the target is Null
    # (violator found not responsible, excluded from evaluation)
    df = pd.read_csv('readonly/train.csv', encoding='ISO-8859-1')
    df = df[df['compliance'].notnull()]
    # Drop train-only columns that leak information unavailable at test time
    df = df.drop(['payment_amount', 'payment_date', 'payment_status', 'balance_due',
                  'collection_status', 'compliance_detail'], axis=1)
    df.dropna(axis=1, inplace=True)
    # Keep a small set of numeric features plus the target
    df = df[['discount_amount', 'fine_amount', 'judgment_amount', 'late_fee', 'compliance']]
    y_train = df['compliance']
    X_train = df.drop('compliance', axis=1)

    # Load the test data and keep the same feature columns
    df = pd.read_csv('readonly/test.csv', encoding='ISO-8859-1')
    df.dropna(axis=1, inplace=True)
    ticket_ids = list(df['ticket_id'])
    X_test = df[['discount_amount', 'fine_amount', 'judgment_amount', 'late_fee']]

    clf = RandomForestClassifier().fit(X_train, y_train)
    # predict_proba returns [P(class 0), P(class 1)] per row; keep P(compliant)
    res = clf.predict_proba(X_test)
    res = pd.Series(res[:, 1], index=ticket_ids, name='compliance')
    return res

blight_model()
```
<a href="https://colab.research.google.com/github/cxbxmxcx/EvolutionaryDeepLearning/blob/main/EDL_7_3_Crossover_CNN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

```
#@title Install Packages
!pip install livelossplot --quiet

#@title Imports
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import numpy as np
import math
import time
import random
import matplotlib.pyplot as plt
from livelossplot import PlotLossesKeras

#@title Load Data
dataset = datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = dataset.load_data()

# normalize and reshape data
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1).astype("float32") / 255.0
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1).astype("float32") / 255.0

x_train = x_train[:1000]
y_train = y_train[:1000]
x_test = x_test[:100]
y_test = y_test[:100]

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

def plot_data(num_images, images, labels):
    grid = math.ceil(math.sqrt(num_images))
    plt.figure(figsize=(grid*2, grid*2))
    for i in range(num_images):
        plt.subplot(grid, grid, i+1)
        plt.xticks([])
        plt.yticks([])
        plt.grid(False)
        plt.imshow(images[i].reshape(28, 28))
        plt.xlabel(class_names[labels[i]])
    plt.show()

plot_data(25, x_train, y_train)

#@title Constants
max_layers = 5
max_neurons = 128
min_neurons = 16
max_kernel = 5
min_kernel = 2
max_pool = 3
min_pool = 2

CONV_LAYER = -1
CONV_LAYER_LEN = 4
POOLING_LAYER = -2
POOLING_LAYER_LEN = 3
BN_LAYER = -3
BN_LAYER_LEN = 1
DENSE_LAYER = -4
DENSE_LAYER_LEN = 2

#@title Encoding scheme
def generate_neurons():
    return random.randint(min_neurons, max_neurons)

def generate_kernel():
    part = []
    part.append(random.randint(min_kernel, max_kernel))
    part.append(random.randint(min_kernel, max_kernel))
    return part

def generate_bn_layer():
    part = [BN_LAYER]
    return part

def generate_pooling_layer():
    part = [POOLING_LAYER]
    part.append(random.randint(min_pool, max_pool))
    part.append(random.randint(min_pool, max_pool))
    return part

def generate_dense_layer():
    part = [DENSE_LAYER]
    part.append(generate_neurons())
    return part

def generate_conv_layer():
    part = [CONV_LAYER]
    part.append(generate_neurons())
    part.extend(generate_kernel())
    return part

def create_offspring():
    ind = []
    for i in range(max_layers):
        if random.uniform(0, 1) < .5:
            # add convolution layer
            ind.extend(generate_conv_layer())
        if random.uniform(0, 1) < .5:
            # add batch normalization layer
            ind.extend(generate_bn_layer())
        if random.uniform(0, 1) < .5:
            # add max pooling layer
            ind.extend(generate_pooling_layer())
    ind.extend(generate_dense_layer())
    return ind

individual = create_offspring()
print(individual)

def build_model(individual):
    model = models.Sequential()
    il = len(individual)
    i = 0
    while i < il:
        if individual[i] == CONV_LAYER:
            n = individual[i+1]
            k = (individual[i+2], individual[i+3])
            # check BEFORE advancing i, otherwise the first-layer branch never runs
            if i == 0:  # first gene: give the layer the input shape
                model.add(layers.Conv2D(n, k, activation='relu', padding="same",
                                        input_shape=(28, 28, 1)))
            else:
                model.add(layers.Conv2D(n, k, activation='relu', padding="same"))
            i += CONV_LAYER_LEN
        elif individual[i] == POOLING_LAYER:
            # add pooling layer
            k = (individual[i+1], individual[i+2])
            i += POOLING_LAYER_LEN
            model.add(layers.MaxPooling2D(k, padding="same"))
        elif individual[i] == BN_LAYER:
            # add batch normalization layer
            model.add(layers.BatchNormalization())
            i += 1
        elif individual[i] == DENSE_LAYER:
            # add dense layer
            model.add(layers.Flatten())
            model.add(layers.Dense(individual[i+1], activation='relu'))
            i += 2
    model.add(layers.Dense(10))
    return model

model = build_model(individual)

#@title Custom Crossover Operator
def get_layers(ind, layer_type):
    # indices of every gene that starts a layer of the given type
    return [a for a in range(len(ind)) if ind[a] == layer_type]

def swap(ind1, iv1, ind2, iv2, ll):
    # exchange the length-ll gene segments starting at iv1 and iv2
    ch1 = ind1[iv1:iv1+ll]
    ch2 = ind2[iv2:iv2+ll]
    print(ll, iv1, ch1, iv2, ch2)
    ind1[iv1:iv1+ll] = ch2
    ind2[iv2:iv2+ll] = ch1
    return ind1, ind2

def swap_layers(ind1, ind2, layer_type, layer_len):
    c1, c2 = get_layers(ind1, layer_type), get_layers(ind2, layer_type)
    min_c = min(len(c1), len(c2))
    for i in range(min_c):
        if random.random() < 1:  # always true; use a probability below 1 for stochastic swaps
            i1 = random.randint(0, len(c1)-1)
            i2 = random.randint(0, len(c2)-1)
            iv1 = c1.pop(i1)
            iv2 = c2.pop(i2)
            ind1, ind2 = swap(ind1, iv1, ind2, iv2, layer_len)
    return ind1, ind2

def crossover(ind1, ind2):
    ind1, ind2 = swap_layers(ind1, ind2, CONV_LAYER, CONV_LAYER_LEN)
    ind1, ind2 = swap_layers(ind1, ind2, POOLING_LAYER, POOLING_LAYER_LEN)
    ind1, ind2 = swap_layers(ind1, ind2, BN_LAYER, BN_LAYER_LEN)
    ind1, ind2 = swap_layers(ind1, ind2, DENSE_LAYER, DENSE_LAYER_LEN)
    return ind1, ind2

ind1 = create_offspring()
ind2 = create_offspring()
print(ind1)
print(ind2)

ind1, ind2 = crossover(ind1, ind2)
print(ind1)
print(ind2)

model = build_model(ind2)
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

history = model.fit(x_train, y_train, epochs=3,
                    validation_data=(x_test, y_test),
                    callbacks=[PlotLossesKeras()],
                    verbose=0)

model.summary()
model.evaluate(x_test, y_test)
```
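The crossover operator swaps whole layer-gene segments between two flat genomes. The segment-swap idea can be sketched in plain Python on hand-made genomes; the marker value -1 and segment length 4 mirror the conv-layer encoding above, but the genomes and helper names here are made up for illustration:

```python
CONV, LEN = -1, 4  # marker and gene-segment length, as in the conv encoding above

def segment_starts(ind, marker):
    # indices where a segment of the given type begins
    return [i for i, g in enumerate(ind) if g == marker]

def swap_first_segments(a, b, marker, length):
    # exchange the first matching segment between the two genomes
    ia, ib = segment_starts(a, marker)[0], segment_starts(b, marker)[0]
    a[ia:ia+length], b[ib:ib+length] = b[ib:ib+length], a[ia:ia+length]
    return a, b

g1 = [CONV, 32, 3, 3, -4, 64]    # one conv segment, then a dense segment
g2 = [CONV, 16, 5, 5, -4, 128]
g1, g2 = swap_first_segments(g1, g2, CONV, LEN)
print(g1)  # the conv segment now comes from g2
```

Swapping whole segments rather than individual genes keeps each offspring a valid genome: a conv gene's neuron count and kernel sizes always travel together.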
## Solar Radiation Prediction

> meteorological data from the HI-SEAS weather station from four months (September through December 2016) between Mission IV and Mission V.

Units:

* Solar radiation: watts per meter^2
* Temperature: degrees Fahrenheit
* Humidity: percent
* Barometric pressure: Hg
* Wind direction: degrees
* Wind speed: miles per hour
* Sunrise/sunset: Hawaii time

### Useful imports and read the data

```
import time
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from datetime import datetime
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestRegressor

# Read the data (the 'Data' column holds the date)
df = pd.read_csv('data/SolarPrediction.csv', parse_dates=['Data'])
df.shape

# Check data format
df.head()
df.describe()
```

### Feature Engineering

```
# Convert all dates and times to unix timestamp (timezone doesn't matter now)
df['Data'] = df['Data'].dt.date.astype(str)
df['TimeSunRise'] = df['Data'] + ' ' + df['TimeSunRise']
df['TimeSunSet'] = df['Data'] + ' ' + df['TimeSunSet']
df['Data'] = df['Data'] + ' ' + df['Time']

# Convert to Unix timestamp
fields = ['Data', 'TimeSunRise', 'TimeSunSet']
for x in fields:
    df[x + '_UnixTimeStamp'] = df[x].apply(
        lambda k: int(datetime.strptime(k, "%Y-%m-%d %H:%M:%S").timestamp())
    )

# New sun time field
df['SunTime'] = df['TimeSunSet_UnixTimeStamp'] - df['TimeSunRise_UnixTimeStamp']

# Drop old columns
df.drop('UNIXTime', axis=1, inplace=True)
df.drop('Data', axis=1, inplace=True)
df.drop('Time', axis=1, inplace=True)
df.drop('TimeSunRise', axis=1, inplace=True)
df.drop('TimeSunSet', axis=1, inplace=True)

# Plot head of dataset
df.head()
```

### Visualization

```
# Plots sorted by time
df.sort_values(by=['Data_UnixTimeStamp'], ascending=[True], inplace=True)

# Radiation and Temperature for the first 2 days
f, axarr = plt.subplots(ncols=2, sharex=True)
axarr[0].set_title('Radiation for the first 2 days')
axarr[1].set_title('Temperature for the first 2 days')
# lineplot replaces the removed seaborn tsplot
sns.lineplot(data=df['Radiation'][0:600].reset_index(drop=True), ax=axarr[0])
sns.lineplot(data=df['Temperature'][0:600].reset_index(drop=True), ax=axarr[1], color="r")
plt.show()

# Plot the sun time
ax = sns.lineplot(data=df['SunTime'].reset_index(drop=True))
plt.title('Winter is coming')
plt.show()

# Heatmap of the dataset
heat_df = df.drop(['Data_UnixTimeStamp', 'TimeSunRise_UnixTimeStamp', 'TimeSunSet_UnixTimeStamp'], axis=1)
sns.set(context="paper", font="monospace")

# Get the correlation
corrmat = heat_df.corr()

# Set up the matplotlib figure
f, ax = plt.subplots()

# Draw the heatmap using seaborn
sns.heatmap(corrmat, vmax=.8, square=True)
plt.show()
```

### Model train

```
# Create the K-folds
k_folds = 5
kf = KFold(n_splits=k_folds, shuffle=True)

# Prepare dataset (.values replaces the removed DataFrame.as_matrix())
X = df.drop(['Radiation', 'Data_UnixTimeStamp', 'TimeSunRise_UnixTimeStamp', 'TimeSunSet_UnixTimeStamp'], axis=1).values
y = df['Radiation'].values
```

#### Random Forests

```
score = []
for train_idx, test_idx in kf.split(X):
    # Separate training and test folds within the training set
    fold_Xtrain, fold_Xtest = X[train_idx], X[test_idx]
    fold_ytrain, fold_ytest = y[train_idx], y[test_idx]

    # Random forest regressor
    regr_rf = RandomForestRegressor(max_depth=25, random_state=2)
    regr_rf.fit(fold_Xtrain, fold_ytrain)

    # Test on new data
    score_kf = regr_rf.score(fold_Xtest, fold_ytest)
    score.append(score_kf)
    print("RF score=%.2f" % score_kf)

    # Plot results
    #pred = regr_rf.predict(fold_Xtest)
    #plt.plot(fold_ytest[0:100], color="b")
    #plt.plot(pred[0:100], color="r")
    #plt.show()

print("Mean score: %.2f" % np.mean(score))
print("Std score: %.2f" % np.std(score))
```

#### LSTM

```
#@TODO NEXT
```
# USA_Housing - Linear Regression

We will create a model that takes in a few features of a house and returns an estimate of what the house would sell for. We are using the data set USA_Housing.csv.

The data contains the following columns:

* 'Avg. Area Income': Avg. income of residents of the city the house is located in.
* 'Avg. Area House Age': Avg age of houses in same city
* 'Avg. Area Number of Rooms': Avg number of rooms for houses in same city
* 'Avg. Area Number of Bedrooms': Avg number of bedrooms for houses in same city
* 'Area Population': Population of city the house is located in
* 'Price': Price that the house sold at
* 'Address': Address for the house

**Let's get started!**<br/>
Let's get our environment ready with the libraries we'll need.

# Import Libraries

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```

# Check out the data

Import the data!

```
USAhousing = pd.read_csv('USA_Housing.csv')
USAhousing.head()
USAhousing.info()
USAhousing.describe()
USAhousing.columns
```

# EDA

Let's create some simple plots to check out the data!

```
sns.pairplot(USAhousing)
sns.distplot(USAhousing['Price'])
sns.heatmap(USAhousing.corr())
```

# Training a Linear Regression Model

Let's now begin to train our regression model! We will need to first split up our data into an X array that contains the features to train on, and a y array with the target variable, in this case the Price column. We will toss out the Address column because it only has text info that the linear regression model can't use.

### X and y arrays

```
X = USAhousing[['Avg. Area Income', 'Avg. Area House Age', 'Avg. Area Number of Rooms',
                'Avg. Area Number of Bedrooms', 'Area Population']]
y = USAhousing['Price']
```

# Train Test Split

Now let's split the data into a training set and a testing set. We will train our model on the training set and then use the test set to evaluate the model.
```
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=101)
```

# Creating and Training the Model

```
from sklearn.linear_model import LinearRegression

lm = LinearRegression()
lm.fit(X_train, y_train)
```

# Model Evaluation

Let's evaluate the model by checking out its coefficients and how we can interpret them.

```
# print the intercept
print(lm.intercept_)

coeff_df = pd.DataFrame(lm.coef_, X.columns, columns=['Coefficient'])
coeff_df
```

Interpreting the coefficients:

- Holding all other features fixed, a 1 unit increase in **Avg. Area Income** is associated with an **increase of \$21.52**.
- Holding all other features fixed, a 1 unit increase in **Avg. Area House Age** is associated with an **increase of \$164883.28**.
- Holding all other features fixed, a 1 unit increase in **Avg. Area Number of Rooms** is associated with an **increase of \$122368.67**.
- Holding all other features fixed, a 1 unit increase in **Avg. Area Number of Bedrooms** is associated with an **increase of \$2233.80**.
- Holding all other features fixed, a 1 unit increase in **Area Population** is associated with an **increase of \$15.15**.

# Predictions from our Model

Let's grab predictions off our test set and see how well it did!
```
predictions = lm.predict(X_test)
plt.scatter(y_test, predictions)
```

**Residual Histogram**

```
sns.distplot((y_test - predictions), bins=50);
```

# Regression Evaluation Metrics

Here are three common evaluation metrics for regression problems:

**Mean Absolute Error** (MAE) is the mean of the absolute value of the errors:

$$\frac 1n\sum_{i=1}^n|y_i-\hat{y}_i|$$

**Mean Squared Error** (MSE) is the mean of the squared errors:

$$\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2$$

**Root Mean Squared Error** (RMSE) is the square root of the mean of the squared errors:

$$\sqrt{\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2}$$

Comparing these metrics:

- **MAE** is the easiest to understand, because it's the average error.
- **MSE** is more popular than MAE, because MSE "punishes" larger errors, which tends to be useful in the real world.
- **RMSE** is even more popular than MSE, because RMSE is interpretable in the "y" units.

All of these are **loss functions**, because we want to minimize them.

```
from sklearn import metrics

print('MAE:', metrics.mean_absolute_error(y_test, predictions))
print('MSE:', metrics.mean_squared_error(y_test, predictions))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions)))
```

**My first real Machine Learning Project! Done!!!**
# Using Nipype with Amazon Web Services (AWS)

Several groups have been successfully using Nipype on AWS. This procedure involves setting up a temporary cluster using StarCluster and potentially transferring files to/from S3. The latter is supported by Nipype through `DataSink` and `S3DataGrabber`.

## Using DataSink with S3

The `DataSink` class now supports sending output data directly to an AWS S3 bucket. It does this through the introduction of several input attributes to the `DataSink` interface and by parsing the `base_directory` attribute. This class uses the [boto3](https://boto3.readthedocs.org/en/latest/) and [botocore](https://botocore.readthedocs.org/en/latest/) Python packages to interact with AWS. To configure the `DataSink` to write data to S3, the user must set the ``base_directory`` property to an S3-style filepath. For example:

```
from nipype.interfaces.io import DataSink

ds = DataSink()
ds.inputs.base_directory = 's3://mybucket/path/to/output/dir'
```

With the `"s3://"` prefix in the path, the `DataSink` knows that the output directory to send files is on S3 in the bucket `"mybucket"`. `"path/to/output/dir"` is the relative directory path within the bucket `"mybucket"` where output data will be uploaded to (***Note***: if the relative path specified contains folders that don't exist in the bucket, the `DataSink` will create them). The `DataSink` treats the S3 base directory exactly as it would a local directory, maintaining support for containers, substitutions, subfolders, `"."` notation, etc. to route output data appropriately.

There are four new attributes introduced with S3-compatibility: ``creds_path``, ``encrypt_bucket_keys``, ``local_copy``, and ``bucket``.

```
ds.inputs.creds_path = '/home/neuro/aws_creds/credentials.csv'
ds.inputs.encrypt_bucket_keys = True
ds.local_copy = '/home/neuro/workflow_outputs/local_backup'
```

``creds_path`` is a file path where the user's AWS credentials file (typically a csv) is stored.
This credentials file should contain the AWS access key id and secret access key and should be formatted as one of the following (these formats are how Amazon provides the credentials file by default when first downloaded).

Root-account user:

    AWSAccessKeyID=ABCDEFGHIJKLMNOP
    AWSSecretKey=zyx123wvu456/ABC890+gHiJk

IAM-user:

    User Name,Access Key Id,Secret Access Key
    "username",ABCDEFGHIJKLMNOP,zyx123wvu456/ABC890+gHiJk

The ``creds_path`` is necessary when writing files to a bucket that has restricted access (almost no buckets are publicly writable). If ``creds_path`` is not specified, the `DataSink` will check the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY`` environment variables and use those values for bucket access.

``encrypt_bucket_keys`` is a boolean flag that indicates whether to encrypt the output data on S3, using server-side AES-256 encryption. This is useful if the data being output is sensitive and one desires an extra layer of security on the data. By default, this is turned off.

``local_copy`` is a string of the filepath where local copies of the output data are stored in addition to those sent to S3. This is useful if one wants to keep a backup version of the data stored on their local computer. By default, this is turned off.

``bucket`` is a boto3 Bucket object that the user can use to overwrite the bucket specified in their ``base_directory``. This can be useful if one has to manually create a bucket instance on their own using special credentials (or using a mock server like [fakes3](https://github.com/jubos/fake-s3)). This is typically used by developers unit-testing the `DataSink` class. Most users do not need to use this attribute for actual workflows. This is an optional argument.

Finally, the user needs only to specify the input attributes for any incoming data to the node, and the outputs will be written to their S3 bucket.
```python
workflow.connect(inputnode, 'subject_id', ds, 'container')
workflow.connect(realigner, 'realigned_files', ds, 'motion')
```

So, for example, outputs for `sub001`'s `realigned_file1.nii.gz` will be in:

    s3://mybucket/path/to/output/dir/sub001/motion/realigned_file1.nii.gz

## Using S3DataGrabber

Coming soon...
# Improving performance

```
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt

# Load the data
df = pd.read_csv('../data/new_titanic_features.csv')

# Create Features and Labels
X = df[['Male', 'Family', 'Pclass2_one', 'Pclass2_two', 'Pclass2_three',
        'Embarked_C', 'Embarked_Q', 'Embarked_S', 'Age2',
        'Fare3_Fare11to50', 'Fare3_Fare51+', 'Fare3_Fare<=10']]
y = df['Survived']

X.describe()

from sklearn.model_selection import train_test_split

# Train test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2, random_state=0)

from sklearn.linear_model import LogisticRegression

model = LogisticRegression()
model.fit(X_train, y_train)

pred_train = model.predict(X_train)
pred_test = model.predict(X_test)

from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

print('Train Accuracy: {:0.3}'.format(accuracy_score(y_train, pred_train)))
print('Test Accuracy: {:0.3}'.format(accuracy_score(y_test, pred_test)))

confusion_matrix(y_test, pred_test)
print(classification_report(y_test, pred_test))
```

## Feature importances (wrong! see exercise 1)

```
coeffs = pd.Series(model.coef_.ravel(), index=X.columns)
coeffs
coeffs.plot(kind='barh')
```

## Cross Validation

```
from sklearn.model_selection import cross_val_score, ShuffleSplit

cv = ShuffleSplit(n_splits=5, test_size=.4, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
scores
'Crossval score: %0.3f +/- %0.3f ' % (scores.mean(), scores.std())
```

## Learning curve

```
from sklearn.model_selection import learning_curve

tsz = np.linspace(0.1, 1, 10)
train_sizes, train_scores, test_scores = learning_curve(model, X, y, train_sizes=tsz)

fig = plt.figure()
plt.plot(train_sizes, train_scores.mean(axis=1), 'ro-', label="Train Scores")
plt.plot(train_sizes, test_scores.mean(axis=1), 'go-', label="Test Scores")
plt.title('Learning Curve: Logistic Regression')
plt.ylim((0.5, 1.0))
plt.legend()
plt.draw()
plt.show()
```

### Exercise 1

Try rescaling the Age feature with [`preprocessing.StandardScaler`](http://scikit-learn.org/stable/modules/preprocessing.html) so that it will have comparable size to the other features.

- Do the model predictions change?
- Does the performance of the model change?
- Do the feature importances change?
- How can you explain what you've observed?

### Exercise 2

Experiment with another classifier, for example `DecisionTreeClassifier`, `RandomForestClassifier`, `SVC`, `MLPClassifier`, `SGDClassifier` or any other classifier of choice you can find here: http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html.

- Train the model on both the scaled data and on the unscaled data
- Compare the score for the scaled and unscaled data
- How can you get the feature importances for tree based models? Check [here](http://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_importances.html) for some help.
- Which classifiers are impacted by the age rescale? Why?

### Exercise 3

Pick your preferred classifier from Exercise 2 and search for the best hyperparameters.
You can read about hyperparameter search [here](http://scikit-learn.org/stable/modules/grid_search.html).

- Decide the range of hyperparameters you intend to explore
- Try using [`GridSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV) to perform a brute force search
- Try using [`RandomizedSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html#sklearn.model_selection.RandomizedSearchCV) for a random search
- Once you've chosen the best classifier and the best hyperparameter set, redo the learning curve. Do you need more data or a better model?
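The grid search workflow can be sketched on a synthetic dataset; the `make_classification` data and the parameter grid below are illustrative placeholders, not the Titanic features used in this notebook:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the feature matrix
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

# Small illustrative hyperparameter grid
param_grid = {'max_depth': [2, 4, 8], 'min_samples_leaf': [1, 5]}

# 5-fold cross-validated exhaustive search over the grid
search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)
print(round(search.best_score_, 3))
```

`GridSearchCV` refits the best estimator on the full data by default, so `search` itself can then be used like a fitted classifier.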
### A. Taylan Cemgil
### Boğaziçi University, Dept. of Computer Engineering

The animations in this notebook need vpython (http://vpython.org/) installed on your system. For some peculiar reason, the kernel needs to be restarted after every run.

# Ball jumping around a circle

<img src="images/circular-robot.png" width="500" align="right">

A toy example for localization using a Hidden Markov Model. The ball stays put with probability $\pi_0$, jumps forward with probability $\pi_1$, and is 'kidnapped' with a small probability $\pi_2$. At each step, we observe the color of the current tile, and the filtering density over the possible locations is shown.

```
from __future__ import division, print_function
#from math import *
import vpython as vp
import numpy as np
import matplotlib.pylab as plt
import matplotlib as mpl
from itertools import product

# Simulation parameters
RATE = 150
dt = 1./RATE
g_earth = 19.8
T_period = 0.5
showFilteringDensity = True

# Set the scene
# Floor
FloorLenght = 7
FloorHeight = 7
floorYPos = -1.5
floor = vp.box(pos=vp.vector(0, floorYPos, 0), length=FloorLenght, height=0.05,
               width=FloorHeight, color=vp.vector(0.4, 0.4, 0.4))

# Number of Tiles
L = 40
tile_radius = 0.1
TileCoordinate = [(2*(np.cos(th)+np.sin(th)), 0, 2*(np.cos(th)-np.sin(th)))
                  for th in np.linspace(0, 2*np.pi, L)]

ColorOfObservation = [vp.color.white, vp.color.blue]
TileColor = []
FilteringDensityCylinder = []
for i in range(L):
    sx, sy, sz = TileCoordinate[i]
    s = vp.vector(sx, sy, sz)
    # Pick a random color for the tile
    c = np.random.choice(range(len(ColorOfObservation)))
    TileColor.append(c)
    vp.cylinder(pos=s, axis=vp.vector(0, -tile_radius, 0),
                color=ColorOfObservation[c], radius=tile_radius)
    if showFilteringDensity:
        s2 = vp.vector(sx, sy+floorYPos-0.1, sz)
        cyl = vp.cylinder(pos=s2, axis=vp.vector(0, 0.4, 0), radius=tile_radius)
        FilteringDensityCylinder.append(cyl)

def Setup_HMM_parameters():
    # Probability of staying on the same tile
    ep = 0.2
    # Probability of making an arbitrary jump
    kidnap = 0.01
    # Probability of correctly observing the tile color
    a = 0.99
    # Set up the transition matrix
    idx = [i for i in range(1, L)] + [0]
    I = np.diag(np.ones(L))
    A = (1-kidnap)*(ep*I + (1-ep)*I[:, idx]) + kidnap*np.ones((L, L))/L
    # Observation matrix and uniform initial predictive density
    C = np.zeros((2, L))
    pred = np.ones(L)/L
    for i in range(L):
        C[0, i] = a*(1 - TileColor[i]) + (1-a)*TileColor[i]
        C[1, i] = a*TileColor[i] + (1-a)*(1 - TileColor[i])
    return A, C, pred

A, C, pred = Setup_HMM_parameters()

Obs_noiseless = vp.sphere(pos=vp.vector(0, floorYPos-0.2, 0), color=vp.color.black, radius=tile_radius)
Obs = vp.sphere(pos=vp.vector(0, floorYPos+0.2, 0), color=vp.color.black, radius=tile_radius)

nf = mpl.colors.Normalize(vmin=0, vmax=0.7, clip=True)
cmap = plt.cm.ScalarMappable(cmap=plt.cm.hot, norm=nf)

lamp = vp.local_light(pos=vp.vector(5, -4, 0), color=vp.color.white)

# Select a random initial state
CurrentState = np.random.choice(range(L))
sx, sy, sz = TileCoordinate[CurrentState]

# Set the initial state of the yellow ball
x0 = vp.vector(sx, sy, sz)
v0 = vp.vector(0, 0, 0)
YellowBall = vp.sphere(pos=x0, radius=tile_radius, color=vp.color.yellow,
                       make_trail=True, interval=1, retain=RATE*T_period)
YellowBall.vel = v0
YellowBall.g = g_earth
YellowBall.T_period = T_period
YellowBall.retain = RATE*YellowBall.T_period
FlightCounter = YellowBall.T_period/dt

while 1:
    vp.rate(RATE)
    YellowBall.pos = YellowBall.pos + YellowBall.vel*dt
    # If past halfway, dim the observation
    if FlightCounter >= YellowBall.T_period/dt/2.:
        Obs.color = vp.color.black
        Obs_noiseless.color = vp.color.black
    # If arrived, show the observation and update the filter
    if FlightCounter >= YellowBall.T_period/dt:
        observation = np.random.choice(range(C.shape[0]), p=C[:, CurrentState])
        o_noiseless = TileColor[CurrentState]
        Obs.color = ColorOfObservation[observation]
        Obs_noiseless.color = ColorOfObservation[o_noiseless]
        NextState = np.random.choice(range(A.shape[0]), p=A[:, CurrentState])
        # Measurement update of the filtering density
        pred = C[observation, :]*pred
        pred = pred/np.sum(pred)
        if showFilteringDensity:
            for k in range(L):
                col = cmap.to_rgba(pred[k])
                vcol = vp.vector(col[0], col[1], col[2])
                FilteringDensityCylinder[k].color = vcol
                FilteringDensityCylinder[k].axis = vp.vector(0, pred[k]+0.15, 0)
        # Prediction update
        pred = A.dot(pred)
        ## Plan the jump
        sx, sy, sz = TileCoordinate[CurrentState]
        tx, ty, tz = TileCoordinate[NextState]
        v_vert = YellowBall.g*YellowBall.T_period/2 + (ty-sy)/YellowBall.T_period
        YellowBall.vel = vp.vector((tx-sx)/YellowBall.T_period, v_vert, (tz-sz)/YellowBall.T_period)
        YellowBall.pos = vp.vector(sx, sy, sz)
        #
        CurrentState = NextState
        FlightCounter = 0
    else:
        YellowBall.vel.y = YellowBall.vel.y - YellowBall.g*dt
        FlightCounter += 1
```

# Proposal Mechanism of Metropolis Hastings

<img src="images/mh-chain.png" width="500" align="right">

This demo illustrates a MH chain on a discrete state space. The size of each tile $x$ is proportional to the target probability $\pi(x)$. The big yellow ball shows the current state $x$ of the chain, and the small gray ball shows the proposed state $x'$ drawn from a proposal density $q(\cdot|x)$. The proposed new state is accepted with probability

$$ \min\left\{1, \frac{\pi(x') q(x|x')}{\pi(x) q(x'|x)} \right\} $$

```
from __future__ import division, print_function
from math import *
import vpython as vp
import numpy as np
import matplotlib.pylab as plt
import matplotlib as mpl
from vpython_utilities import make_grid2D

# Simulation speed
RATE = 300
dt = 1./RATE

W1 = 1
W2 = 6
step = 0.5
n2 = int(W2/step)+1
n1 = int(W1/step)+1

PointList, sub2ind, ind2sub, edges, A = make_grid2D(n2, n1)
Trans = A/np.sum(A, axis=0, keepdims=True)
Trans = Trans.dot(Trans).dot(Trans)

# Target density: heights Y over the grid points
L = len(PointList)
Y = []
for i in range(L):
    p = PointList[i]
    x = p[0]*step - W2/2.
    z = p[2]*step - W1/2.
    E = 2 + np.cos(2*pi*z/3) + np.sin(2*pi*x/5) + np.random.randn()/10.
    y = 2*np.exp(-1.1*E)
    PointList[i] = (x, 0, z)
    Y.append(y)

MAX = 1
MIN = 0
nf = mpl.colors.Normalize(vmin=MIN, vmax=MAX, clip=True)
cmap = plt.cm.ScalarMappable(cmap=plt.cm.cool_r, norm=nf)

#floor = box(pos=vector(0,-0.04,0), length=W1, height=0.05, width=W2, color=color.black)
wd = 0.4
radius = 0.2
maxY = max(Y)
for i in range(L):
    sx, sy, sz = PointList[i]
    s = vp.vector(sx, -radius, sz)
    #vp.sphere(pos=s, color=vp.color.cyan, radius=0.1)
    #vcol = vp.vector(0.9,0.9,0.9)
    col = cmap.to_rgba(Y[i]/maxY)
    vcol = vp.vector(col[0], col[1], col[2])
    #vp.cylinder(pos=s, axis=vp.vector(0,-0.1,0), color=vcol, radius=radius*np.sqrt(nf(sy)))
    wd = step*np.sqrt(Y[i]/maxY)
    vp.box(pos=s, length=wd, height=0.05, width=wd, color=vcol)
    #s = vp.vector(sx,(sy-radius)/2.,sz)
    #vp.box(pos=s,length=wd, height=sy-radius, width=wd, color=vcol )

Cur = []
# Cnt[i] is the number of ticks after a new movement has started of the i'th particle
Cnt = []
B = []
g_earth = 49.8
T_period = 0.25
N = 2
for i in range(N):
    cur = np.random.choice(range(L))
    sx, sy, sz = PointList[cur]
    x0 = vp.vector(sx, sy, sz)
    v0 = vp.vector(0, 0, 0)
    ball = vp.sphere(pos=x0, radius=radius, color=vp.color.yellow,
                     make_trail=True, interval=1, retain=RATE*T_period)
    ball.vel = v0
    ball.g = g_earth
    ball.T_period = T_period
    ball.retain = RATE*ball.T_period
    cnt = ball.T_period/dt
    B.append(ball)
    Cur.append(cur)
    Cnt.append(cnt)

lamp = vp.local_light(pos=vp.vector(0, -1, 0), color=vp.color.yellow)

def selectNextState(cur):
    # Draw a proposal from the column of the transition kernel; lw is the
    # log proposal ratio log q(x|x') - log q(x'|x)
    pr = Trans[:, cur]
    nex = np.random.choice(range(L), p=pr)
    lw = np.log(Trans[cur, nex]) - np.log(Trans[nex, cur])
    return nex, lw

def planJump(ball, curPos, nexPos):
    sx, sy, sz = curPos
    tx, ty, tz = nexPos
    v_vert = ball.g*ball.T_period/2 + (ty-sy)/ball.T_period
    vel = vp.vector((tx-sx)/ball.T_period, v_vert, (tz-sz)/ball.T_period)
    pos = vp.vector(sx, sy, sz)
    return pos, vel

# Particle index of the Chain
pP = 0
# Particle index of the proposal
pQ = 1
B[pQ].make_trail = False
B[pQ].color = vp.vector(0.6, 0.6, 0.6)
B[pQ].radius = radius/2
#
Is proposal ball moving? pQmove = True log_q_ratio = 0 while 1: vp.rate (RATE) B[pQ].pos = B[pQ].pos + B[pQ].vel*dt B[pP].pos = B[pP].pos + B[pP].vel*dt if Cnt[pQ]>= B[pQ].T_period/dt: if pQmove: accept = np.log(np.random.rand()) < log_q_ratio + np.log(Y[Cur[pQ]]) - np.log(Y[Cur[pP]]) if accept: # pP jumps to new location B[pP].g = g_earth pos, vel = planJump(B[pP], PointList[Cur[pP]], PointList[Cur[pQ]]) B[pP].vel = vel B[pP].pos = pos # pQ stays put B[pQ].vel = vp.vector(0,0,0) B[pQ].pos.x, B[pQ].pos.y, B[pQ].pos.z = PointList[Cur[pQ]] B[pQ].g = 0 Cur[pP] = Cur[pQ] else: # pP jumps vertically B[pP].g = g_earth pos, vel = planJump(B[pP], PointList[Cur[pP]], PointList[Cur[pP]]) B[pP].vel = vel B[pP].pos = pos # pQ disappears B[pQ].visible = False B[pQ].g = g_earth/10. pos, vel = planJump(B[pQ], PointList[Cur[pQ]], PointList[Cur[pP]]) B[pQ].vel = vel B[pQ].pos = pos Cur[pQ] = Cur[pP] pQmove = False else: B[pQ].visible = True nex, log_q_ratio = selectNextState(Cur[pP]) # pP stays put B[pP].vel = vp.vector(0,0,0) B[pP].pos.x, B[pP].pos.y, B[pP].pos.z = PointList[Cur[pP]] B[pP].g = 0 # pQ jumps to new location B[pQ].g = g_earth/10. pos, vel = planJump(B[pQ], PointList[Cur[pP]], PointList[nex]) B[pQ].vel = vel B[pQ].pos = pos Cur[pQ] = nex pQmove = True Cnt[pP] = 0 Cnt[pQ] = 0 else: B[pP].vel.y = B[pP].vel.y - B[pP].g*dt B[pQ].vel.y = B[pQ].vel.y - B[pQ].g*dt Cnt[pP] +=1 Cnt[pQ] +=1 ```
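The acceptance rule above can be sketched in a few lines of plain NumPy, independent of the VPython animation. This is a minimal toy illustration (our own 3-state target `pi` and uniform proposal `q`, not the demo's `Trans`/`Y` arrays), assuming an unnormalized target `pi` and a proposal matrix whose columns satisfy `q[:, x] = q(. | x)`:

```python
import numpy as np

def mh_step(x, pi, q, rng):
    """One Metropolis-Hastings transition from state x; returns the next state."""
    xp = rng.choice(len(pi), p=q[:, x])            # propose x' ~ q(. | x)
    # Acceptance ratio: pi(x') q(x | x') / (pi(x) q(x' | x))
    ratio = (pi[xp] * q[x, xp]) / (pi[x] * q[xp, x])
    if rng.random() < min(1.0, ratio):
        return xp                                  # accept the proposal
    return x                                       # reject: stay at x

# Toy target on 3 states with a uniform (hence symmetric) proposal.
pi = np.array([0.2, 0.3, 0.5])
q = np.full((3, 3), 1.0 / 3.0)
rng = np.random.default_rng(0)
x = 0
counts = np.zeros(3)
for _ in range(20000):
    x = mh_step(x, pi, q, rng)
    counts[x] += 1
print(counts / counts.sum())  # empirical frequencies should be close to pi
```

With a symmetric proposal the `q` terms cancel and the rule reduces to the plain Metropolis acceptance `min(1, pi(x')/pi(x))`, which is why the uniform-proposal chain above still targets `pi`.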
# Acme: Tutorial <a href="https://colab.research.google.com/github/deepmind/acme/blob/master/examples/tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> This colab provides an overview of how Acme's modules can be stacked together to create reinforcement learning agents. It shows how to fit networks to environment specs, create actors, learners, replay buffers, datasets, adders, and full agents. It also highlights where you can swap out certain modules to create your own Acme based agents. ## Installation In the first few cells we'll start by installing all of the necessary dependencies (and a few optional ones). ``` #@title Install necessary dependencies. !sudo apt-get install -y xvfb ffmpeg !pip install 'gym==0.10.11' !pip install imageio !pip install PILLOW !pip install 'pyglet==1.3.2' !pip install pyvirtualdisplay !pip install dm-acme !pip install dm-acme[reverb] !pip install dm-acme[tf] !pip install dm-acme[envs] from IPython.display import clear_output clear_output() ``` ### Install dm_control The next cell will install environments provided by `dm_control` _if_ you have an institutional MuJoCo license. This is not necessary, but without this you won't be able to use the `dm_cartpole` environment below and can instead follow this colab using `gym` environments. To do so simply expand the following cell, paste in your license file, and run the cell. Alternatively, Colab supports using a Jupyter kernel on your local machine which can be accomplished by following the guidelines here: https://research.google.com/colaboratory/local-runtimes.html. This will allow you to install `dm_control` by following instructions in https://github.com/deepmind/dm_control and using a personal MuJoCo license. 
``` #@title Add your License #@test {"skip": true} mjkey = """ """.strip() mujoco_dir = "$HOME/.mujoco" # Install OpenGL dependencies !apt-get update && apt-get install -y --no-install-recommends \ libgl1-mesa-glx libosmesa6 libglew2.0 # Get MuJoCo binaries !wget -q https://www.roboti.us/download/mujoco200_linux.zip -O mujoco.zip !unzip -o -q mujoco.zip -d "$mujoco_dir" # Copy over MuJoCo license !echo "$mjkey" > "$mujoco_dir/mjkey.txt" # Install dm_control !pip install dm_control # Configure dm_control to use the OSMesa rendering backend %env MUJOCO_GL=osmesa # Check that the installation succeeded try: from dm_control import suite env = suite.load('cartpole', 'swingup') pixels = env.physics.render() except Exception as e: raise e from RuntimeError( 'Something went wrong during installation. Check the shell output above ' 'for more information.') else: from IPython.display import clear_output clear_output() del suite, env, pixels ``` ## Import Modules Now we can import all the relevant modules. ``` #@title Import modules. #python3 %%capture import copy import pyvirtualdisplay import imageio import base64 import IPython from acme import environment_loop from acme.tf import networks from acme.adders import reverb as adders from acme.agents.tf import actors as actors from acme.datasets import reverb as datasets from acme.wrappers import gym_wrapper from acme import specs from acme import wrappers from acme.agents.tf import d4pg from acme.agents import agent from acme.tf import utils as tf2_utils from acme.utils import loggers import gym import dm_env import matplotlib.pyplot as plt import numpy as np import reverb import sonnet as snt import tensorflow as tf # Set up a virtual display for rendering OpenAI gym environments. display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start() ``` ## Load an environment We can now load an environment. In what follows we'll create an environment in order to generate and visualize a single state from that environment. 
Just select the environment you want to use and run the cell. ``` environment_name = 'gym_mountaincar' # @param ['dm_cartpole', 'gym_mountaincar'] # task_name = 'balance' # @param ['swingup', 'balance'] def make_environment(domain_name='cartpole', task='balance'): from dm_control import suite env = suite.load(domain_name, task) env = wrappers.SinglePrecisionWrapper(env) return env if 'dm_cartpole' in environment_name: environment = make_environment('cartpole') def render(env): return env._physics.render(camera_id=0) #pylint: disable=protected-access elif 'gym_mountaincar' in environment_name: environment = gym_wrapper.GymWrapper(gym.make('MountainCarContinuous-v0')) environment = wrappers.SinglePrecisionWrapper(environment) def render(env): return env.environment.render(mode='rgb_array') else: raise ValueError('Unknown environment: {}.'.format(environment_name)) # Show the frame. frame = render(environment) plt.imshow(frame) plt.axis('off') ``` ### Environment Spec We will later interact with the environment in a loop corresponding to the following diagram: <img src="https://github.com/deepmind/acme/raw/master/docs/diagrams/environment_loop.png" width="500" /> But before we start building an agent to interact with this environment, let's first look at the types of objects the environment either returns (e.g. observations) or consumes (e.g. actions). The `environment_spec` will show you the form of the *observations*, *rewards* and *discounts* that the environment exposes and the form of the *actions* that can be taken. ``` environment_spec = specs.make_environment_spec(environment) print('actions:\n', environment_spec.actions, '\n') print('observations:\n', environment_spec.observations, '\n') print('rewards:\n', environment_spec.rewards, '\n') print('discounts:\n', environment_spec.discounts, '\n') ``` ## Build a policy network that maps observations to actions. 
The most important part of a reinforcement learning algorithm is arguably the policy that maps environment observations to actions. We can use a simple neural network to create a policy, in this case a simple feedforward MLP with layer norm. For our TensorFlow agents we make use of the `sonnet` library to specify networks or modules; all of the networks we will work with also have an initial batch dimension which allows for batched inference/learning. The observations returned by the environment may be nested in some way: e.g. environments from the `dm_control` suite are frequently returned as dictionaries containing `position` and `velocity` entries. Our network is allowed to arbitrarily map this dictionary to produce an action, but in this case we will simply concatenate these observations before feeding them through the MLP. We can do so using Acme's `batch_concat` utility to flatten the nested observation into a single dimension for each batch. If the observation is already flat this will be a no-op. Similarly, the output of the MLP may have a different range of values than the action spec dictates. For this, we can use Acme's `TanhToSpec` module to rescale our actions to meet the spec. ``` # Calculate how big the last layer should be based on total # of actions. action_spec = environment_spec.actions action_size = np.prod(action_spec.shape, dtype=int) exploration_sigma = 0.3 # The following modules, in order: # 1. Flatten the observations to be [B, ...] where B is the batch dimension. # 2. Define a simple MLP which is the guts of this policy. # 3. Make sure the output action matches the spec of the actions. policy_modules = [ tf2_utils.batch_concat, networks.LayerNormMLP(layer_sizes=(300, 200, action_size)), networks.TanhToSpec(spec=environment_spec.actions)] policy_network = snt.Sequential(policy_modules) # We will also create a version of this policy that uses exploratory noise.
behavior_network = snt.Sequential( policy_modules + [networks.ClippedGaussian(exploration_sigma), networks.ClipToSpec(action_spec)]) ``` ## Create an actor An `Actor` is the part of our framework that directly interacts with an environment by generating actions. In more detail the earlier diagram can be expanded to show exactly how this interaction occurs: <img src="https://github.com/deepmind/acme/raw/master/docs/diagrams/actor_loop.png" width="500" /> While you can always write your own actor, in Acme we also provide a number of useful premade versions. For the network we specified above we will make use of a `FeedForwardActor` that wraps a single feed forward network and knows how to do things like handle any batch dimensions or record observed transitions. ``` actor = actors.FeedForwardActor(policy_network) ``` All actors have the following public methods and attributes: ``` [method_or_attr for method_or_attr in dir(actor) # pylint: disable=expression-not-assigned if not method_or_attr.startswith('_')] ``` ## Evaluate the random actor's policy. Although we have instantiated an actor with a policy, the policy has not yet learned to achieve any task reward, and is essentially just acting randomly. However this is a perfect opportunity to see how the actor and environment interact. Below we define a simple helper function to display a video given frames from this interaction, and we show 500 steps of the actor taking actions in the world. ``` def display_video(frames, filename='temp.mp4'): """Save and display video.""" # Write video with imageio.get_writer(filename, fps=60) as video: for frame in frames: video.append_data(frame) # Read video and display the video video = open(filename, 'rb').read() b64_video = base64.b64encode(video) video_tag = ('<video width="320" height="240" controls alt="test" ' 'src="data:video/mp4;base64,{0}">').format(b64_video.decode()) return IPython.display.HTML(video_tag) # Run the actor in the environment for desired number of steps. 
frames = [] num_steps = 500 timestep = environment.reset() for _ in range(num_steps): frames.append(render(environment)) action = actor.select_action(timestep.observation) timestep = environment.step(action) # Save video of the behaviour. display_video(np.array(frames)) ``` ## Storing actor experiences in a replay buffer Many RL agents utilize a data structure such as a replay buffer to store data from the environment (e.g. observations) along with actions taken by the actor. This data will later be fed into a learning process in order to update the policy. Again we can expand our earlier diagram to include this step: <img src="https://github.com/deepmind/acme/raw/master/docs/diagrams/batch_loop.png" width="500" /> In order to make this possible, Acme leverages [Reverb](https://github.com/deepmind/reverb) which is an efficient and easy-to-use data storage and transport system designed for Machine Learning research. Below we will create the replay buffer before interacting with it. ``` # Create a table with the following attributes: # 1. when replay is full we remove the oldest entries first. # 2. to sample from replay we will do so uniformly at random. # 3. before allowing sampling to proceed we make sure there is at least # one sample in the replay table. # 4. we use a default table name so we don't have to repeat it many times below; # if we left this off we'd need to feed it into adders/actors/etc. below. replay_buffer = reverb.Table( name=adders.DEFAULT_PRIORITY_TABLE, max_size=1000000, remover=reverb.selectors.Fifo(), sampler=reverb.selectors.Uniform(), rate_limiter=reverb.rate_limiters.MinSize(min_size_to_sample=1)) # Get the server and address so we can give it to the modules such as our actor # that will interact with the replay buffer. replay_server = reverb.Server([replay_buffer], port=None) replay_server_address = 'localhost:%d' % replay_server.port ``` We could interact directly with Reverb in order to add data to replay. 
However, in Acme we have an additional layer on top of this data-storage that allows us to use the same interface no matter what kind of data we are inserting. This layer in Acme corresponds to an `Adder` which adds experience to a data table. We provide several adders that differ in the type of information they store in the table; in this case we will make use of an `NStepTransitionAdder` which stores simple transitions (if N=1) or accumulates N steps to form an aggregated transition. ``` # Create a 5-step transition adder where in between those steps a discount of # 0.99 is used (which should be the same discount used for learning). adder = adders.NStepTransitionAdder( client=reverb.Client(replay_server_address), n_step=5, discount=0.99) ``` We can use the adder to add transitions to replay directly, using the `add()` and `add_first()` methods as follows: ``` num_episodes = 2 #@param for episode in range(num_episodes): timestep = environment.reset() adder.add_first(timestep) while not timestep.last(): action = actor.select_action(timestep.observation) timestep = environment.step(action) adder.add(action=action, next_timestep=timestep) ``` Since this is a common enough way to observe data, `Actor`s in Acme generally take an `Adder` instance that they use to define their observation methods. We saw earlier that the `FeedForwardActor`, like all `Actor`s, defines `observe` and `observe_first` methods. If we give the actor an `Adder` instance at init then it will use this adder to make observations. ``` actor = actors.FeedForwardActor(policy_network=behavior_network, adder=adder) ``` Below we repeat the same process, but using `actor` and its `observe` methods. We note these subtle changes below. ``` num_episodes = 2 #@param for episode in range(num_episodes): timestep = environment.reset() actor.observe_first(timestep) # Note: observe_first.
while not timestep.last(): action = actor.select_action(timestep.observation) timestep = environment.step(action) actor.observe(action=action, next_timestep=timestep) # Note: observe. ``` ## Learning from experiences in replay Acme provides multiple learning algorithms/agents. Here, we will use Acme's D4PG learning algorithm to learn from the data collected by the actor. To do so, we first create a TensorFlow dataset from the Reverb table using the `make_dataset` function. ``` # This connects to the created reverb server; we used a transition adder above, # so we tell the dataset function that so it knows the type of data that's # coming out. dataset = datasets.make_dataset( server_address=replay_server_address, batch_size=256, environment_spec=environment_spec, transition_adder=True) ``` In what follows we'll make use of D4PG, an actor-critic learning algorithm. D4PG is a somewhat complicated algorithm, so we'll leave a full explanation of this method to the accompanying paper (see the documentation). However, since D4PG is an actor-critic algorithm we will have to specify a critic for it (a value function); in this case D4PG uses a distributional critic. D4PG also makes use of online and target networks, so we need to create copies of both the policy_network (from earlier) and the new critic network we are about to create. To build our critic networks, we use a ***multiplexer***, which is simply a neural network module that takes multiple inputs and processes them in different ways before combining them and processing further. In the case of Acme's `CriticMultiplexer`, the inputs are observations and actions, each with their own network torso. There is then a critic network module that processes the outputs of the observation network and the action network and outputs a tensor. Finally, in order to optimize these networks the learner must receive networks with their variables created.
We have utilities in Acme to handle exactly this, and we do so in the final lines of the following code block. ``` critic_network = snt.Sequential([ networks.CriticMultiplexer( observation_network=tf2_utils.batch_concat, action_network=tf.identity, critic_network=networks.LayerNormMLP( layer_sizes=(400, 300), activate_final=True)), # Value-head gives a 51-atomed delta distribution over state-action values. networks.DiscreteValuedHead(vmin=-150., vmax=150., num_atoms=51)]) # Create the target networks target_policy_network = copy.deepcopy(policy_network) target_critic_network = copy.deepcopy(critic_network) # We must create the variables in the networks before passing them to learner. tf2_utils.create_variables(network=policy_network, input_spec=[environment_spec.observations]) tf2_utils.create_variables(network=critic_network, input_spec=[environment_spec.observations, environment_spec.actions]) tf2_utils.create_variables(network=target_policy_network, input_spec=[environment_spec.observations]) tf2_utils.create_variables(network=target_critic_network, input_spec=[environment_spec.observations, environment_spec.actions]) ``` We can now create a learner that uses these networks. Note that here we're using the same discount factor as was used in the transition adder. The rest of the parameters are reasonable defaults. Note however that we will log output to the terminal at regular intervals. We have also turned off checkpointing of the network weights (i.e. saving them). This is usually used by default but can cause issues with interactive colab sessions. ``` learner = d4pg.D4PGLearner(policy_network=policy_network, critic_network=critic_network, target_policy_network=target_policy_network, target_critic_network=target_critic_network, dataset=dataset, discount=0.99, target_update_period=100, policy_optimizer=snt.optimizers.Adam(1e-4), critic_optimizer=snt.optimizers.Adam(1e-4), # Log learner updates to console every 10 seconds. 
logger=loggers.TerminalLogger(time_delta=10.), checkpoint=False) ``` Inspecting the learner's public methods we see that it primarily exists to expose its variables and update them. That is, this looks remarkably similar to supervised learning. ``` [method_or_attr for method_or_attr in dir(learner) # pylint: disable=expression-not-assigned if not method_or_attr.startswith('_')] ``` The learner's `step()` method samples a batch of data from the replay dataset given to it, and performs optimization using the optimizer, logging loss metrics along the way. Note: in order to sample from the replay dataset, there must be at least 1000 elements in the replay buffer (which it should already have from the actor's added experiences). ``` learner.step() ``` # Training loop Finally, we can put all of the pieces together and run some training steps in the environment, alternating the actor's experience gathering with the learner's learning. This is a simple training loop that runs for `num_training_episodes` episodes where the actor and learner take turns generating and learning from experience respectively: - Actor acts in environment & adds experience to replay for `num_actor_steps_per_iteration` steps<br> - Learner samples from replay data and learns from it for `num_learner_steps_per_iteration` steps<br> Note: Since the learner and actor share a policy network, any learning done by the learner is automatically transferred to the actor's policy. ``` num_training_episodes = 10 # @param {type: "integer"} min_actor_steps_before_learning = 1000 # @param {type: "integer"} num_actor_steps_per_iteration = 100 # @param {type: "integer"} num_learner_steps_per_iteration = 1 # @param {type: "integer"} learner_steps_taken = 0 actor_steps_taken = 0 for episode in range(num_training_episodes): timestep = environment.reset() actor.observe_first(timestep) episode_return = 0 while not timestep.last(): # Get an action from the agent and step in the environment.
action = actor.select_action(timestep.observation) next_timestep = environment.step(action) # Record the transition. actor.observe(action=action, next_timestep=next_timestep) # Book-keeping. episode_return += next_timestep.reward actor_steps_taken += 1 timestep = next_timestep # See if we have some learning to do. if (actor_steps_taken >= min_actor_steps_before_learning and actor_steps_taken % num_actor_steps_per_iteration == 0): # Learn. for learner_step in range(num_learner_steps_per_iteration): learner.step() learner_steps_taken += num_learner_steps_per_iteration # Log quantities. print('Episode: %d | Return: %f | Learner steps: %d | Actor steps: %d'%( episode, episode_return, learner_steps_taken, actor_steps_taken)) ``` ## Putting it all together: an Acme agent <img src="https://github.com/deepmind/acme/raw/master/docs/diagrams/agent_loop.png" width="500" /> Now that we've used all of the pieces and seen how they can interact, there's one more way we can put it all together. In the Acme design scheme, an agent is an entity with both a learner and an actor component that pieces together their interactions internally. An agent handles the interchange between the actor adding experiences to the replay buffer and the learner sampling from it and learning, and, in turn, shares the updated weights back with the actor. Similar to how we used the `num_actor_steps_per_iteration` and `num_learner_steps_per_iteration` parameters in our custom training loop above, the agent parameters `min_observations` and `observations_per_step` specify the structure of the agent's training loop. * `min_observations` specifies how many actor steps need to have happened to start learning. * `observations_per_step` specifies how many actor steps should occur in between each learner step. ``` d4pg_agent = agent.Agent(actor=actor, learner=learner, min_observations=1000, observations_per_step=8.) ``` Of course we could have just used the `agents.D4PG` agent directly, which sets all of this up for us.
We'll stick with the agent we've just created, but most of the steps outlined in this tutorial can be skipped by just making use of a prebuilt agent and the environment loop. ## Training the full agent To simplify collecting and storing experiences, you can also directly use Acme's `EnvironmentLoop`, which runs the environment loop for a specified number of episodes. Each episode is itself a loop which first interacts with the environment to get an observation and then gives that observation to the agent in order to retrieve an action. Upon termination of an episode a new episode will be started. If the number of episodes is not given then this will interact with the environment indefinitely. ``` # This may be necessary if any of the episodes were cancelled above. adder.reset() # We also want to make sure the logger doesn't write to disk because that can # cause issues in colab on occasion. logger = loggers.TerminalLogger(time_delta=10.) loop = environment_loop.EnvironmentLoop(environment, d4pg_agent, logger=logger) loop.run(num_episodes=50) ``` ## Evaluate the D4PG agent We can now evaluate the agent. Note that this will use the noisy behavior policy, and so won't quite be optimal. If we wanted to be absolutely precise we could easily replace this with the noise-free policy. Note that the optimal policy can get about 1000 reward in this environment. D4PG should generally get to that within 50-100 learner steps. We've cut it off at 50 and not dropped the behavior noise just to simplify this tutorial. ``` # Run the actor in the environment for desired number of steps. frames = [] num_steps = 500 timestep = environment.reset() for _ in range(num_steps): frames.append(render(environment)) action = d4pg_agent.select_action(timestep.observation) timestep = environment.step(action) # Save video of the behaviour. display_video(np.array(frames)) ```
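The `min_observations` / `observations_per_step` schedule described earlier can be sketched with a small bookkeeping function. This is our own illustration of the schedule, not Acme's actual `Agent` implementation, and the helper `learner_steps` is a hypothetical name:

```python
def learner_steps(num_observations, min_observations, observations_per_step):
    """Roughly how many learner steps have run after `num_observations`
    actor observations, under the schedule described above."""
    if num_observations < min_observations:
        return 0  # warm-up: act only, no learning yet
    # After warm-up, roughly one learner step per `observations_per_step`
    # actor observations.
    return int((num_observations - min_observations) / observations_per_step)

# With the values passed to agent.Agent above (min_observations=1000,
# observations_per_step=8.):
for n in (500, 1000, 1080, 2000):
    print(n, '->', learner_steps(n, 1000, 8.0))
```

Under this schedule no learning happens for the first 1000 observations, after which the learner is stepped once for every 8 actor steps, matching the `observations_per_step=8.` ratio used in the agent construction above.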
``` import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F import torchvision import torchvision.datasets as dsets import torchvision.transforms as transforms import numpy as np import pandas as pd import matplotlib.pyplot as plt use_cuda = True device = torch.device('cuda:4' if use_cuda else 'cpu') ``` # 1、data loader Loading the MNIST data ``` dataMNIST_train = dsets.MNIST( root = 'data', train = True, download = True, transform = transforms.ToTensor() ) dataMNIST_test = dsets.MNIST( root = 'data', train = False, download = True, transform = transforms.ToTensor() ) dataLoaderMNIST_train = torch.utils.data.DataLoader( dataset = dataMNIST_train, batch_size = 128, shuffle = True, ) dataLoaderMNIST_test = torch.utils.data.DataLoader( dataset = dataMNIST_test, batch_size = 128, shuffle = True, ) dataMNIST_train dataMNIST_test x,y = iter(dataLoaderMNIST_train).next() # (batch_size, channel, height, width) # RNN input data shape:(batch_size, seq_size, input_size) x.shape,x.squeeze(1).shape x = x.squeeze(1) #(batch_size) y.shape ``` # 2、RNN model ``` class modelLSTM(nn.Module): def __init__(self, input_size = 28, hidden_size = 32, num_layers = 1): super(modelLSTM, self).__init__() self.lstm = nn.LSTM( input_size = input_size, hidden_size = hidden_size, num_layers = num_layers, batch_first = True, ) self.linear = nn.Linear( in_features = hidden_size*input_size, out_features = 10 ) def forward(self, x, state = None): x = x.view(-1,x.shape[-2],x.shape[-1]) y, next_state = self.lstm(x, state) y = y.contiguous().view(x.shape[0],-1) # contiguous operation y = self.linear(y) y = F.softmax(y,dim = 1) return y,next_state class modelGRU(nn.Module): def __init__(self, input_size = 28, hidden_size = 32, num_layers = 1): super(modelGRU, self).__init__() self.gru = nn.GRU( input_size = input_size, hidden_size = hidden_size, num_layers = num_layers, batch_first = True, ) self.linear = nn.Linear( in_features = hidden_size*input_size, out_features 
= 10 ) def forward(self, x, state = None): x = x.view(-1,x.shape[-2],x.shape[-1]) y, next_state = self.gru(x, state) y = y.contiguous().view(x.shape[0],-1) # contiguous operation y = self.linear(y) y = F.softmax(y,dim = 1) return y,next_state mylstm = modelLSTM() mylstm.cpu() mygru = modelGRU() mygru.cpu() x = torch.randn(32,1,28,28) y,state = mylstm(x) y,state = mygru(x) ``` # 3、train_data ``` optimizerLSTM = optim.Adam(mylstm.parameters(),lr = 0.001,) optimizerGRU = optim.Adam(mygru.parameters(),lr = 0.001,) loss_func = nn.CrossEntropyLoss() mylstm.to(device) mygru.to(device) %%time mylstm.train() for epoch in range(5): for step,(x,y) in enumerate(dataLoaderMNIST_train): x = x.cuda(device) y = y.cuda(device) y_,state = mylstm(x) loss = loss_func(y_,y) optimizerLSTM.zero_grad() loss.backward() optimizerLSTM.step() print('\repoch:{epoch:3}--step:{step:5}--loss:{loss:.4}'.format(epoch = epoch, step=step, loss=loss),end = '') acc = 0 for _,(x,y) in enumerate(dataLoaderMNIST_test): x = x.cuda(device) y = y.cuda(device) y_,state = mylstm(x) acc += torch.sum(y_.argmax(1) == y) print('\repoch:{epoch:3}--step:{step:5}--loss:{loss:.4}--acc:{acc:.4}%-----'.format(epoch = epoch, step=step, loss=loss, acc = acc/10000*100)) print() %%time mygru.train() for epoch in range(5): for step,(x,y) in enumerate(dataLoaderMNIST_train): x = x.cuda(device) y = y.cuda(device) y_,state = mygru(x) loss = loss_func(y_,y) optimizerGRU.zero_grad() loss.backward() optimizerGRU.step() print('\repoch:{epoch:3}--step:{step:5}--loss:{loss:.4}'.format(epoch = epoch, step=step, loss=loss),end = '') acc = 0 for _,(x,y) in enumerate(dataLoaderMNIST_test): x = x.cuda(device) y = y.cuda(device) y_,state = mygru(x) acc += torch.sum(y_.argmax(1) == y) print('\repoch:{epoch:3}--step:{step:5}--loss:{loss:.4}--acc:{acc:.4}%-----'.format(epoch = epoch, step=step, loss=loss, acc = acc/10000*100)) print() FPS_LSTM = 70000*5/55.7 FPS_GRU = 70000*5/52.9 FPS_LSTM,FPS_GRU (FPS_GRU-FPS_LSTM)/FPS_LSTM*100 ```
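The small speed gap measured above is consistent with a simple parameter count: per layer, an LSTM has four gate blocks where a GRU has only three, so for the same sizes the GRU carries about three quarters of the recurrent parameters. A back-of-the-envelope sketch (our own arithmetic, using PyTorch's parameterization with two bias vectors per layer, not a measurement of the models above):

```python
def rnn_param_count(num_gate_blocks, input_size, hidden_size):
    # Per layer, torch.nn.LSTM / torch.nn.GRU store weight_ih of shape
    # (gates*hidden, input), weight_hh of shape (gates*hidden, hidden),
    # plus bias_ih and bias_hh vectors of length gates*hidden each.
    return num_gate_blocks * (hidden_size * input_size
                              + hidden_size * hidden_size
                              + 2 * hidden_size)

lstm_params = rnn_param_count(4, input_size=28, hidden_size=32)  # as in modelLSTM
gru_params = rnn_param_count(3, input_size=28, hidden_size=32)   # as in modelGRU
print(lstm_params, gru_params, gru_params / lstm_params)  # ratio is exactly 3/4
```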
``` !git add "td5_ressources/" !git add "TD5_A_Simple_NN_for_a_Simple_LR.ipynb" !git commit -m "reshaping TD5" !git push origin master ``` # TD5 • Coding a simple perceptron with Backprop <h3> A <a href="http://playground.tensorflow.org/#activation=linear&regularization=L1&batchSize=29&dataset=gauss&regDataset=reg-plane&learningRate=0.001&regularizationRate=0.003&noise=15&networkShape=1&seed=0.37334&showTestData=true&discretize=false&percTrainData=50&x=false&y=false&xTimesY=true&xSquared=true&ySquared=true&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false">fun link</a> for playing around with different neural network architectures </h3> ``` import os import pandas as pd import numpy as np import matplotlib.pyplot as plt ``` # A scalar ``` scalar = 4 scalar2 = np.array(4) scalar, scalar2, scalar2.shape ``` # A vector ## Creation ``` vector_1D = np.array([4]) vector_1D_of_multiple_elements = np.array([1,2,3,4,5,6]) print(vector_1D, vector_1D_of_multiple_elements) print(vector_1D.shape, vector_1D_of_multiple_elements.shape) print(vector_1D.ndim, vector_1D_of_multiple_elements.ndim) # Number of elements in the array print(vector_1D.size, vector_1D_of_multiple_elements.size) ``` ## Transpose vector ``` (vector_1D_of_multiple_elements.T, vector_1D_of_multiple_elements.T.shape) # same thing (in terms of representation) (vector_1D_of_multiple_elements, vector_1D_of_multiple_elements.shape, vector_1D_of_multiple_elements.ndim) ``` # A matrix ## Creation ``` matrix = np.array( [ [ 1,2,3], [ 4,5,6] ]) matrix, matrix.shape, matrix.ndim, matrix.size ``` ## Transpose matrix ``` matrix.T, matrix.T.shape, matrix.T.ndim, matrix.size ``` # Re-shape a vector or matrix ``` vector_1D.size vector_1D.reshape((1,1)) vector_1D.reshape((1,1,1)) vector_1D_of_multiple_elements.size vector_1D_of_multiple_elements.reshape(3,2) vector_1D_of_multiple_elements.reshape(2, 3) ``` ## pd.DataFrame ```
pd.DataFrame(vector_1D) pd.DataFrame(vector_1D_of_multiple_elements) pd.DataFrame(matrix) pd.DataFrame(vector_1D_of_multiple_elements) pd.DataFrame(vector_1D_of_multiple_elements.T) pd.DataFrame(vector_1D_of_multiple_elements).shape pd.DataFrame(vector_1D_of_multiple_elements).T ``` # Vector "as" matrix In linear algebra, a **column vector** or **column matrix** is an **m × 1 matrix**, that is, a **matrix consisting of a single column of m elements** ``` as_matrix = vector_1D_of_multiple_elements.reshape(6, 1) # 6 rows, 1 column pd.DataFrame(as_matrix) ``` Similarly, a row vector or row matrix is a 1 × m matrix, that is, a matrix consisting of a single row of m elements ``` as_matrix = vector_1D_of_multiple_elements.reshape(1, 6) # 1 row, 6 columns pd.DataFrame(as_matrix) ``` # Dot Product (scalar product) ``` vector1 = np.array([1,2,3,4,5]) vector2 = np.array([2,4]) try: np.dot( vector1, vector2 ) except: print("Not the same number of elements to perform a dot product (must be aligned)") vector2 = np.array([2,4,1,1,1]) np.dot(vector1, vector2) # 1*2 + 2*4 + 3*1 + 4*1 + 5*1 try: np.dot(vector1.reshape(1,5), vector2.reshape(1,5)) except: print("Again, not aligned: in maths you take the transpose of one of the vectors") ``` <img src="td5_ressources/dot product.png" width="100%"> ``` # this works result_dot_product = np.dot(vector1.reshape(1,5), vector2.reshape(1,5).T) # this works result_matrix_product = np.matmul(vector1.reshape(1,5), vector2.reshape(1,5).T) # same result result_dot_product, result_matrix_product ``` # Elementwise multiplication (Hadamard product on matrices or column/row vectors) Not to be confused with the dot product: ``` vector1 * vector2 # vector of elementwise multiplications # same result np.multiply(vector1, vector2) np.multiply(vector1.reshape(1,5), vector2.reshape(1,5)) ``` # Broadcasting rules Broadcasting in numpy refers to how numpy **treats arrays** with **different shapes** during **arithmetic operations**.<br> Subject to certain constraints, the **smaller
array is “broadcast” across the larger array** so that they have compatible shapes! * Same shapes, no problem: ``` vector1 = np.array([1,2,3]) vector2 = np.array([1,2,3]) vector1 + vector2 ``` * different shapes, what to do ? <u>**Example1:**</u> ``` vector1 = np.array([1]) vector2 = np.array([1,2,3]) vector1.shape, vector2.shape vector1 + vector2 ``` This is the same thing as: ``` vector1_transformed = np.tile(vector1, reps=(1,3)) print( vector1_transformed ) vector1_transformed + vector2 ``` <img src="td5_ressources/broadcoast1.png" width="100%"> > from https://cs231n.github.io/python-numpy-tutorial/#numpy-broadcasting: * If the arrays do not have the same rank (number of dimensions), **prepend the shape of the lower rank array** with **1s until both shapes have the same length**. * The two arrays **are said to be compatible** in **a** dimension if they have the **same size** in the dimension, or if one of the arrays **has size 1** in **that** dimension. * The arrays can be broadcast together if they **are compatible in all dimensions**. After broadcasting: * After broadcasting, each array **behaves as if it had shape equal to the elementwise maximum of shapes** of the two input arrays. * In any dimension where one array had size 1 and the other array had size greater than 1, the first array behaves **as if it were copied along that dimension**. ``` vector1 = np.array([[1]]) vector2 = np.array([1,2,3]) vector1.shape, vector2.shape ``` After prepending ones, the last dimension of the array will be "stretched" so that its number of elements matches that of the first array ``` vector1 + vector2 ``` <u>**Example2:**</u> ``` vector1 = np.array([[1], [2]]) # 2 rows, 1 column vector2 = np.array([1,2,3]) # 1D vector vector1.shape, vector2.shape ``` 1. `vector1` has 2 rows, while `vector2` has no "rows" dimension, so a 1 is prepended to create a new dimension 2. then `vector2` is stretched on its new dim so as to have 2 elements, like `vector1` 3.
while `vector1` has its last dimension (columns) stretched to have 3 elements, like `vector2` ``` vector1 + vector2 ``` This is the same as: ``` vector2 np.tile(vector2, reps=(2, 1)) np.tile(vector1, reps=(1, 3)) ``` <u>**Example3:**</u> ``` vector1 = np.array([[1, 4, 5], [2, 2, 5]]) # 2 rows, 3 columns vector2 = np.array([1, 2]) # 1D vector vector1.shape, vector2.shape try: vector1 + vector2 except Exception as e: print("After prepending a 1 to vector2's shape, impossible to match 2 to 3:\n{}".format(e)) ``` # Finding the parameters in a simple linear regression case ## The data ``` %matplotlib inline x = np.linspace(0, 100, 100) y = 8*x + np.random.normal(x, 100) # y = 8*x + epsilon with epsilon ~ N(x, 100) plt.scatter(x, y) plt.show() ``` How to find the coefficients $\beta$ (here the intercept $\beta_0$ and the slope $\beta_1$) in order to have the best fitting (simple) linear model ? ## The plotting function ``` def plotting(beta0, beta1): plt.scatter(x_scaled_and_centered, y) plt.plot(x_scaled_and_centered, beta0 + beta1 * x_scaled_and_centered, color='r') ``` ## Using OLS ``` from sklearn.linear_model import LinearRegression # adding one dimension to x (to have a feature-matrix notation, # although x is only 1 feature, # which can then be viewed as a column vector) lm = LinearRegression().fit(x[:, np.newaxis], y) lm.intercept_, lm.coef_ ``` With standardization before: ``` from sklearn.preprocessing import StandardScaler x_scaled_and_centered = StandardScaler().fit_transform(x[:, np.newaxis]) lm = LinearRegression(fit_intercept=True).fit(x_scaled_and_centered, y) lm.intercept_, lm.coef_ plotting(lm.intercept_, lm.coef_) ``` ## Using a self-made (definitely non-optimised) algorithm ``` def algo_simple_linreg(x, y): """A self-made, definitely non-optimised algorithm to find the best beta0 and beta1 values:""" from sklearn.metrics import mean_squared_error MSE = {} for beta0 in np.linspace(-5000, 5000, 100): for beta1 in np.linspace(-5000, 5000, 100): model =
lambda x: beta0 + beta1*x mse = mean_squared_error( model(x), y) MSE[(beta0, beta1)] = mse return MSE MSE = algo_simple_linreg(x_scaled_and_centered, y) params = pd.Series(MSE).unstack() import seaborn as sns ax = plt.subplot(111) sns.heatmap(params, cmap="coolwarm", ax=ax) params.stack().idxmin() plotting(*params.stack().idxmin()) ``` ## Using One neuron 🤓 ### Definition Two nice definitions I like, borrowed from [there](https://people.minesparis.psl.eu/fabien.moutarde/ES_MachineLearning/Slides/MLP-NeuralNetworks_course_2pp.pdf) > A **processing “unit”** applying a **simple operation to its inputs**, and which can be **“connected” to others** to build a **network** able to realize any input-output function > “Usual” definition: a “unit” computing a **weighted sum of its inputs**, that can add a constant term, and **apply some non-linearity (sigmoid, ReLU, ...)** <img src="td4_ressources/img_perceptron_towards_data_science.png" width=500> ## Behavior formulas From the last definition of what a neuron is we get: 1. <span style="color: red;">Weighted sum of its inputs</span> and <span style="color: blue;">can add a constant term</span>. $$ f(x_i) = \color{red}{\sum_{i=1}^{p}{w_i x_i}} + \color{blue}{cst}$$ 2. Apply some <span style="color: green;">non-linearity function g</span>:<br> Example: sigmoid function (g is Sigmoid) $$ g(z) = Sigmoid(z) = \color{green}{\frac{1}{1+e^{-z}}} $$ **Then the output of the neuron is:** $$ y_i = ( g \circ f ) (x) = g(f(x)) = \color{green}{\frac{1}{1+e^{-(\color{red}{\sum_{i=1}^{p}{w_i x_i}} + \color{blue}{cst})}}} $$ It seems that **<span style='color: red;'><u>1.</u></span>** looks very similar to a simple linear regression formula !<br> - The **weights** $w_i$ can be seen as the **coefficients** of a linear regression. - The $x_i$ can be seen as the **features** of **one** data point (one **row vector**, i.e. **one line of a matrix** or one observation in a **dataframe** !).
There are $p$ features for one input vector here, using the former notation. - The output $y_i$ is a scalar, that is, the output for one input vector of features $i$. We can rewrite this formula in **vector notation**, so we can scale this to **multiple input vectors**. $$ Y = (g \circ f) (X) = g( X W + B ) $$ or maybe using the indices so it is a little bit clearer $$ Y_{k,1} = (g \circ f) (X_{k,p}) = g( X_{k,p} W_{p,1} + B_{k,1} ) $$ Where $X$ is a **row vector of p features** (or a **matrix of k row vectors of p features**).<br> This notation is useful as it can be used for one single input, or many. - if **one input row vector** is passed, then a **simple dot product** between this vector and the column **weights vector** occurs, forming one scalar output $Y_{1,1}$. - if multiple inputs are passed (size $k \times p$), then $W$ is still a column vector of size $p \times 1$, so that $Y$ has $k$ outputs (one for each input): each feature of a row of $X$ is multiplied by its corresponding weight in $W$, finally forming a vector of outputs $Y_{k,1}$. Let's see the simple linear regression as a special case of multiple linear regression: $X_{k,p}$ for k inputs of p features ``` x = x[:, np.newaxis] # to set x as a matrix of row vectors of 1 feature W = np.random.random(size=(x.shape[1], 1)) # one weight per feature, all between 0 and 1 for stability at first ``` the bias term (a single one, as there is a single output): ``` B = np.random.random(size=(1, 1)) ``` ## Loss and Risk function Remember the cost function ?<br> Let's take a **quadratic loss** as it is **nicely differentiable**.<br> Let's write: $$ z = (g \circ f) $$ then: $$ L(y_i, \hat{y}_i) = L(y_i, z(x_i)) = (y_i - z(x_i))^2 $$ Then in matrix notation: $$ L(Y_{k,1}, \hat{Y}_{k,1}) = L(Y_{k,1}, z(X_{k,p})) = (Y_{k,1} - z(X_{k,p}))^2 $$ Hence the result is a vector of losses, one for each output. The cost function is the **expected loss value**; if we use the quadratic loss it then becomes the **Mean Squared Error**.
$$ MSE = \frac{1}{n}\sum_{i=1}^{n}{ ( y_i - z(x_i) )^2}$$ and in matrix notation: $$ MSE = E[L(Y_{k,1}, \hat{Y}_{k,1})]= E[ (Y_{k,1} - z(X_{k,p}))^2 ] $$ ## Backpropagation At first the weights (the coefficients of a linear regression here) are chosen **randomly**.<br> Of course, if we knew them beforehand, why would we use an algorithm ? :P We are going to use **Gradient descent**: a **first-order** iterative optimization algorithm for **finding a local minimum of a differentiable function**. We want to minimize the errors produced, so we will perform gradient descent on the loss function. This implies computing the derivative of the loss function w.r.t. the weights; the **quadratic loss function** is then a good choice here as it is differentiable. Computing the gradient of the **loss function** with respect to the **weights** enables us to later find the direction in the weight/parameter space that **minimizes the loss**. This derivative can be computed in 2 different ways: - each iteration can use **one input vector**. Each of the weights will be updated by computing the derivative of the loss function w.r.t. the weights for **that single input vector**, which has been passed forward to compute the output and hence the errors; this is named **stochastic gradient descent**. - or each iteration can use a **batch of multiple vectors** (the extreme case being a batch equal to the whole training set, that is, **k row vectors**) to compute the **expected loss value for that batch of inputs**; this is named **batch gradient descent**. This means that each weight will be updated by the same quantity, **averaged** over the prediction errors drawn from passing **k input vectors**.
Also we will carefully take each update a **little step in this same direction** by using the (negation of the) derivative by a coefficient also called **learning rate**: since it influences to what extent **newly acquired information overrides old information** (wikipedia always gives the best quote). <img src="td5_ressources/batch_gradient_formula.png" width="100%"> <img src="td5_ressources/stochastic_gradient_formula.png" width="100%" > <img src="td5_ressources/gradient_descent.png" width="100%"> How to compute such gradient w.r.t. to the weights ? We use the **chain rule** ! > from wikipedia: Intuitively, the chain rule states that knowing the instantaneous rate of change of z relative to y and that of y relative to x allows one to calculate the instantaneous rate of change of z relative to x.<br> As put by George F. Simmons: "if a car travels twice as fast as a bicycle and the bicycle is four times as fast as a walking man, then the car travels 2 × 4 = 8 times as fast as the man."m We are going to see how vary the **prediction errors** by making a **change in the weight space** (w.r.t. to each weight = gradient).<br> Here we face a **composite function**, as computing such derivative w.r.t one weight implies (using the chain rule): - to first derive w.r.t the output of the activation function, - then see how the output of the activation function changes w.r.t. the variable before the activation function (weighted inputs sum) - then w.r.t. to the weight itself. <img src="td4_ressources/img_formula_gradient_descent_backprop_mattmazur.png" width=600> # Recap <img src="td5_ressources/img_explanations_Bertin_Luc.png"> # Time to implement it which activation are we going to use ? 
``` import numpy as np import matplotlib.pyplot as plt x = np.linspace(0, 100, 800) y = 8*x + np.random.normal(x, 200) # y = 8*x + epsilon with epsilon ~ N(x, 200) plt.scatter(x, y) plt.show() x = x[:, np.newaxis] # to set x as a matrix of row vectors of 1 feature def split_n_batch_indexes(X, nb_chunks=5): import numpy as np indexes = np.arange(len(X)) np.random.shuffle(indexes) # shuffles in place (returns None) return np.array_split(indexes, nb_chunks) class Neuron: """ Implementation of a single Neuron accepting multiple inputs """ def __init__(self, X, y, nb_epochs=100, nb_batches=5, learning_rate=0.01, activation="linear"): self.X, self.y = X, y # random weights self.W = np.random.random(size=(X.shape[1], 1)) # random bias (only one as only one output) self.B = np.random.random(size=(1, 1)) # number of epochs self.nb_epochs = nb_epochs # number of batches self.nb_batches = nb_batches # learning rate self.learning_rate = learning_rate # activation if activation=="linear": self.activation = lambda x: x self.derivative = lambda x: 1 # records self.records = {} def forward_pass(self, is_batch): """ a single forward pass to compute a prediction """ self.y_pred_batch = self.activation( self.X[is_batch] @ self.W + self.B) self.y_pred = self.activation( self.X @ self.W + self.B) def compute_mse_on_whole_dataset(self): """ return the mean squared error on the whole training set""" # align shapes: y is (n,), y_pred is (n, 1); without np.newaxis the subtraction would broadcast to (n, n) self.mse = np.mean((self.y[:, np.newaxis] - self.y_pred)**2) def backpropagation(self, is_batch): """ compute the gradient of the errors with respect to the weights using the chain rule """ # gradient of the errors w.r.t the output/prediction dE_dout = 2*(self.y_pred_batch - self.y[is_batch, np.newaxis]) # gradient of the prediction w.r.t before the activ.
func dout_dz = self.derivative(self.X[is_batch]) #1 for linear # gradient of z w.r.t the weight dz_dw = self.X[is_batch] # final gradient w.r.t the weights: self.dE_dw = dE_dout * dout_dz * dz_dw # for the biases (only last part change) self.dE_db = dE_dout * dout_dz * 1 def update_weights_and_biases(self): dE_dw = self.dE_dw.mean(axis=0)[:, np.newaxis] dE_db = self.dE_db.mean(axis=0)[:, np.newaxis] self.W = self.W - self.learning_rate * dE_dw self.B = self.B - self.learning_rate * dE_db def predict(self, X_test): """ same as forward pass, just provide our own X""" return self.activation( X_test @ self.W + self.B) def run(self): """ learn iteratively: - an iteration is a single forward and backward pass - an epoch is consumed when all the inputs from the dataset have been used for updating the weight and biases """ # epochs for i in range(1, self.nb_epochs): # batches: for batch_i, indices in enumerate( split_n_batch_indexes(self.X, self.nb_batches)): self.forward_pass(indices) self.compute_mse_on_whole_dataset() self.backpropagation(indices) self.update_weights_and_biases() self.records[(i, batch_i)] = [self.W, self.B, self.mse] return self.records from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(x, y) scaler = StandardScaler().fit(X_train) X_train = scaler.transform(X_train) X_test = scaler.transform(X_test) neuron = Neuron(X_train, y_train) neuron records = neuron.run() import pandas as pd index = pd.MultiIndex.from_tuples(records.keys()) records = pd.DataFrame(records.values(), index=index, columns=['weights', 'bias', 'mse_train']) records df_weights = records.weights.apply(np.ravel).apply(pd.Series) df_bias = records.bias.apply(np.ravel).apply(pd.Series) df_bias.rename(columns = lambda x: "bias_{}".format(x), inplace=True) df_weights.rename(columns = lambda x: "weights_{}".format(x), inplace=True) df = pd.concat([df_weights, df_bias], axis=1) df 
df.plot(kind="line") plt.scatter(X_test, y_test) plt.plot(X_test, neuron.predict(X_test), color='green') %matplotlib notebook import matplotlib.animation as animation fig, ax = plt.subplots() # Initial plot x_ = np.linspace(-2, 2, 100).reshape((100,1)) # y_ = weight*x_ + bias y_ = float(df.iloc[0, 0])*x_ + float(df.iloc[0, 1]) line, = ax.plot(x_, y_, label="Fit from the neuron") plt.rcParams["figure.figsize"] = (4,2) plt.ylabel("y") plt.xlabel("X") plt.scatter(X_train, y_train, color='red', label="Training data") plt.scatter(X_test, y_test, color='green', label="Test data") plt.xlim(-2, 2) plt.legend() plt.title("Linear regression training fit using a single neuron | perceptron") def animate(i): line.set_label("Fit from the perceptron : epoch {}".format(i)) plt.legend() x_ = np.linspace(-2, 2, 100).reshape((100,1)) line.set_xdata(x_) # update the data line.set_ydata( float(df.iloc[i, 0])*x_ + float(df.iloc[i, 1]))# update the data return line, ani = animation.FuncAnimation(fig, animate, frames=np.arange(1, len(df)), interval=100) plt.show() ``` # Let's try with multiple features ;-) ``` from sklearn.datasets import load_boston from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split X, y = load_boston(return_X_y=True) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.transform(X_test) lm = LinearRegression().fit(X_train, y_train) print( "linear regression coefficients {}".format(lm.coef_) ) neuron_on_boston = Neuron(X_train, y_train, learning_rate=0.1, nb_batches=1) records_on_boston = neuron_on_boston.run() import pandas as pd index = pd.MultiIndex.from_tuples(records_on_boston.keys()) df_boston = pd.DataFrame(records_on_boston.values(), index=index, columns=['weights', 'bias', 'mse_train']) df_weights = df_boston.weights.apply(np.ravel).apply(pd.Series) df_bias = 
df_boston.bias.apply(np.ravel).apply(pd.Series) df_bias.rename(columns = lambda x: "bias_{}".format(x), inplace=True) df_weights.rename(columns = lambda x: "weights_{}".format(x), inplace=True) df = pd.concat([df_weights, df_bias], axis=1) %matplotlib inline fig = plt.Figure(figsize=(10,6)) ax = fig.gca() save = df.drop("bias_0", axis=1).stack().unstack(level=0).loc[0].T save.plot(ax=ax) ax.legend().remove() fig.legend(loc='center', bbox_to_anchor=(1,0.5)) fig at_10 = save.iloc[10] at_50 = save.iloc[50] at_98 = save.iloc[98] plt.bar(x=np.arange(len(lm.coef_)), height=lm.coef_, color='red') plt.bar(x=np.arange(len(lm.coef_)), height=at_10, color='white', alpha = 0.5) plt.bar(x=np.arange(len(lm.coef_)), height=at_50, color='white', alpha = 0.5) plt.bar(x=np.arange(len(lm.coef_)), height=at_98, color='white', alpha = 0.5) ``` # In Keras ? ``` from keras.models import Sequential from keras.layers import Dense from keras import optimizers import pandas as pd model = Sequential() model.add(Dense(1, input_shape=(X.shape[1],), activation='linear')) sgd = optimizers.SGD(lr=0.02) model.compile(loss='mean_squared_error', optimizer=sgd) ``` ## A callback to store weights ``` from keras.callbacks import LambdaCallback weights = {} def save_weights(epoch, logs): weights[epoch] = model.layers[0].get_weights() keep_weights = LambdaCallback(on_epoch_end=save_weights) history = model.fit(x=X_train, y=y_train, batch_size=X_train.shape[0], epochs=99, validation_data=(X_test, y_test), verbose=0, callbacks=[keep_weights]) print(history.params) losses_ = pd.DataFrame(history.history) losses_.plot(kind="line") df_weights = pd.DataFrame(weights).T coefs_linear_reg = dict(zip( ["weight_{}".format(_) for _ in range(len(lm.coef_))], lm.coef_ )) coefs_linear_reg fig, ax = plt.subplots(figsize=(12,8)) ( df_weights[0] .apply(np.ravel) .apply(pd.Series) .rename(columns = lambda x: "weight_{}".format(x)) .plot(kind='line', ax=ax) ) ax.set_xlim(0,100) ax.legend().remove() fig.legend(loc='center', 
bbox_to_anchor=(1, 0.5)) at_98 = df_weights.iloc[98,0].reshape(-1) plt.bar(x=np.arange(len(lm.coef_)), height=lm.coef_, color='red') plt.bar(x=np.arange(len(lm.coef_)), height=at_98, color='white', alpha = 0.8) #history.model.get_weights() ``` # Fin.
# Merge (IOS) In addition to translating models to native configuration, ntc_rosetta can create configuration deltas that can be applied to the device. This means that, given two different sets of data, ntc_rosetta can compute the native commands needed to go from one to the other. Let's see what this means with an example. Let's start by loading the driver: ``` from ntc_rosetta import get_driver ios = get_driver("ios", "openconfig") ios_processor = ios() ``` Now we load some data that will represent the "running" configuration: ``` running = { "openconfig-interfaces:interfaces": { "interface": [ { "name": "FastEthernet1", "config": { "name": "FastEthernet1", "type": "iana-if-type:ethernetCsmacd", "description": "This is Fa1", "enabled": False }, "subinterfaces": { "subinterface": [ { "index": 1, "config": { "index": 1, "description": "This is Fa1.1" } }, { "index": 2, "config": { "index": 2, "description": "This is Fa1.2" } } ] } }, { "name": "FastEthernet3", "config": { "name": "FastEthernet3", "type": "iana-if-type:ethernetCsmacd", "description": "This is Fa3", "enabled": True }, "openconfig-if-ethernet:ethernet": { "openconfig-vlan:switched-vlan": { "config": { "interface-mode": "ACCESS", "access-vlan": 10 } } } }, { "name": "FastEthernet4", "config": { "name": "FastEthernet4", "type": "iana-if-type:ethernetCsmacd", "enabled": False }, "openconfig-if-ethernet:ethernet": { "openconfig-vlan:switched-vlan": { "config": { "interface-mode": "TRUNK", "trunk-vlans": [ 10, 20 ] } } } } ] }, "openconfig-network-instance:network-instances": { "network-instance": [ { "name": "default", "config": { "name": "default" }, "vlans": { "vlan": [ { "vlan-id": 10, "config": { "vlan-id": 10, "name": "prod", "status": "ACTIVE" } }, { "vlan-id": 20, "config": { "vlan-id": 20, "name": "dev", "status": "SUSPENDED" } } ] } } ] } } ``` Now we are going to copy this data into a "candidate" variable and apply some changes: ``` from copy import deepcopy candidate = 
deepcopy(running) ``` We are going to start by disabling vlan 10: ``` vlan_10 = candidate["openconfig-network-instance:network-instances"]["network-instance"][0]["vlans"]["vlan"][0] vlan_10["config"]["status"] = "SUSPENDED" ``` Eliminate vlan 20: ``` candidate["openconfig-network-instance:network-instances"]["network-instance"][0]["vlans"]["vlan"].pop(1) ``` And create a new vlan 30: ``` vlan_30 = { "vlan-id": 30, "config": { "vlan-id": 30, "name": "staging", "status": "ACTIVE" } } candidate["openconfig-network-instance:network-instances"]["network-instance"][0]["vlans"]["vlan"].append(vlan_30) ``` Once we have done those changes we can merge those two objects like this: ``` config = ios_processor.merge(candidate=candidate, running=running) ``` Finally, printing the config variable should return the native commands needed for that merge operation: ``` print(config) ```
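Under the hood, a merge like this boils down to recursively diffing two data trees and emitting commands for whatever changed. As a rough, library-agnostic illustration of that idea (this is *not* ntc_rosetta's actual algorithm; the `diff` helper and the toy vlan data below are made up for illustration):

```python
# Illustrative sketch (NOT ntc_rosetta's implementation): walk candidate
# vs. running and collect which paths were added, removed, or changed.
def diff(running, candidate, path=""):
    changes = {}
    for k in set(running) | set(candidate):
        p = "{}/{}".format(path, k)
        if k not in candidate:
            changes[p] = ("removed", running[k])
        elif k not in running:
            changes[p] = ("added", candidate[k])
        elif isinstance(running[k], dict) and isinstance(candidate[k], dict):
            changes.update(diff(running[k], candidate[k], p))  # recurse into subtrees
        elif running[k] != candidate[k]:
            changes[p] = ("changed", candidate[k])
    return changes

running = {"vlan": {10: {"status": "ACTIVE"}, 20: {"status": "SUSPENDED"}}}
candidate = {"vlan": {10: {"status": "SUSPENDED"}, 30: {"status": "ACTIVE"}}}
print(diff(running, candidate))
```

A real driver would then translate each entry of such a delta into the corresponding native commands (e.g. `no vlan 20`, `vlan 30`), which is what `ios_processor.merge` produces for us.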
``` %reload_ext autoreload %autoreload 2 %matplotlib inline import os os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"; os.environ["CUDA_VISIBLE_DEVICES"]="0"; ``` ## Text Regression with Extra Regressors: An Example of Using Custom Data Formats and Models in *ktrain* This notebook illustrates how one can construct custom data formats and models for use in *ktrain*. In this example, we will build a model that can predict the price of a wine from **both** its textual description and the winery that produced it. This example is inspired by [FloydHub's regression template](https://github.com/floydhub/regression-template) for wine price prediction. However, instead of using the wine variety as the extra regressor, we will use the winery. Text classification (or text regression) with extra predictors arises across many scenarios. For instance, when making a prediction about the trustworthiness of a news story, one may want to consider both the text of the news article and extra metadata such as the news publication and the authors. Here, such models can be built. The dataset in CSV format can be obtained from FloydHub at [this URL](https://www.floydhub.com/floydhub/datasets/wine-reviews/1/wine_data.csv). We will begin by importing some necessary modules and reading in the dataset. ``` # import some modules and read in the dataset import pandas as pd from tensorflow import keras import numpy as np import math path = 'data/wine/wine_data.csv' # ADD path/to/dataset data = pd.read_csv(path) data = data.sample(frac=1., random_state=42) data.head() ```
value_counts = data['variety'].value_counts() to_remove = value_counts[value_counts <= variety_threshold].index data.replace(to_remove, np.nan, inplace=True) data = data[pd.notnull(data['variety'])] data = data[pd.notnull(data['winery'])] # Split data into train and test train_size = int(len(data) * .8) print ("Train size: %d" % train_size) print ("Test size: %d" % (len(data) - train_size)) # Train features description_train = data['description'][:train_size] variety_train = data['variety'][:train_size] # Train labels labels_train = data['price'][:train_size] # Test features description_test = data['description'][train_size:] variety_test = data['variety'][train_size:] # Test labels labels_test = data['price'][train_size:] x_train = description_train.values y_train = labels_train.values x_test = description_test.values y_test = labels_test.values # winery metadata to be used later winery_train = data['winery'][:train_size] winery_test = data['winery'][train_size:] ``` ## Building a Vanilla Text Regression Model in *ktrain* We will preprocess the data and select a `linreg` model for our initial "vanilla" text regression model. ``` import ktrain from ktrain import text trn, val, preproc = text.texts_from_array(x_train=x_train, y_train=y_train, x_test=x_test, y_test=y_test, ngram_range=3, maxlen=200, max_features=35000) text.print_text_regression_models() model = text.text_regression_model('linreg', train_data=trn, preproc=preproc) ``` ## Adding an Extra Regressor to Our Model Next, we will add an extra regressor to our model, thereby, creating a new, augmented model. We choose the winery as the extra regressor, which is a categorical variable. Instead of representing the winery as a typical one-hot-encoded vector, we will learn an embedding for the winery during training. The embedding module will then be concatenated with our `linreg` text regression model forming a new model. The new model expects two distinct inputs. 
The first input is an integer representing the winery. The second input is a sequence of word IDs - standard input to neural text classifiers/regressors. ``` extra_train_data = winery_train extra_test_data = winery_test # encode winery as integers from sklearn.preprocessing import LabelEncoder encoder = LabelEncoder() encoder.fit(data['winery']) extra_train = encoder.transform(extra_train_data) extra_test = encoder.transform(extra_test_data) no_of_unique_cat = np.max(extra_train) + 1 embedding_size = min(np.ceil((no_of_unique_cat)/2), 50 ) embedding_size = int(embedding_size) vocab = no_of_unique_cat+1 print(embedding_size) extra_train = np.expand_dims(extra_train, -1) extra_test = np.expand_dims(extra_test, -1) # winery module extra_input = keras.layers.Input(shape=(1,)) extra_output = keras.layers.Embedding(vocab, embedding_size, input_length=1)(extra_input) extra_output = keras.layers.Flatten()(extra_output) extra_model = keras.Model(inputs=extra_input, outputs=extra_output) extra_model.compile(loss='mse', optimizer='adam', metrics=['mae']) # Combine winery module with linreg model merged_out = keras.layers.concatenate([extra_model.output, model.output]) merged_out = keras.layers.Dropout(0.25)(merged_out) merged_out = keras.layers.Dense(1000, activation='relu')(merged_out) merged_out = keras.layers.Dropout(0.25)(merged_out) merged_out = keras.layers.Dense(500, activation='relu')(merged_out) merged_out = keras.layers.Dropout(0.5)(merged_out) merged_out = keras.layers.Dense(1)(merged_out) combined_model = keras.Model([extra_model.input] + [model.input], merged_out) combined_model.compile(loss='mae', optimizer='adam', metrics=['mae']) ``` ## Wrapping our Data in an Instance of `ktrain.Dataset` To use this custom data format of two inputs in *ktrain*, we will wrap it in a `ktrain.Dataset` instance. There are two ways to do this. 
The first is to represent our datasets as `tf.data.Dataset` instances and then wrap each in a `ktrain.TFDataset` instance, which is a wrapper around a `tf.data.Dataset`. Use of `tf.data.Dataset` instances can potentially [yield certain performance improvements](https://www.tensorflow.org/guide/data_performance). See [this example notebook](https://github.com/amaiya/ktrain/blob/master/examples/vision/mnist-tf_workflow.ipynb) for a demonstration of using the `ktrain.TFDataset` class. For this example, one can make use of `ktrain.TFDataset` instances as follows:

```python
import tensorflow as tf
from ktrain.data import TFDataset

BATCH_SIZE = 256
trn_combined = [extra_train] + [trn[0]] + [trn[1]]
val_combined = [extra_test] + [val[0]] + [val[1]]

def features_to_tfdataset(examples):
    def gen():
        for idx, ex0 in enumerate(examples[0]):
            ex1 = examples[1][idx]
            label = examples[2][idx]
            x = (ex0, ex1)
            y = label
            yield ((x, y))

    tfdataset = tf.data.Dataset.from_generator(
        gen,
        ((tf.int32, tf.int32), tf.int64),
        ((tf.TensorShape([None]), tf.TensorShape([None])), tf.TensorShape([]))
    )
    return tfdataset

train_tfdataset = features_to_tfdataset(trn_combined)
val_tfdataset = features_to_tfdataset(val_combined)
train_tfdataset = train_tfdataset.shuffle(trn_combined[0].shape[0]).batch(BATCH_SIZE).repeat(-1)
val_tfdataset = val_tfdataset.batch(BATCH_SIZE)

train_data = ktrain.TFDataset(train_tfdataset, n=trn_combined[0].shape[0], y=trn_combined[2])
val_data = ktrain.TFDataset(val_tfdataset, n=val_combined[0].shape[0], y=val_combined[2])
learner = ktrain.get_learner(combined_model, train_data=train_data, val_data=val_data)
```

The second approach is to wrap our datasets in a subclass of `ktrain.SequenceDataset`. We must be sure to override and implement the required methods (e.g., `def nsamples` and `def get_y`). The `ktrain.SequenceDataset` class is simply a subclass of `tf.keras.utils.Sequence`.
See the TensorFlow documentation on the [Sequence class](https://www.tensorflow.org/api_docs/python/tf/keras/utils/Sequence) for more information on how Sequence wrappers work. We employ the second approach in this tutorial. Note that, in the implementation below, we have made `MyCustomDataset` more general such that it can wrap lists containing an arbitrary number of inputs instead of just the two needed in our example.

```
import math

class MyCustomDataset(ktrain.SequenceDataset):
    def __init__(self, x, y, batch_size=32, shuffle=True):
        # error checks
        err = False
        if type(x) == np.ndarray and len(x.shape) != 2: err = True
        elif type(x) == list:
            for d in x:
                if type(d) != np.ndarray or len(d.shape) != 2:
                    err = True
                    break
        else: err = True
        if err:
            raise ValueError('x must be a 2d numpy array or a list of 2d numpy arrays')
        if type(y) != np.ndarray:
            raise ValueError('y must be a numpy array')
        if type(x) == np.ndarray:
            x = [x]

        # set variables
        super().__init__(batch_size=batch_size)
        self.x, self.y = x, y
        self.indices = np.arange(self.x[0].shape[0])
        self.n_inputs = len(x)
        self.shuffle = shuffle

    # required for instances of tf.keras.utils.Sequence
    def __len__(self):
        return math.ceil(self.x[0].shape[0] / self.batch_size)

    # required for instances of tf.keras.utils.Sequence
    def __getitem__(self, idx):
        inds = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_x = []
        for i in range(self.n_inputs):
            batch_x.append(self.x[i][inds])
        batch_y = self.y[inds]
        return tuple(batch_x), batch_y

    # required for instances of ktrain.Dataset
    def nsamples(self):
        return self.x[0].shape[0]

    # required for instances of ktrain.Dataset
    def get_y(self):
        return self.y

    def on_epoch_end(self):
        if self.shuffle:
            np.random.shuffle(self.indices)
```

Note that you can also add a `to_tfdataset` method to your `ktrain.SequenceDataset` subclass. The `to_tfdataset` method is responsible for converting your dataset to a `tf.data.Dataset` and, if it exists, will be called by *ktrain* just prior to training.
We have not done this here. ## Using the Custom Model and Data Format Once we wrap our data in a `ktrain.SequenceDataset` instance, we can wrap the model and datasets in a `Learner` object and use *ktrain* normally. ``` train_data = MyCustomDataset([extra_train] + [trn[0]], trn[1], shuffle=True) val_data = MyCustomDataset([extra_test] + [val[0]], val[1], shuffle=False) learner = ktrain.get_learner(combined_model, train_data=train_data, val_data=val_data, batch_size=256) ``` ### Estimate Learning Rate We'll choose a learning rate where the loss is falling. As shown in the plot, *1e-3* seems to be a good choice in this case. ``` learner.lr_find(show_plot=True, restore_weights_only=True) ``` ### Train the Model We will now train the model using the estimated learning rate from above for 12 epochs using the [1cycle learning rate policy](https://arxiv.org/pdf/1803.09820.pdf). ``` learner.fit_onecycle(1e-3, 12) ``` Our final validation MAE is **7.82**, which means our predictions are, on average, about $8 off the mark, which is not bad considering our model only looks at the textual description of the wine and the winery. ### Plot Some Training History The validation loss is still decreasing, which suggests we could train further if desired. The second and third plot show the learning rate and momentum schedules employed by `fit_onecycle`. ``` learner.plot('loss') learner.plot('lr') learner.plot('momentum') ``` ### View Top Losses Let's examine the validation examples that we got the most wrong. Looks like our model has trouble with expensive wines. ``` learner.view_top_losses(n=3) print(x_test[21790]) print(x_test[13745]) preds = learner.predict(val_data) preds[13745] ``` ### Making Predictions Lastly, we will use our model to make predictions on 5 randomly selected wines in the validation set. 
``` # 5 random predictions val_data.batch_size = 1 for i in range(5): idx = np.random.choice(len(x_test)) print("TEXT:\n%s" % (x_test[idx])) print() print("\tpredicted: %s" % (np.squeeze(learner.predict(val_data[idx])))) print("\tactual: %s" % (y_test[idx])) print('----------------------------------------') ``` Let's look at our most expensive prediction. Our most expensive prediction (`$404`) is associated with an expensive wine priced at `$800`, which is good. However, we are `~$400` off. Again, our model has trouble with expensive wines. This is somewhat understandable since our model only looks at short textual descriptions and the winery - neither of which contain clear indicators of their exorbitant prices. ``` max_pred_id = np.argmax(preds) print("highest-priced prediction: %s" % (np.squeeze(preds[max_pred_id]))) print("actual price for this wine:%s" % (y_test[max_pred_id])) print('TEXT:\n%s' % (x_test[max_pred_id])) ``` ## Making Predictions on Unseen Examples In the example above, we made predictions for examples in the validation set. To make predictions for an arbitrary set of wine data, the steps are as follows: 1. Encode the winery using the same label encoder used above for validation data 2. Preprocess the wine description using the `preprocess_test` method. In this example, you will use `preproc.preprocess_test`. 3. Combine both into a `ktrain.Dataset` instance, as we did above.
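Step 1 above can be illustrated in isolation. The sketch below uses made-up winery names (standing in for `data['winery']`) to show why unseen examples must be transformed with the *same* fitted `LabelEncoder` used at training time; the encoded array would then be combined with the output of `preproc.preprocess_test` on the descriptions and wrapped in a `MyCustomDataset` before calling `learner.predict`:

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

# Hypothetical winery names standing in for data['winery']
known_wineries = np.array(['Chateau A', 'Bodega B', 'Weingut C'])
encoder = LabelEncoder()
encoder.fit(known_wineries)   # classes_ are stored in sorted order

# New, unseen examples must be transformed with the SAME fitted encoder
new_wineries = np.array(['Weingut C', 'Chateau A'])
extra_new = encoder.transform(new_wineries)
extra_new = np.expand_dims(extra_new, -1)   # shape (n, 1), as the combined model expects
print(extra_new)
```

Note that `LabelEncoder.transform` raises an error for a winery never seen during fitting, so genuinely new wineries would need to be mapped to a reserved index before encoding.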
``` # Useful for debugging %load_ext autoreload %autoreload 2 ``` # Xopt class, Astra kekgun example This is the class method for running Xopt. ``` from xopt import Xopt # Notebook printing output from xopt import output_notebook output_notebook() import yaml from xopt import Xopt YAML=""" xopt: output_path: temp algorithm: name: cnsga options: max_generations: 3 population_size: 8 crossover_probability: 0.9 mutation_probability: 1.0 selection: auto population: null show_progress: True simulation: name: astra_with_generator evaluate: astra.evaluate.evaluate_astra_with_generator options: astra_input_file: ../templates/kekgun/kekgun.in generator_input_file: ../templates/kekgun/dist004.in # Note that you can call another file in the top level group: vocs: variables: sig_x: [0.05, 1] lt: [0.005, 0.07] maxe(1): [20, 50] phi(1): [-30, 30] maxb(1): [0, 0.4] maxe(2): [0, 32] phi(2): [-180, 180] maxb(2): [0, 0.3] maxe(3): [0, 32] maxe(4): [0, 32] phi(3): [-45, 45] phi(4): [-45, 45] phi(6): [-45, 45] constants: ipart: 1000 lspch: true zstop: 16.54 objectives: end_core_emit_95percent_x: MINIMIZE end_sigma_z: MINIMIZE constraints: end_sigma_z: [LESS_THAN, 0.0015] end_core_emit_95percent_x: [LESS_THAN, 9.0e-07] end_sigma_energy: [LESS_THAN, 200000.0] end_higher_order_energy_spread: [LESS_THAN, 5000.0] end_mean_kinetic_energy: [GREATER_THAN, 90000000.0] end_n_particle_loss: [LESS_THAN, 1] linked_variables: null """ !mkdir temp/ # Create object X = Xopt(YAML) # Change some things to make it run faster X.vocs['constants']['lspch'] = True X.vocs['constants']['ipart'] = 1000 X.vocs['constants']['zstop'] = 0.2 # Show config X # Check random inputs X.random_inputs() # Evaluate with some particular settings X.evaluate( {'ipart': 1000, 'lspch': True, 'zstop': 0.2}) %%time # Do a random evaluate to check that everything will run output = X.random_evaluate() output # These are the algorithm options X.algorithm['options'] # These are the options in the evaluate function 
X.simulation['options'] ``` # Run CNSGA using processes or threads ``` from concurrent.futures import ProcessPoolExecutor as PoolExecutor #from concurrent.futures import ThreadPoolExecutor as PoolExecutor executor = PoolExecutor() # Create object X = Xopt(YAML) # Change some things to make it run faster X.vocs['constants']['lspch'] = False X.vocs['constants']['ipart'] = 100 X.vocs['constants']['zstop'] = 0.2 X.results # Run X.run(executor=executor) # Check for errors X.results['error'] ``` # Write this configuration ``` X.config['algorithm']['options']['population'] = 'temp/pop_3.json' X.save('test.yaml') ``` # Run with MPI ``` !mpirun -n 4 python -m mpi4py.futures -m xopt.mpi.run -vv --logfile xopt.log test.yaml ``` # Dask ``` from dask.distributed import Client executor = Client() #executor = Client(processes=True) executor # Wait a few seconds for the Dask cluster to start from time import sleep sleep(5) X.algorithm['options']['max_generations'] = 4 X.algorithm['options']['population_size'] = 32 X.algorithm['options']['population'] = None X.results = None X # Run again X.run(executor=executor) executor.close() X.results['error'] ``` # Plot results ``` import matplotlib.pyplot as plt import numpy as np kx = 'end_sigma_z' ky = 'end_core_emit_95percent_x' x = np.array([d[kx] for d in X.results['outputs']]) y = np.array([d[ky] for d in X.results['outputs']]) plt.scatter(x, y) X.results['outputs'] ``` # Cleanup ``` import shutil import os shutil.rmtree('temp/') os.remove('xopt.log') #os.chmod('dask-worker-space/',0o777) shutil.rmtree('dask-worker-space/') ##os.chmod('test.yaml',0o777) #os.remove('test.yaml') ##os.chmod('NORRAN',0o777) ##os.remove('NORRAN') ```
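Since `end_sigma_z` and `end_core_emit_95percent_x` are both MINIMIZE objectives, the scatter plot above is usually read in terms of its Pareto front. Below is a minimal numpy sketch for picking out the non-dominated points; it is independent of Xopt, and `pareto_front_mask` is a hypothetical helper, not part of the Xopt API:

```python
import numpy as np

def pareto_front_mask(x, y):
    """Boolean mask of non-dominated points when minimizing both objectives."""
    pts = np.column_stack([x, y])
    mask = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        # j dominates i if j is <= in both objectives and strictly < in at least one
        dominates_i = np.all(pts <= pts[i], axis=1) & np.any(pts < pts[i], axis=1)
        if dominates_i.any():
            mask[i] = False
    return mask

x = np.array([1.0, 2.0, 3.0, 0.5])
y = np.array([4.0, 1.0, 2.0, 5.0])
print(pareto_front_mask(x, y))   # point (3.0, 2.0) is dominated by (2.0, 1.0)
```

On real results, `x` and `y` would be the objective arrays extracted from `X.results['outputs']` as in the plotting cell, and the mask could highlight the front, e.g. `plt.scatter(x[mask], y[mask], color='red')`.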
# Predict XRF Mineralogy from wireline logs

In a *regression* problem, we aim to predict the output of a continuous value, like a price or a probability. Contrast this with a *classification* problem, where we aim to select a class from a list of classes (for example, given a picture, recognizing whether it contains an apple or an orange). This example uses the `tf.keras` API; see [this guide](https://www.tensorflow.org/guide/keras) for details.

```
# Use seaborn for pairplot
!pip install -q seaborn
!pip install scikit-learn --upgrade

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import datetime, os

import sklearn
from sklearn.model_selection import cross_val_score, KFold, train_test_split
from sklearn.metrics import mean_squared_error, accuracy_score, max_error, median_absolute_error

# Make numpy printouts easier to read.
np.set_printoptions(precision=3, suppress=True)

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
print(tf.__version__)
```

### Get the data

First import the dataset using pandas:

```
df3 = pd.read_csv('drive/My Drive/1_lewis_research/core_to_wl_merge/OS2_Merged_dataset_imputed_08_23_2021.csv')
raw_dataset = df3
raw_dataset.describe()

raw_dataset2 = raw_dataset[['CAL', 'GR', 'DT','SP','DENS','PE','RESD',
                            'PHIN','PHID','GR_smooth','PE_smooth',
                            'Ti', 'Mg', 'Si', 'Al', 'Ca']]
dataset = raw_dataset2.copy()
dataset.tail()
```

### Clean the data

The dataset contains a few unknown values.

```
dataset.isna().sum()
```

Drop those rows to keep this simple.

```
dataset = dataset.dropna()
```

### Split the data into train and test

Now split the dataset into a training set and a test set. We will use the test set in the final evaluation of our models.
```
train_dataset = dataset.sample(frac=0.75, random_state=0)
test_dataset = dataset.drop(train_dataset.index)
test_dataset
```

### Inspect the data

Have a quick look at the joint distribution of a few pairs of columns from the training set. Looking at the bottom rows it should be clear that the target elements (Ti, Mg, Si, Al, Ca) vary with the wireline log measurements. Looking at the other rows it should be clear that many of the logs are correlated with each other.

```
sns.pairplot(train_dataset[['PE', 'PE_smooth', 'GR_smooth', 'DT', 'RESD','DENS','PHID',
                            'GR', 'Ti', 'Mg', 'Si', 'Al', 'Ca']], diag_kind='kde')
```

Also look at the overall statistics; note how each feature covers a very different range:

```
train_dataset.describe().transpose()
```

### Split features from labels

Separate the target value, the "label", from the features. This label is the value that you will train the model to predict.

```
test_features2 = test_dataset.copy()
test_features = test_features2[['PE', 'PE_smooth', 'GR_smooth', 'DT', 'RESD','DENS','PHID', 'GR']]

train_features2 = train_dataset.copy()
train_features = train_features2[['PE', 'PE_smooth', 'GR_smooth', 'DT', 'RESD','DENS','PHID', 'GR']]

train_labels = train_dataset[['Ti', 'Mg', 'Si', 'Al', 'Ca']]
test_labels = test_features2[['Ti', 'Mg', 'Si', 'Al', 'Ca']]

input_dimm = np.size(test_features.columns)
print('input neurons', input_dimm)
output_dimm = np.size(test_labels.columns)
print('output neurons', output_dimm)
```

## Normalization

In the table of statistics it's easy to see how different the ranges of each feature are.

```
train_dataset.describe().transpose()[['mean', 'std']]
```

It is good practice to normalize features that use different scales and ranges. One reason this is important is because the features are multiplied by the model weights. So the scale of the outputs and the scale of the gradients are affected by the scale of the inputs.
Although a model *might* converge without feature normalization, normalization makes training much more stable. ### The Normalization layer The `preprocessing.Normalization` layer is a clean and simple way to build that preprocessing into your model. The first step is to create the layer: ``` normalizer = preprocessing.Normalization() ``` Then `.adapt()` it to the data: ``` normalizer.adapt(np.array(train_features)) ``` This calculates the mean and variance, and stores them in the layer. ``` print(normalizer.mean.numpy()) ``` When the layer is called it returns the input data, with each feature independently normalized: ``` first = np.array(train_features[:1]) with np.printoptions(precision=2, suppress=True): print('First example:', first) print() print('Normalized:', normalizer(first).numpy()) ``` ### Multiple inputs You can use an almost identical setup to make predictions based on multiple inputs. This model still does the same $y = mx+b$ except that $m$ is a matrix and $b$ is a vector. This time use the `Normalization` layer that was adapted to the whole dataset. ``` linear_model = tf.keras.Sequential([ normalizer, layers.Dense(units=1) ]) ``` ## A DNN regression The previous section implemented linear models for single and multiple inputs. This section implements single-input and multiple-input DNN models. The code is basically the same except the model is expanded to include some "hidden" non-linear layers. The name "hidden" here just means not directly connected to the inputs or outputs. These models will contain a few more layers than the linear model: * The normalization layer. * Two hidden, nonlinear, `Dense` layers using the `relu` nonlinearity. * A linear single-output layer. Both will use the same training procedure so the `compile` method is included in the `build_and_compile_model` function below. 
```
early_stop = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=4)
logdir = os.path.join("logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))

def build_and_compile_model(norm):
    model = keras.Sequential([
        norm,
        layers.Dense(12, input_dim=input_dimm, activation='relu',  # 16 (32 is a step too far?)
                     activity_regularizer=tf.keras.regularizers.L2(0.01)),
        layers.Dropout(.10),  # Maybe not needed with sensible choices
        layers.Dense(24, activation='relu',  # 32
                     activity_regularizer=tf.keras.regularizers.L1L2(l1=0.01, l2=0.01)),
        layers.Dense(output_dimm)  # number of output minerals
    ])
    model.compile(loss=tf.keras.losses.MeanAbsoluteError(),
                  optimizer=tf.keras.optimizers.Adam(0.001))
    return model
```

### Full model

If you repeat this process using all the inputs it slightly improves the performance on the validation dataset.

```
dnn_model = build_and_compile_model(normalizer)
dnn_model.summary()

%%time
history = dnn_model.fit(
    train_features, train_labels,
    validation_split=0.2,
    verbose=0, epochs=200, callbacks=[early_stop])

plt.figure(figsize=(16,8))
plt.plot(history.history['val_loss'], label='val_loss')
plt.plot(history.history['loss'], label='loss')
plt.grid(True)
plt.legend()

# MSE between the training and validation loss curves
loss_gap_mse = mean_squared_error(np.array(history.history['loss']), history.history['val_loss'])

plt.hist((np.array(history.history['val_loss']) - np.array(history.history['loss'])), bins=6)
plt.xlim(-0.75, 0.75)

test_features
test_labels
```

Collect the results on the test set:

```
dnn_model.evaluate(test_features, test_labels, verbose=0)
test_predictions = dnn_model.predict(test_features)
test_labels.values
test_predictions

rmse = mean_squared_error(test_labels.values, test_predictions, squared=False)
print("Root Mean Squared Error: %f" % (rmse))
MAE = median_absolute_error(test_labels.values, test_predictions)
print("Median Absolute Error: %f" % (MAE))
```

# Export

```
x = datetime.datetime.now()
d = {'target': ['MultiXRF'], 'RMSE': [rmse], 'MAE': [MAE], 'day': [x.day], 'month': [x.month],
     'year': [x.year], 'model': ['DNN'], 'version': [tf.__version__]}
results = pd.DataFrame(data=d)
results.to_csv('drive/My Drive/1_lewis_research/analysis/experiments/dnn/dnn_results/OS2_multi_XRF_DNN.csv')
results
```

## Performance

Now that all the models are trained, check the test-set performance and see how they did. These results match the validation error seen during training.

### Make predictions

Finally, have a look at the errors the model makes when predicting on the test set:

```
test_predictions = dnn_model.predict(test_features)
test_predictions
test_labels.values

new_array = test_labels.values - test_predictions
df = pd.DataFrame(data=new_array, columns=test_labels.columns)
df
df.describe(percentiles=[0.10, 0.90])
plt.hist(df.Ti)
```

It looks like the model predicts reasonably well. Now take a look at the error distribution:

```
test_predictions
```

If you're happy with the model, save it for later use:

```
dnn_model.save('dnn_model.h5')
```

If you reload the model, it gives identical output:

```
#reloaded = tf.keras.models.load_model('dnn_model.h5')
#test_results['reloaded'] = reloaded.evaluate(
#    test_features, test_labels, verbose=0)
```
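For reference, the `preprocessing.Normalization` layer used throughout this notebook is plain per-feature standardization. The following numpy sketch illustrates the same transform under that assumption (it is not the Keras implementation, which also adds a small epsilon to the variance):

```python
import numpy as np

def standardize(train, x):
    # per-feature statistics learned from the training data,
    # analogous to what Normalization.adapt() stores
    mean = train.mean(axis=0)
    std = train.std(axis=0)   # population std (ddof=0), matching adapt()'s variance
    return (x - mean) / std

train = np.array([[0.0, 10.0],
                  [2.0, 30.0]])
out = standardize(train, np.array([[3.0, 20.0]]))
print(out)   # [[2. 0.]]
```

Because the layer is baked into `dnn_model`, raw (unscaled) log measurements can be fed directly to `predict`; the same is true after reloading the saved model.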
# Jupyter Graffiti: Introduction and User Manual ### What is Jupyter Graffiti? Jupyter Graffiti are short movies you can add to Notebooks that can illustrate and teach any concept you can think of. It's similar to a screencast, but there's no traditional "video"; instead, movies are "live", meaning they play back whatever you were doing while recording, right in the notebook cells. Viewers can pause your movie any time and play around in your Notebook to try out whatever you're showing them. You can add unlimited Graffiti to text, code, and even images in any Notebook cell. Try out a demo of Graffiti on <a style="font-weight:800;color:blue;" target="_demo" title="Jump to Graffiti Demo" href="https://mybinder.org/v2/gh/willkessler/jupytergraffiti/master?filepath=samples%2FGraffiti%20Basic%20Demo.ipynb">this Binder link</a>. A screenshot of a Graffiti movie in play is shown below. (Note that if you are previewing this user manual on GitHub, the images won't display. You're better off viewing the manual on <a style="font-weight:800;color:blue;" target="_demo" title="Jump to Graffiti Demo" href="https://mybinder.org/v2/gh/willkessler/jupytergraffiti/master?filepath=user_manual/UserManual.ipynb">Binder.org</a>). <div style="background: url(images/pythagorasTip5.png); background-repeat:none; background-size:100% 100%;width:100%;height:890px;"> </div> <br> <br> <hr> You can also add buttons and inline terminals (shells) to help illustrate ideas and techniques and record your activities in these as well: <div style="background: url(images/terminal1.png); background-repeat:none; background-size:100% 100%;width:100%;height:504px;"> </div> ### Playing back Jupyter Graffiti Graffiti are indicated by a <span style="border-bottom:2px dashed rgb(47,147,107);">dashed green underline</span> underneath text or images, as well as a green marker off to the side of the Notebook. 
When the user hovers over the underlined text, they will see a floating tip which gives information and access to play the movie (when a movie is available for the Graffiti). During movie playback, the user is able to "scrub" to any part of the movie (just like a regular video on YouTube), pause the movie, mute the sound, and play the movie at 2x speed. They can also click the red X to cancel playback any time.

<div style="background: url(images/graffitiTipAndMarker.png); background-repeat:none; background-size:100% 100%;width:100%;height:288px;">
</div>

While a Graffiti movie is playing, the user sees a control panel. Here are the controls available to users during movie playback:

<div style="background: url(images/controlPanelExplained2.png); background-repeat:none; background-size:100% 100%;height:510px;">
</div>

### What kinds of things can you record into a movie?

* Your voice (just talk as you record)
* All your mouse movements, page and cell scrolling, clicking, selecting/highlighting
* Any cell execution and its associated output
* Adding and removing cells

### What else can you do with Graffiti?

* You can use the Graffiti tips alone; you don't need to record movies if you don't want to. Sometimes tips provide all the information a viewer needs. For instance, in complicated code you can create tips explaining parts of the code that can take the place of a lot of code comments. You can therefore keep the code much shorter, and users who are interested can hover over the Graffiti to read the tips only if they want to.
* You can add "annotations" like freeform drawings, arrows, boxes and other symbols, even pictures -- basically you can scribble on your Notebook anywhere you like (it's only visible while the movie is playing).
* You can insert "mini-terminals" that give you access to a system shell inside a Notebook cell. All interaction with the "mini-terminals" can be recorded in your movies.
* "Graffiti buttons" can be inserted in cells.
These buttons can play Graffiti movies, reveal solutions to challenges students might be facing, or run a command in the mini-terminals. * Lock the markdown cells: You can "lock" all the markdown cells so that the user cannot edit them. This is helpful if your markdown content is for teaching only, and you want students to stay focused on code cells. ### Setting up Jupyter Graffiti for the First Time Jupyter Graffiti is a Notebook extension. The only requirement is Jupyter Notebooks and your web browser (Chrome/Firefox preferred). Installation of the software is covered in the README.md file at the top of this repository, but you can also try it out without installing anything at <a style="font-weight:800;color:blue;" target="_demo" title="Jump to Graffiti Demo" href="https://mybinder.org/v2/gh/willkessler/jupytergraffiti/master?filepath=samples%2FGraffiti%20Basic%20Demo.ipynb">this Binder link</a>. ### Activating Jupyter Graffiti in a Notebook Load any Notebook and activate Graffiti by simply clicking the Activate Graffiti button. This step just adds some metadata to the notebook (a graffiti `id`) so that the Graffiti extension can connect Graffiti and recordings back to this Notebook. You can activate Graffiti on more than one Notebook in a directory; the Graffiti will all be stored inside the `jupytergraffiti_data` folder in the same directory as the Notebook. <div style="background: url(images/activateGraffiti2.png); background-repeat:none; background-size:100% 100%;width:100%;height:180px;"> </div> ### Creating Graffiti Tips Once Graffiti has been activated for a given Notebook, the "Activate Graffiti" button now becomes the Show Graffiti Editor button. Click this button to show or hide the editor panel. You can drag the editor panel around to get it out of your way if it's covering up important content using the drag handle on the left side of the panel. 
<div style="border:1px solid #ddd;background: url(images/editorPanelWithDataDir.png); background-repeat:no-repeat; height:482px;"> </div> Now, when you select some text in a Notebook code cell, the Graffiti editor panel will change to show only a button that says "Create" on it as shown in the image below. Click this button. A new code cell will appear above your current cell. Enter whatever you want to show up in the Graffiti tip. (You can use markdown formatting here). When you hit control-enter in the Graffiti editor cell, or click "Save Graffiti" in the editor panel, your Graffiti will be saved, the Graffiti editor cell will disappear, and you will see a green underline under the text in the code cell where you made your selection. Mouse over the underlined text to see your new tip. <div style="border:1px solid #ddd;background: url(images/showGraffitiEditor.png); background-size:100% 100%;width:100%;height:200px;"> </div> ### Creating Movie Recordings #### Recording your interactions When you edit a markdown cell and select any text in it, the Editor Panel will show a "Record" button. This allows you to record a movie. Click the Record button. Scroll approximately to where you wish to begin recording, and click anywhere in the Notebook. From now on, all of your activities are being recorded. The Editor Panel will look like what's shown in the screenshot below. You can use the pen, highlight and eraser tools, or expand the stickers part of the panel to get additional markup tools to use while recording. To finish recording, hold down the Option key for about a second. The movie recording will automatically be saved and attached to your Graffiti. 
<div style="border:1px solid #ddd;background: url(images/recordingGraffitiInProgress.png); background-size:100% 100%;width:100%;height:500px;"> </div> #### Annotating while recording While making a recording, you can use the Editor panel to create line drawings, highlight items, or add "stickers" by selecting from the stickers choices. All of these annotations are drawn in "temporary ink" meaning they fade away after a few seconds (unless you uncheck that option in the Editor panel). You can also type text stickers anywhere in your Notebook, using the Text sticker (`Tt` icon), and even create custom image stickers with the `Cs` sticker icon. Keep in mind that all drawings created during a recording only persist while the recording is being played back. After the movie ends, the drawings will disappear until the next time the movie is played. #### Editing and Recording Once you've made a tip or recording, you can always edit it, (re)record a movie, or remove it entirely. Just select somewhere inside the Graffiti text (in markdown cells you will need to edit the Markdown, in code cells, just click inside the graffiti-ized text). <div style="border:1px solid #ddd;background: url(images/editingGraffitiInMarkdown.png); background-size:100% 100%;width:100%;height:250px;"> </div> #### Using Graffiti Directives You can add additional <a href="Directives.ipynb">"directives"</a> to every Graffiti that control how it behaves. All directives are entered in the Graffiti Editor cell when you Edit the Graffiti tip. For instance, the `%%play_on_click` directive will make any movie recorded for a Graffiti begin playing immediately when the Graffiti target (underlined text) is clicked. Each directive must be added on a line by itself, and all of them start with the special prefix `%%`. A list of available directives is given in the Directives documentation. 
#### Locking Markdown cells If you click the Lock icon on the Graffiti Editor panel, you will lock all the content in markdown cells so that it cannot be edited. You may want to do this to prevent students from accidentally modifying instructional content in markdown cells or deleting Graffiti. #### Removing Graffiti Use the Trash icon in the Graffiti panel to delete a Graffiti. You must select in the text of the Graffiti first. If the Graffiti is in a markdown cell then edit the markdown cell, click in the Graffiti text, and then click the Trash icon. If you delete text containing a Graffiti, it will not be removed from the stored Graffiti in the `jupytergraffiti_data` folder, although that's not a big deal. But to be most efficient you should use the Trash icon to remove Graffiti before you delete the text containing the Graffiti. #### Using the API Graffiti includes a Python API that's loaded when you run `import jupytergraffiti` in a code cell. Using this API, you can trigger Graffiti movies to play via your Python code, rather than via user clicks. You can also take other actions on Graffiti. For more information on the API, please consult the <a href="Graffiti API.ipynb">Graffiti API documentation.</a> ### Graffiti Extras If no text is selected in your Notebook the Editor Panel gives you access to several Graffiti "extras" as shown below. <div style="border:1px solid #ddd;background: url(images/graffitiExtras2.png); background-size:100% 100%;width:100%;height:500px;"> </div> ### Changing the Graffiti Data Directory **New** for April 27, 2019: You can now change the data directory path. You will see an icon not shown in the image above for the Editor Panel, but that looks like this: <div style="border:1px solid #ddd;background: url(images/editorPanelWithDataDir.png); background-size:100% 100%;width:200px;height:200px;"> </div> Click the "Home Folder Icon" and you will see a confirmation process by which you can change the path to where Graffiti are stored. 
By default, Graffiti are stored in a folder alongside your notebook called `jupytergraffiti_data`. However, you may wish to store them in another spot. For instance, perhaps you have several notebooks across different sub-folders, but you wish to store all the Graffiti at the top level. Using this function, you can change the location and name of the folder. Note that you cannot use `.` (ie hidden) folders e.g. `.graffiti` because Jupyter will not serve these files up through its built-in web server Tornado. If you wish to distinguish the Graffiti folder from other folders, consider using a prefix like `_` in the folder name, e.g. `../_graffiti-data`. Here's what the process looks like. <div style="border:1px solid #ddd;background: url(images/changeDataDir.png); background-size:100% 100%;width:800px;height:400px;"> </div> <div style="border:1px solid #ddd;background: url(images/changeDataDir2.png); background-size:100% 100%;width:700px;height:180px;"> </div> ### Creating Inline Terminals (Shells) You can insert a Graffiti shell using the shell icon button on the Graffiti Editor panel. It will be inserted before the selected cell. You can control how many lines of text the shell has. Edit the metadata for the cell and change the *"rows"* entry to the number of desired rows: ``` "graffitiConfig": { "rows": 6, "startingDirectory": "samples", "terminalId": "id_73csup4", "type": "terminal" } ``` For instance, in the above metadata the current number of rows is 6. You could, for instance, change the number of rows to 12. In order for the change to "stick", you must save the Notebook and reload the page. If you want all the terminals to use the same linux shell, add a metadata entry at the *Notebook* level. 
Use Jupyter's **Edit...Edit Notebook Metadata** menu, and in the `graffiti` configuration section add a `singleTerminal` entry like this:

```
"graffiti": {
  "firstAuthorId": "dev",
  "id": "id_sz43bzu",
  "language": "EN",
  "singleTerminal": "true"
},
```

Note: you **must** use double-quotes around _true_ in the entry above for this to work.

Each shell cell gets a **Jump to Notebook Dir** link and a **Reset** link underneath it. These links respectively jump the shell to the directory containing the notebook, or destroy the shell cell and replace it with a new one (and a new underlying shell process). Please note that these functions will not work on Windows installations.

### Creating Graffiti Buttons

You can insert a Graffiti Button by clicking the Button icon in the Graffiti Editor panel. If you have selected a markdown cell with an existing Graffiti button in it, a new button will be created alongside the existing one.

To control the Graffiti associated with the button, edit the markdown cell contents. When you click inside the Graffiti text, the Graffiti Editor panel will provide an Edit button you can use to configure the button Graffiti in the same way you configure any Graffiti. You can use <a href="Directives.ipynb">directives</a> to configure what the button does when clicked, for instance, running a command in a Graffiti shell.

### Creating a Graffiti "Suite"

The easiest way to set up shells and buttons to work in concert is a Graffiti Suite. A "Graffiti Suite" is actually a regular code cell, a Graffiti shell, and a Graffiti Button, all wired together. You can create a Suite by clicking the Suite button on the Graffiti Editor panel.

The code cell is set up to autosave its contents to a text file every time they are changed. The Suite's Graffiti Button will run an arbitrary shell command (by default, the button just runs the `cat` command on the text file, but you can change this to anything you like).
Using a Suite, you can provide a simple coding environment for a student, which can then run arbitrary commands on their resulting text files. You can configure the file the code cell's contents are saved to, and what command the button runs, by editing the <a href="Directives.ipynb">directives</a> of the Graffiti Button.

An example set of directives is shown in the code cell below. These directives are used by the "Run Code" button in the Graffiti Demo Notebook.

```
# Here are some example directives for a "Run Code" button
%%play_on_click
%%hide_tooltip
%%save_to_file id_54d409v "./sum_natural.cpp"
%%terminal_command id_up4395w "g++ ./sum_natural.cpp && ./a.out"
```

### Using Show/Hide Graffiti Buttons

Use the Graffiti Editor panel to create a Show/Hide Button, and you can make it easy to show a solution to a student for any problem they're working on. Again, this is a regular Graffiti button, but configured with <a href="Directives.ipynb">directives</a> that will insert a code cell below the cell containing the Graffiti button. Into this code cell Graffiti will insert the contents of any file you wish. Click the Graffiti button again and the solution cell will disappear.

Steps to use this feature:

1. First, create a code cell or markdown cell with the solution contents.
1. Now click the Show/Hide button on the Graffiti Editor panel.
1. Your solution cell will disappear and a Show Solution button will show up instead.
1. Click the Show Solution button to make the solution appear/disappear.

If you want to change your solution, just click Show Solution, modify the solution, and click the Graffiti Editor Panel Show/Hide button to create another Show/Hide solution button based on your modified solution contents.

**Note** that a file for inclusion is created in the `jupytergraffiti_data` folder. You can change the Graffiti directive on your button if you want to include a different file.
### Graffiti Movie "Takes"

Graffiti records each movie as many times as you like, and you can pick the best "take" to show your viewers. After you record a movie, you will see a list of "Takes" in the Editor Panel when you select the Graffiti. The most recent take is the highest number, but you can select an earlier take. Whatever you select will be the take that is played when that Graffiti's movie is viewed. This can be handy when you want to rerecord a movie but reference what you recorded previously.

You can use the "Cleanup Takes" button (with the bathtub icon) to remove any unused takes when you're satisfied with your final take.

### "Skips"

Tapping the option (alt) key while recording begins (or ends) a skip period. During this period, by default, your activities will not be recorded. You may want to do this during a recording to pause for a bit and then come back. Via <a href="Directives.ipynb">directives</a>, you can change all the skips in a movie to accelerations instead. For instance, instead of just skipping over a section, you can make that section play at 4x speed or play through in just half a second.

Because holding down the option key to end recording takes about a second, Graffiti automatically inserts a jump skip at the end of every recording ended by holding down the option key. You cannot control how this skip behaves: Graffiti will always jump over this "tail second" during playback.

### Notes about How Graffiti Works

* Any changes made to the notebook during playback are rolled back when the movie finishes or is cancelled (so students don't lose any of their own work). (Via a directive, you can also make changes "stick" after a movie completes. This is useful if you want a student to pick up and extend your example.)
* Movies only affect the cells interacted with during recording.
If you record a movie that only changes the contents of one cell, no other cells are affected during playback.
* Because all the Graffiti information is stored in text files inside the `jupytergraffiti_data` directory, you can add that directory to a source-code control system like `git` if you want to manage changes that way (including recorded audio).
* All Graffiti in a notebook are loaded when the notebook is loaded (but asynchronously). Because there is no streaming, we don't recommend making 10-minute-long movies; the audio portion of the download would be large. Instead, try to create many short movies of 1-3 minutes so that the user doesn't wait a long time to see your videos begin.
* Graffiti tries to line up scrolling and the cursor as best it can with the cells that were present during the recording. If you delete cells or rearrange the cells, the movies may not play as expected.
* If you want, you can even insert old-skool YouTube videos in the tip via the `%%caption_video_id` directive. This way you can add a talking head to your Graffiti before the user starts playing the Graffiti movie.
* If you split a Notebook into two notebooks, be aware that the Notebooks will share a Graffiti id. Behavior of movies in two notebooks that share an id is unpredictable. Via the API you can make a copy of the first notebook and update all its Graffiti ids to new ids, and then delete Graffiti from the second notebook.
* If you are inserting more than one Graffiti terminal, you can have them all share a single shell. This may help the terminals load faster, but be aware that they are all one Jupyter shell, so whatever you type in one terminal will appear in all of them. The metadata you must modify is at the Notebook level.
* In code cells, Graffiti are tied to the "tokens" you select, not specific characters.
For instance, suppose you add a Graffiti to the second "dog" in the sentence _The cat, who was friends with the dog, refused to go on a walk with the dog_. If you then insert another _dog_ into the sentence, the Graffiti will appear to move, because Graffiti only stores the fact that the second instance of the word "dog" in the code cell carries the Graffiti. E.g., in _The cat and the dog, who was friends with the other dog, refused to go on a walk with the dog_, the Graffiti will now be shown on the second "dog", not the third "dog".

### Sharing Graffiti with Others

You can send a notebook to anyone else as you normally would, and it will be annotated with Graffiti. However, you will also need to send along the `jupytergraffiti_data` folder alongside the notebook, as this contains all the information about the Graffiti and movie recordings. If you install and use the `nbzip` extension, it's easy to download this folder as a compressed (tarball) file for upload elsewhere.

The recipient of your Graffiti-ized notebook will also need to install Graffiti to view your Graffiti. Or they can upload the notebook and the `jupytergraffiti_data` folder to binder.org to view them there.

### Hiding the Graffiti Control Panel

You may not want your users to have access to the Graffiti editing panel. You can choose to hide the button that shows and hides the Graffiti control panel by adjusting the notebook's metadata.

To hide the Graffiti control panel, add a metadata entry at the *Notebook* level like so: use Jupyter's **Edit...Edit Notebook Metadata** menu, and in the `graffiti` section add a `displayControlPanelButton` entry like this:

```
"graffiti": {
  "firstAuthorId": "dev",
  "id": "id_sz43bzu",
  "language": "EN",
  "displayControlPanelButton": "false"
},
```

Note: you **must** use double-quotes around _false_ in the entry above for this to work. If you want the panel to be displayed again, just remove this entry or use "true" instead of "false".

### Who can use Graffiti?
Graffiti is open sourced under the same license as Jupyter Notebook. If you use it, please let us know and spread the word about Graffiti.

<hr>

* Graffiti Version: 1.01
* Date of this Manual First Writing: 04/16/19
* First Update: 4/29/19
* Latest update: 2/8/21
```
import pandas as pd

# Load driver and lap-time data and merge on the shared key
drivers_df = pd.read_csv("../data/raw/drivers.csv")
laps_df = pd.read_csv("../data/raw/lap_times.csv")
Drivers_merge = drivers_df.merge(laps_df, on='driverId')

# Merge constructor info with constructor standings
constructors_df = pd.read_csv("../data/raw/constructors.csv")
constructors_standings = pd.read_csv("../data/raw/constructor_standings.csv")
Constructor_merge = constructors_df.merge(constructors_standings, on='constructorId')

# Add race information, rename the suffixed name columns, and drop URLs
Races_df = pd.read_csv("../data/raw/races.csv")
Constructor_Final = Constructor_merge.merge(Races_df, on='raceId')
Constructor_Final.rename(columns={"name_x": "Constructor_Names", "name_y": "Race_Name"}, inplace=True)
Constructor_Final.drop(columns=["url_x", "url_y"], inplace=True)
Constructor_Final.to_csv("../data/Modified/constructor_final.csv", index=False, header=True)

# Index by constructor and year together; two separate set_index calls
# would discard the first index
Constructor_New = Constructor_Final[['Constructor_Names', 'year', 'wins']]
Constructor_New = Constructor_New.set_index(['Constructor_Names', 'year'])
Constructor = Constructor_New['wins'].max()

# Merge driver standings with driver and race data
driver_standings = pd.read_csv("../data/raw/driver_standings.csv")
Drivers_merge = drivers_df.merge(driver_standings, on='driverId')
Drivers_New = Drivers_merge.merge(Races_df, on='raceId')
Drivers_New.drop(columns=["url_x", "url_y"], inplace=True)
Drivers_New.to_csv("../data/Modified/Driver_Info.csv", header=True, index=False)

# Join with circuit data; drop the duplicate year column first so the
# merge does not produce year_x/year_y
races_circuits_final_df = pd.read_csv("../data/Modified/races_circuits_final.csv")
Drivers_New.drop(columns=['year'], inplace=True)
Drivers_Country_Race_Data = Drivers_New.merge(races_circuits_final_df, on='raceId')
Drivers_Country_Race_Data.drop(columns=['time_x', 'time_y', 'date_x', 'date_y'], inplace=True)

Drivers_Race_Data = Drivers_Country_Race_Data[
    ['raceId', 'driverId', 'forename', 'surname', 'country',
     'race_name', 'points', 'wins', 'year']]
Drivers_Race_Data.to_csv("../data/Modified/Driver_Race_Data.csv", index=False, header=True)

# Build a full-name column, then take the per-year maximum for each driver
Drivers_Race_New = Drivers_Race_Data[['forename', 'surname', 'wins', 'points', 'year']].copy()
Drivers_Race_New['Full_Name'] = Drivers_Race_New['forename'].str.cat(Drivers_Race_New['surname'], sep=" ")
Drivers_Race_New.drop(columns=['forename', 'surname'], inplace=True)
grouped_df = Drivers_Race_New.groupby(['year', 'Full_Name']).max().reset_index()
grouped_df.to_csv("../data/Modified/Driver_Wins_New.csv", index=False, header=True)
```

## Driver Rank Data

```
results_df = pd.read_csv("../data/raw/results.csv")
results_df = results_df[['raceId', 'driverId', 'laps', 'rank']]

# Merge on both raceId and driverId so each result row matches its own race;
# merging on driverId alone would cross-join every result with every race a
# driver entered
Drivers_Rank_Data = results_df.merge(Drivers_Race_Data, on=['raceId', 'driverId'])
Drivers_Rank_Data['rank'].value_counts()
Drivers_Rank_Data.to_csv("../data/Modified/Drivers_Rank_Data.csv", header=True, index=False)
```
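The `name_x`/`name_y` renames above come from pandas' default suffixing of overlapping non-key column names during a merge. A minimal sketch with made-up data (the frame contents and renamed column names here are illustrative, not from the F1 files):

```python
import pandas as pd

left = pd.DataFrame({"raceId": [1, 2], "name": ["Monza", "Spa"]})
right = pd.DataFrame({"raceId": [1, 2], "name": ["Ferrari", "Red Bull"]})

# Overlapping non-key columns get _x/_y suffixes by default
merged = left.merge(right, on="raceId")
print(list(merged.columns))  # ['raceId', 'name_x', 'name_y']

# Renaming afterwards (as in the notebook) restores readable names
merged = merged.rename(columns={"name_x": "race_name", "name_y": "constructor_name"})
print(list(merged.columns))
```

The suffixes can also be customized up front with `merge(..., suffixes=("_race", "_constructor"))`, which avoids the separate rename step.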
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_03_2_keras.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# T81-558: Applications of Deep Neural Networks

**Module 3: Introduction to TensorFlow**

* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).

# Module 3 Material

* Part 3.1: Deep Learning and Neural Network Introduction [[Video]](https://www.youtube.com/watch?v=zYnI4iWRmpc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_03_1_neural_net.ipynb)
* **Part 3.2: Introduction to Tensorflow and Keras** [[Video]](https://www.youtube.com/watch?v=PsE73jk55cE&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_03_2_keras.ipynb)
* Part 3.3: Saving and Loading a Keras Neural Network [[Video]](https://www.youtube.com/watch?v=-9QfbGM1qGw&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_03_3_save_load.ipynb)
* Part 3.4: Early Stopping in Keras to Prevent Overfitting [[Video]](https://www.youtube.com/watch?v=m1LNunuI2fk&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_03_4_early_stop.ipynb)
* Part 3.5: Extracting Weights and Manual Calculation [[Video]](https://www.youtube.com/watch?v=7PWgx16kH8s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_03_5_weights.ipynb)

# Google CoLab Instructions

The following code ensures that Google CoLab is running the correct version of TensorFlow.
```
try:
    %tensorflow_version 2.x
    COLAB = True
    print("Note: using Google CoLab")
except:
    print("Note: not using Google CoLab")
    COLAB = False
```

# Part 3.2: Introduction to Tensorflow and Keras

![TensorFlow](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/class_2_tensorflow.png "TensorFlow")

TensorFlow is an open source software library for machine learning in various kinds of perceptual and language understanding tasks. It is currently used for both research and production by different teams in many commercial Google products, such as speech recognition, Gmail, Google Photos, and search, many of which had previously used its predecessor DistBelief. TensorFlow was originally developed by the Google Brain team for Google's research and production purposes and later released under the Apache 2.0 open source license on November 9, 2015.

* [TensorFlow Homepage](https://www.tensorflow.org/)
* [TensorFlow GitHub](https://github.com/tensorflow/tensorflow)
* [TensorFlow Google Groups Support](https://groups.google.com/forum/#!forum/tensorflow)
* [TensorFlow Google Groups Developer Discussion](https://groups.google.com/a/tensorflow.org/forum/#!forum/discuss)
* [TensorFlow FAQ](https://www.tensorflow.org/resources/faq)

### What version of TensorFlow do you have?

TensorFlow is changing rapidly, so it is very important that you run the same version of it that I am using. For this semester we will use a specific version of TensorFlow (mentioned in the last class notes).
![Self Driving Car](http://imgc-cn.artprintimages.com/images/P-473-488-90/94/9475/CFB6500Z/posters/paul-noth-does-your-car-have-any-idea-why-my-car-pulled-it-over-new-yorker-cartoon.jpg)

[Wrong version of TensorFlow?](https://twitter.com/reza_zadeh/status/849160032608440320)

```
import tensorflow as tf

print("Tensor Flow Version: {}".format(tf.__version__))
```

### Installing TensorFlow

* [Google CoLab](https://colab.research.google.com/) - All platforms, use your browser (includes a GPU).
* Windows - Supported platform.
* Mac - Supported platform.
* Linux - Supported platform.

### Why TensorFlow

* Supported by Google
* Works well on Linux/Mac
* Excellent GPU support
* Python is an easy to learn programming language
* Python is [extremely popular](http://www.kdnuggets.com/2014/08/four-main-languages-analytics-data-mining-data-science.html) in the data science community

### Other Deep Learning Tools

TensorFlow is not the only game in town. These are some of the best supported alternatives. Most of these are written in C++. In order of my own estimation of approximate importance:

* [TensorFlow](https://www.tensorflow.org/) Google's deep learning API. The focus of this class, along with Keras.
* [MXNet](https://mxnet.incubator.apache.org/) Apache foundation's deep learning API. Can be used through Keras.
* [Theano](http://deeplearning.net/software/theano/) - Python, from the academics that created deep learning.
* [Keras](https://keras.io/) - Also by Google, a higher-level framework that allows the use of TensorFlow, MXNet and Theano interchangeably.

[Torch](http://torch.ch/) is used by Google DeepMind, the Facebook AI Research Group, IBM, Yandex and the Idiap Research Institute. It has been used for some of the most advanced deep learning projects in the world. However, it requires the [LUA](https://en.wikipedia.org/wiki/Lua_(programming_language)) programming language. It is very advanced, but it is not mainstream.
I have not worked with Torch (yet!).

* [PaddlePaddle](https://github.com/baidu/Paddle) - [Baidu](http://www.baidu.com/)'s deep learning API.
* [Deeplearning4J](http://deeplearning4j.org/) - Java based. Supports all major platforms. GPU support in Java!
* [Computational Network Toolkit (CNTK)](https://github.com/Microsoft/CNTK) - Microsoft. Support for Windows/Linux, command line only. Bindings for predictions for C#/Python. GPU support.
* [H2O](http://www.h2o.ai/) - Java based. Supports all major platforms. Limited support for computer vision. No GPU support.

### Using TensorFlow

TensorFlow is a low-level mathematics API, similar to [Numpy](http://www.numpy.org/). However, unlike Numpy, TensorFlow is built for deep learning. TensorFlow works by allowing you to define compute graphs with Python. In this regard, it is similar to [Spark](http://spark.apache.org/). TensorFlow compiles these compute graphs into highly efficient C++/[CUDA](https://en.wikipedia.org/wiki/CUDA) code.

The [TensorBoard](https://www.tensorflow.org/versions/r0.10/how_tos/summaries_and_tensorboard/index.html) command line utility can be used to view these graphs. The iris neural network's graph used in this class is shown here:

![Iris Graph](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/class_2_graph_tf.png "Iris Graph")

Expanding the DNN gives:

![Iris DNN Graph](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/class_2_graph_dnn.png "Iris DNN Graph")

### Using TensorFlow Directly

Most of the time in the course we will communicate with TensorFlow using Keras, which allows you to specify the number of hidden layers and simply create the neural network.
```
# Import libraries for simulation
import tensorflow as tf
import numpy as np

# Imports for visualization
import PIL.Image
from io import BytesIO
from IPython.display import Image, display

def DisplayFractal(a, fmt='jpeg'):
    """Display an array of iteration counts as a colorful picture of a fractal."""
    a_cyclic = (6.28*a/20.0).reshape(list(a.shape)+[1])
    img = np.concatenate([10+20*np.cos(a_cyclic),
                          30+50*np.sin(a_cyclic),
                          155-80*np.cos(a_cyclic)], 2)
    img[a == a.max()] = 0
    a = img
    a = np.uint8(np.clip(a, 0, 255))
    f = BytesIO()
    PIL.Image.fromarray(a).save(f, fmt)
    display(Image(data=f.getvalue()))

# Use NumPy to create a 2D array of complex numbers
Y, X = np.mgrid[-1.3:1.3:0.005, -2:1:0.005]
Z = X + 1j*Y

xs = tf.constant(Z.astype(np.complex64))
zs = tf.Variable(xs)
ns = tf.Variable(tf.zeros_like(xs, tf.float32))

# Operation to update the zs and the iteration count.
#
# Note: We keep computing zs after they diverge! This
#       is very wasteful! There are better, if a little
#       less simple, ways to do this.
for i in range(200):
    # Compute the new values of z: z^2 + x
    zs_ = zs*zs + xs
    # Have we diverged with this new value?
    not_diverged = tf.abs(zs_) < 4
    zs.assign(zs_), ns.assign_add(tf.cast(not_diverged, tf.float32))

DisplayFractal(ns.numpy())

import tensorflow as tf

# Create a Constant op that produces a 1x2 matrix. The op is
# added as a node to the default graph.
#
# The value returned by the constructor represents the output
# of the Constant op.
matrix1 = tf.constant([[3., 3.]])

# Create another Constant that produces a 2x1 matrix.
matrix2 = tf.constant([[2.], [2.]])

# Create a Matmul op that takes 'matrix1' and 'matrix2' as inputs.
# The returned value, 'product', represents the result of the matrix
# multiplication.
product = tf.matmul(matrix1, matrix2)
print(product)
print(float(product))

# Enter an interactive TensorFlow Session.
import tensorflow as tf

x = tf.Variable([1.0, 2.0])
a = tf.constant([3.0, 3.0])

# Add an op to subtract 'a' from 'x'.
# Run it and print the result
sub = tf.subtract(x, a)
print(sub)
print(sub.numpy())  # ==> [-2. -1.]

x.assign([4.0, 6.0])
sub = tf.subtract(x, a)
print(sub)
print(sub.numpy())
```

### Introduction to Keras

[Keras](https://keras.io/) is a layer on top of Tensorflow that makes it much easier to create neural networks. Rather than defining the graphs, as you saw above, you define the individual layers of the network with a much higher-level API. Unless you are performing research into entirely new structures of deep neural networks, it is unlikely that you need to program TensorFlow directly.

**For this class, we will usually use TensorFlow through Keras, rather than direct TensorFlow.**

Keras is a separate install from TensorFlow. To install Keras, use **pip install keras** after **pip install tensorflow**.

### Simple TensorFlow Regression: MPG

This example shows how to encode the MPG dataset for regression. This is slightly more complex than Iris, because:

* Input has both numeric and categorical values
* Input has missing values

This example uses functions defined above in this notebook, the "helpful functions". These functions allow you to build the feature vector for a neural network. Consider the following:

* Predictors/Inputs
    * Fill any missing inputs with the median for that column. Use **missing_median**.
    * Encode textual/categorical values with **encode_text_dummy**.
    * Encode numeric values with **encode_numeric_zscore**.
* Output
    * Discard rows with missing outputs.
    * Encode textual/categorical values with **encode_text_index**.
    * Do not encode output numeric values.
* Produce final feature vectors (x) and expected output (y) with **to_xy**.

To encode categorical values that are part of the feature vector, use the functions from above. If the categorical value is the target (as was the case with Iris), use the same technique as Iris. The iris technique allows you to decode back to Iris text strings from the predictions.
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
import pandas as pd
import io
import os
import requests
import numpy as np
from sklearn import metrics

df = pd.read_csv(
    "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv",
    na_values=['NA', '?'])

cars = df['name']

# Handle missing value
df['horsepower'] = df['horsepower'].fillna(df['horsepower'].median())

# Pandas to Numpy
x = df[['cylinders', 'displacement', 'horsepower', 'weight',
        'acceleration', 'year', 'origin']].values
y = df['mpg'].values  # regression

# Build the neural network
model = Sequential()
model.add(Dense(25, input_dim=x.shape[1], activation='relu'))  # Hidden 1
model.add(Dense(10, activation='relu'))  # Hidden 2
model.add(Dense(1))  # Output
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x, y, verbose=2, epochs=100)
```

### Introduction to Neural Network Hyperparameters

If you look at the above code, you will see that the neural network is made up of 4 layers. The first layer is the input layer. Its size is specified by **input_dim**, which is set to the number of inputs that the dataset has. One input neuron is needed for every input (including dummy variables).

However, there are also several hidden layers, with 25 and 10 neurons each. You might be wondering how these numbers were chosen. This is one of the most common questions about neural networks. Unfortunately, there is not a good answer. These are hyperparameters: settings that can affect neural network performance, yet there is not a clearly defined means of setting them.

In general, more hidden neurons mean more capability to fit complex problems. However, too many neurons can lead to overfitting and lengthy training times. Too few can lead to underfitting the problem and will sacrifice accuracy. Also, how many layers you have is another hyperparameter.
In general, more layers allow the neural network to perform more of its own feature engineering and data preprocessing. But this also comes at the expense of training times and risk of overfitting. In general, you will see that neuron counts start out larger near the input layer and tend to shrink towards the output layer in a sort of triangular fashion. There are techniques that use machine learning to optimize these values. These will be discussed in [Module 8.3](t81_558_class_08_3_keras_hyperparameters.ipynb).

### Controlling the Amount of Output

One line is produced for each training epoch. You can eliminate this output by setting the verbose setting of the fit command:

* **verbose=0** - No progress output (use with Jupyter if you do not want output)
* **verbose=1** - Display progress bar; does not work well with Jupyter
* **verbose=2** - Summary progress output (use with Jupyter if you want to know the loss at each epoch)

### Regression Prediction

Next, we will perform actual predictions. These predictions are assigned to the **pred** variable. These are all MPG predictions from the neural network. Notice that this is a 2D array? You can always see the dimensions of what is returned by printing out **pred.shape**. Neural networks can return multiple values, so the result is always an array. Here the neural network only returns 1 value per prediction (there are 398 cars, so 398 predictions). However, a 2D array is needed because the neural network has the potential of returning more than one value.

```
pred = model.predict(x)
print("Shape: {}".format(pred.shape))
print(pred)
```

We would like to see how good these predictions are. We know what the correct MPG is for each car, so we can measure how close the neural network was.

```
# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(pred, y))
print(f"Final score (RMSE): {score}")
```

This means that, on average, the predictions were within about +/- 5.89 MPG of the correct value. This is not very good, but we will soon see how to improve it. We can also print out the first 10 cars, with predictions and actual MPG.

```
# Sample predictions
for i in range(10):
    print(f"{i+1}. Car name: {cars[i]}, MPG: {y[i]}, predicted MPG: {pred[i]}")
```

### Simple TensorFlow Classification: Iris

This is a very simple example of how to perform the Iris classification using TensorFlow. The iris.csv file is used, rather than using the built-in files that many of the Google examples require.

**Make sure that you always run previous code blocks. If you run the code block below without the code block above, you will get errors.**

```
import pandas as pd
import io
import requests
import numpy as np
from sklearn import metrics
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.callbacks import EarlyStopping

df = pd.read_csv(
    "https://data.heatonresearch.com/data/t81-558/iris.csv",
    na_values=['NA', '?'])

# Convert to numpy - Classification
x = df[['sepal_l', 'sepal_w', 'petal_l', 'petal_w']].values
dummies = pd.get_dummies(df['species'])  # Classification
species = dummies.columns
y = dummies.values

# Build neural network
model = Sequential()
model.add(Dense(50, input_dim=x.shape[1], activation='relu'))  # Hidden 1
model.add(Dense(25, activation='relu'))  # Hidden 2
model.add(Dense(y.shape[1], activation='softmax'))  # Output
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(x, y, verbose=2, epochs=100)

# Print out the species found:
print(species)
```

Now that you have a neural network trained, we would like to be able to use it. The following code makes use of our neural network. Exactly like before, we will generate predictions.
Notice that 3 values come back for each of the 150 iris flowers. There were 3 types of iris (Iris-setosa, Iris-versicolor, and Iris-virginica).

```
pred = model.predict(x)
print(f"Shape: {pred.shape}")
print(pred)

# If you would like to turn off scientific notation, the following line can be used:
np.set_printoptions(suppress=True)

# The to_xy function represented the input in the same way. Each row has only one 1.0
# value because each row is only one type of iris. This is the training data; we KNOW
# what type of iris it is. This is called one-hot encoding. Only one value is 1.0 (hot).
print(y[0:10])

# Usually the column (pred) with the highest prediction is considered to be the
# prediction of the neural network. It is easy to convert the predictions to the
# expected iris species. The argmax function finds the index of the maximum
# prediction for each row.
predict_classes = np.argmax(pred, axis=1)
expected_classes = np.argmax(y, axis=1)
print(f"Predictions: {predict_classes}")
print(f"Expected: {expected_classes}")

# Of course it is very easy to turn these indexes back into iris species.
# We just use the species list that we created earlier.
print(species[predict_classes[1:10]])

from sklearn.metrics import accuracy_score

# Accuracy might be a more easily understood error metric. It is essentially a
# test score. For all of the iris predictions, what percent were correct? The
# downside is it does not consider how confident the neural network was in
# each prediction.
correct = accuracy_score(expected_classes, predict_classes)
print(f"Accuracy: {correct}")
```

The code below performs two ad hoc predictions. The first prediction is simply a single iris flower. The second predicts two iris flowers. Notice that the argmax in the second prediction requires **axis=1**? Since we have a 2D array now, we must specify which axis to take the argmax over. The value **axis=1** specifies we want the max column index for each row.
```
# ad hoc prediction
sample_flower = np.array([[5.0, 3.0, 4.0, 2.0]], dtype=float)
pred = model.predict(sample_flower)
print(pred)
pred = np.argmax(pred)
print(f"Predict that {sample_flower} is: {species[pred]}")

# predict two sample flowers
sample_flower = np.array([[5.0, 3.0, 4.0, 2.0],
                          [5.2, 3.5, 1.5, 0.8]], dtype=float)
pred = model.predict(sample_flower)
print(pred)
pred = np.argmax(pred, axis=1)
print(f"Predict that these two flowers {sample_flower} are: {species[pred]}")
```
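The one-hot encode/decode round trip used throughout this section can be shown in isolation. This is a minimal sketch with made-up labels, independent of the iris data and of any trained model:

```python
import numpy as np
import pandas as pd

labels = pd.Series(["setosa", "versicolor", "setosa", "virginica"])

# One-hot encode: one column per class, a single 1 per row
dummies = pd.get_dummies(labels)
species = dummies.columns
y = dummies.values

# Decode: argmax recovers the column index of the 1 in each row,
# and indexing the column names turns indexes back into class names
decoded = species[np.argmax(y, axis=1)]
print(list(decoded))  # ['setosa', 'versicolor', 'setosa', 'virginica']
```

The same `species[np.argmax(..., axis=1)]` pattern works on network outputs, since softmax predictions have the same shape as the one-hot targets.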
# Convolutional Neural Networks ## Project: Write an Algorithm for a Dog Identification App --- In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'(IMPLEMENTATION)'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully! > **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the Jupyter Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. The rubric contains _optional_ "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. 
If you decide to pursue the "Stand Out Suggestions", you should include the code in this Jupyter notebook. --- ### Why We're Here In this notebook, you will take the first steps toward developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that the human most resembles. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!). ![Sample Dog Output](images/sample_dog_output.png) In this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience! ### The Road Ahead We break the notebook into separate steps. Feel free to use the links below to navigate the notebook. * [Step 0](#step0): Import Datasets * [Step 1](#step1): Detect Humans * [Step 2](#step2): Detect Dogs * [Step 3](#step3): Create a CNN to Classify Dog Breeds (from Scratch) * [Step 4](#step4): Create a CNN to Classify Dog Breeds (using Transfer Learning) * [Step 5](#step5): Write your Algorithm * [Step 6](#step6): Test Your Algorithm --- <a id='step0'></a> ## Step 0: Import Datasets Make sure that you've downloaded the required human and dog datasets: * Download the [dog dataset](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/dogImages.zip). Unzip the folder and place it in this project's home directory, at the location `/dogImages`. * Download the [human dataset](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/lfw.zip).
Unzip the folder and place it in the home directory, at location `/lfw`. *Note: If you are using a Windows machine, you are encouraged to use [7zip](http://www.7-zip.org/) to extract the folder.* In the code cell below, we save the file paths for both the human (LFW) dataset and dog dataset in the numpy arrays `human_files` and `dog_files`. ``` import os import numpy as np from glob import glob # load filenames for human and dog images human_files = np.array(glob(os.path.normpath("../Files/lfw/*/*"))) dog_files = np.array(glob(os.path.normpath("../Files/dogImages/*/*/*"))) # print number of images in each dataset print('There are %d total human images.' % len(human_files)) print('There are %d total dog images.' % len(dog_files)) ``` <a id='step1'></a> ## Step 1: Detect Humans In this section, we use OpenCV's implementation of [Haar feature-based cascade classifiers](http://docs.opencv.org/trunk/d7/d8b/tutorial_py_face_detection.html) to detect human faces in images. OpenCV provides many pre-trained face detectors, stored as XML files on [github](https://github.com/opencv/opencv/tree/master/data/haarcascades). We have downloaded one of these detectors and stored it in the `haarcascades` directory. In the next code cell, we demonstrate how to use this detector to find human faces in a sample image. 
``` import cv2 import matplotlib.pyplot as plt %matplotlib inline # extract pre-trained face detector face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml') # load color (BGR) image img = cv2.imread(human_files[0]) # convert BGR image to grayscale gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # find faces in image faces = face_cascade.detectMultiScale(gray) # print number of faces detected in the image print('Number of faces detected:', len(faces)) # get bounding box for each detected face for (x,y,w,h) in faces: # add bounding box to color image cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2) # convert BGR image to RGB for plotting cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # display the image, along with bounding box plt.imshow(cv_rgb) plt.show() ``` Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The `detectMultiScale` function executes the classifier stored in `face_cascade` and takes the grayscale image as a parameter. In the above code, `faces` is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as `x` and `y`) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as `w` and `h`) specify the width and height of the box. ### Write a Human Face Detector We can use this procedure to write a function that returns `True` if a human face is detected in an image and `False` otherwise. This function, aptly named `face_detector`, takes a string-valued file path to an image as input and appears in the code block below. 
``` # returns "True" if face is detected in image stored at img_path def face_detector(img_path): img = cv2.imread(img_path) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) faces = face_cascade.detectMultiScale(gray) return len(faces) > 0 ``` ### (IMPLEMENTATION) Assess the Human Face Detector __Question 1:__ Use the code cell below to test the performance of the `face_detector` function. - What percentage of the first 100 images in `human_files` have a detected human face? - What percentage of the first 100 images in `dog_files` have a detected human face? Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays `human_files_short` and `dog_files_short`. __Answer:__ (You can print out your results and/or write your percentages in this cell) ``` from tqdm import tqdm human_files_short = human_files[:100] dog_files_short = dog_files[:100] #-#-# Do NOT modify the code above this line. #-#-# ## TODO: Test the performance of the face_detector algorithm ## on the images in human_files_short and dog_files_short. 
human_detected = 0 dog_detected = 0 with tqdm(total=len(human_files_short)) as pbar: for human,dog in zip(human_files_short,dog_files_short): if face_detector(human): human_detected += 1 if face_detector(dog): dog_detected += 1 pbar.update(1) print("Human dataset - Detected faces: {}, Actual: {}, Accuracy: {}".format(human_detected,len(human_files_short),(human_detected/len(human_files_short)))) print("Dog dataset - Detected faces: {}, Actual: {}, Accuracy: {}".format(dog_detected,len(dog_files_short),(dog_detected/len(dog_files_short)))) ``` We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this _optional_ task, report performance on `human_files_short` and `dog_files_short`. ``` ### (Optional) ### TODO: Test performance of another face detection algorithm. ### Feel free to use as many code cells as needed. ``` --- <a id='step2'></a> ## Step 2: Detect Dogs In this section, we use a [pre-trained model](http://pytorch.org/docs/master/torchvision/models.html) to detect dogs in images. ### Obtain Pre-trained VGG-16 Model The code cell below downloads the VGG-16 model, along with weights that have been trained on [ImageNet](http://www.image-net.org/), a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of [1000 categories](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a).
``` import torch import torchvision.models as models # define VGG16 model VGG16 = models.vgg16(pretrained=True) # check if CUDA is available use_cuda = torch.cuda.is_available() print(use_cuda) # move model to GPU if CUDA is available if use_cuda: VGG16 = VGG16.cuda() ``` Given an image, this pre-trained VGG-16 model returns a prediction (derived from the 1000 possible categories in ImageNet) for the object that is contained in the image. ### (IMPLEMENTATION) Making Predictions with a Pre-trained Model In the next code cell, you will write a function that accepts a path to an image (such as `'dogImages/train/001.Affenpinscher/Affenpinscher_00001.jpg'`) as input and returns the index corresponding to the ImageNet class that is predicted by the pre-trained VGG-16 model. The output should always be an integer between 0 and 999, inclusive. Before writing the function, make sure that you take the time to learn how to appropriately pre-process tensors for pre-trained models in the [PyTorch documentation](http://pytorch.org/docs/stable/torchvision/models.html). 
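The key preprocessing step for these pretrained models is channel-wise normalization: each channel value x (already scaled to [0, 1] by `ToTensor`) is mapped to (x - mean) / std using the ImageNet statistics. A minimal pure-Python sketch of that arithmetic, with a made-up sample pixel:

```python
# ImageNet per-channel statistics used by torchvision's pretrained models
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

def normalize_pixel(rgb):
    """Normalize one RGB pixel whose channels are already scaled to [0, 1]."""
    return [(c - m) / s for c, m, s in zip(rgb, IMAGENET_MEAN, IMAGENET_STD)]

# Normalizing the mean itself yields zero in every channel
print(normalize_pixel([0.485, 0.456, 0.406]))  # [0.0, 0.0, 0.0]
```

`transforms.Normalize` applies exactly this per-channel shift and scale, but to every pixel of the tensor at once.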
``` from PIL import Image import torchvision.transforms as transforms tr = transforms.Compose([transforms.Resize(225), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) # takes an image path # returns the transformed image tensor def process_image(image_path,transformation): image_path = os.path.normpath(image_path) image = Image.open(image_path) img = transformation(image) return img # takes an image path # displays the image def show_image(img_path): image = Image.open(img_path) plt.imshow(image) plt.show() imgpath = '../Files/dogImages/train/001.Affenpinscher/Affenpinscher_00001.jpg' img = process_image(imgpath,tr) show_image(imgpath) # Set PIL to be tolerant of image files that are truncated. from PIL import ImageFile ImageFile.LOAD_TRUNCATED_IMAGES = True def VGG16_predict(img_path): ''' Use pre-trained VGG-16 model to obtain index corresponding to predicted ImageNet class for image at specified path Args: img_path: path to an image Returns: Index corresponding to VGG-16 model's prediction ''' img_path = os.path.normpath(img_path) ## TODO: Complete the function.
## Load and pre-process an image from the given img_path, reusing the transform pipeline `tr` defined above np_tensor = process_image(img_path,tr) np_tensor.unsqueeze_(0) # add a batch dimension np_tensor = np_tensor.float() if use_cuda: np_tensor = np_tensor.cuda() with torch.no_grad(): output = VGG16(np_tensor) # raw class scores (logits); the argmax is the predicted class top_class = output.argmax(dim=1) ## Return the *index* of the predicted class for that image return int(top_class) # predicted class index VGG16_predict('../Files/dogImages/train/001.Affenpinscher/Affenpinscher_00001.jpg') ``` ### (IMPLEMENTATION) Write a Dog Detector While looking at the [dictionary](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a), you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, covering all categories from `'Chihuahua'` to `'Mexican hairless'`. Thus, in order to check whether an image is predicted to contain a dog by the pre-trained VGG-16 model, we need only check if the pre-trained model predicts an index between 151 and 268 (inclusive). Use these ideas to complete the `dog_detector` function below, which returns `True` if a dog is detected in an image (and `False` if not). ``` ### returns "True" if a dog is detected in the image stored at img_path def dog_detector(img_path): ## TODO: Complete the function. return 151 <= VGG16_predict(img_path) <= 268 ``` ### (IMPLEMENTATION) Assess the Dog Detector __Question 2:__ Use the code cell below to test the performance of your `dog_detector` function. - What percentage of the images in `human_files_short` have a detected dog? - What percentage of the images in `dog_files_short` have a detected dog? __Answer:__ ``` ### TODO: Test the performance of the dog_detector function ### on the images in human_files_short and dog_files_short.
human_detected = 0 dog_detected = 0 with tqdm(total=len(human_files_short)) as pbar: for human,dog in zip(human_files_short,dog_files_short): if dog_detector(human): human_detected += 1 if dog_detector(dog): dog_detected += 1 pbar.update(1) print("Human dataset - Detected dogs: {}, Total: {}, Rate: {}".format(human_detected,len(human_files_short),(human_detected/len(human_files_short)))) print("Dog dataset - Detected dogs: {}, Total: {}, Rate: {}".format(dog_detected,len(dog_files_short),(dog_detected/len(dog_files_short)))) ``` We suggest VGG-16 as a potential network to detect dog images in your algorithm, but you are free to explore other pre-trained networks (such as [Inception-v3](http://pytorch.org/docs/master/torchvision/models.html#inception-v3), [ResNet-50](http://pytorch.org/docs/master/torchvision/models.html#id3), etc). Please use the code cell below to test other pre-trained PyTorch models. If you decide to pursue this _optional_ task, report performance on `human_files_short` and `dog_files_short`. ``` ### (Optional) ### TODO: Report the performance of another pre-trained network. ### Feel free to use as many code cells as needed. ``` --- <a id='step3'></a> ## Step 3: Create a CNN to Classify Dog Breeds (from Scratch) Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN _from scratch_ (so, you can't use transfer learning _yet_!), and you must attain a test accuracy of at least 10%. In Step 4 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy. We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that *even a human* would have trouble distinguishing between a Brittany and a Welsh Springer Spaniel.
Brittany | Welsh Springer Spaniel - | - <img src="images/Brittany_02625.jpg" width="100"> | <img src="images/Welsh_springer_spaniel_08203.jpg" width="200"> It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels). Curly-Coated Retriever | American Water Spaniel - | - <img src="images/Curly-coated_retriever_03896.jpg" width="200"> | <img src="images/American_water_spaniel_00648.jpg" width="200"> Likewise, recall that Labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed. Yellow Labrador | Chocolate Labrador | Black Labrador - | - | - <img src="images/Labrador_retriever_06457.jpg" width="150"> | <img src="images/Labrador_retriever_06455.jpg" width="240"> | <img src="images/Labrador_retriever_06449.jpg" width="220"> We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%. Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun! ### (IMPLEMENTATION) Specify Data Loaders for the Dog Dataset Use the code cell below to write three separate [data loaders](http://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) for the training, validation, and test datasets of dog images (located at `dogImages/train`, `dogImages/valid`, and `dogImages/test`, respectively). You may find [this documentation on custom datasets](http://pytorch.org/docs/stable/torchvision/datasets.html) to be a useful resource.
If you are interested in augmenting your training and/or validation data, check out the wide variety of [transforms](http://pytorch.org/docs/stable/torchvision/transforms.html?highlight=transform)! ``` import os from torchvision import datasets ### TODO: Write data loaders for training, validation, and test sets ## Specify appropriate transforms, and batch_sizes data_dir = os.path.normpath("../Files/dogImages") train_dir = os.path.normpath(data_dir+'/train') val_dir = os.path.normpath(data_dir+'/valid') test_dir = os.path.normpath(data_dir+'/test') train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.Resize(121), transforms.CenterCrop(120), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) test_val_transforms = transforms.Compose([transforms.Resize(121), transforms.CenterCrop(120), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) train_data = datasets.ImageFolder(train_dir, transform=train_transforms) valid_data = datasets.ImageFolder(val_dir, transform=test_val_transforms) test_data = datasets.ImageFolder(test_dir, transform=test_val_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True) validloader = torch.utils.data.DataLoader(valid_data, batch_size=32, shuffle=True) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) loaders_scratch = { 'train':trainloader, 'valid':validloader, 'test':testloader } ``` **Question 3:** Describe your chosen procedure for preprocessing the data. - How does your code resize the images (by cropping, stretching, etc)? What size did you pick for the input tensor, and why? - Did you decide to augment the dataset?
If so, how (through translations, flips, rotations, etc)? If not, why not? **Answer**: The training pipeline applies multiple transformations to vary the input: each image is randomly rotated by up to 30 degrees, resized to 121 pixels and center cropped to 120x120 pixels, and randomly flipped horizontally. These steps provide more variety and complexity in the training dataset so that the network can more accurately handle a larger variety of test images. The testing and validation datasets are resized to 121 pixels and center cropped to 120x120 pixels, with no augmentation. The color channels of all datasets are then normalized with the standard ImageNet per-channel means and standard deviations. ### (IMPLEMENTATION) Model Architecture Create a CNN to classify dog breed. Use the template in the code cell below. ``` import torch.nn as nn import torch.nn.functional as F # define the CNN architecture class Net(nn.Module): ### TODO: choose an architecture, and complete the class def __init__(self): super(Net, self).__init__() ## Define layers of a CNN self.layer1 = nn.Sequential( nn.Conv2d(3, 8, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(kernel_size=4, stride=4)) self.layer2 = nn.Sequential( nn.Conv2d(8, 32, kernel_size=5, stride=1, padding=2), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2)) # self.layer3 = nn.Sequential( # nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=2), # nn.ReLU(), # nn.MaxPool2d(kernel_size=2, stride=2)) # self.layer4 = nn.Sequential( # nn.Conv2d(48, 96, kernel_size=5, stride=1, padding=2), # nn.ReLU(), # nn.MaxPool2d(kernel_size=2, stride=2)) self.drop_out = nn.Dropout(p=0.2) self.fc1 = nn.Linear(15 * 15 * 32, 4096) self.fc2 = nn.Linear(4096, 1000) self.output = nn.Linear(1000,133) def forward(self, x): ## Define forward behavior x = self.layer1(x) x = self.layer2(x) # x = self.layer3(x) # x = self.layer4(x) x = x.reshape(x.size(0), -1) x = self.drop_out(F.relu(self.fc1(x))) x = self.drop_out(F.relu(self.fc2(x))) x =
F.log_softmax(self.output(x), dim=1) return x #-#-# You do NOT have to modify the code below this line. #-#-# # instantiate the CNN model_scratch = Net() # move tensors to GPU if CUDA is available if use_cuda: model_scratch.cuda() ``` __Question 4:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step. __Answer:__ ### (IMPLEMENTATION) Specify Loss Function and Optimizer Use the next code cell to specify a [loss function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [optimizer](http://pytorch.org/docs/stable/optim.html). Save the chosen loss function as `criterion_scratch`, and the optimizer as `optimizer_scratch` below. ``` import torch.optim as optim ### TODO: select loss function # the model's forward pass ends in log_softmax, so NLLLoss is the matching criterion criterion_scratch = nn.NLLLoss() ### TODO: select optimizer optimizer_scratch = optim.Adam(model_scratch.parameters(), lr=0.001) ``` ### (IMPLEMENTATION) Train and Validate the Model Train and validate your model in the code cell below. [Save the final model parameters](http://pytorch.org/docs/master/notes/serialization.html) at filepath `'model_scratch.pt'`.
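As a quick sanity check on the `fc1` input size used in `Net` above, the flattened feature count can be derived with the standard convolution/pooling output-size formula. This sketch assumes the 120x120 center crops produced by the data loaders in the previous step:

```python
def out_size(size, kernel, stride, padding=0):
    """Spatial output size of a conv/pool layer: floor((size + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

s = 120                           # input crop size from the data loaders
s = out_size(s, 3, 1, padding=1)  # layer1 conv: 3x3, stride 1, pad 1 -> 120
s = out_size(s, 4, 4)             # layer1 max pool: 4x4, stride 4    -> 30
s = out_size(s, 5, 1, padding=2)  # layer2 conv: 5x5, stride 1, pad 2 -> 30
s = out_size(s, 2, 2)             # layer2 max pool: 2x2, stride 2    -> 15

flat = s * s * 32                 # 32 output channels from layer2
print(s, flat)                    # 15 7200, matching nn.Linear(15 * 15 * 32, 4096)
```

Running this kind of check before training catches the most common scratch-CNN bug: a fully connected layer whose input size does not match the flattened convolutional output.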
``` # the following import is required for training to be robust to truncated images import time from PIL import ImageFile ImageFile.LOAD_TRUNCATED_IMAGES = True def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path): """returns trained model""" # initialize tracker for minimum validation loss valid_loss_min = np.Inf for epoch in range(1, n_epochs+1): # initialize variables to monitor training and validation loss train_loss = 0.0 valid_loss = 0.0 accuracy = 0 start = time.time() ################### # train the model # ################### model.train() for batch_idx, (data, target) in enumerate(loaders['train']): train_start = time.time() # move to GPU if use_cuda: data, target = data.cuda(), target.cuda() ## find the loss and update the model parameters accordingly optimizer.zero_grad() log_ps = model(data) loss = criterion(log_ps,target) loss.backward() optimizer.step() ## record the average training loss, using something like train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss)) # print("TRAIN - Epoch: {}/{}\tBatch: {}/{}\tTime: {}".format(epoch,n_epochs,batch_idx+1,len(loaders['train']),time.time()-train_start)) ###################### # validate the model # ###################### model.eval() with torch.no_grad(): for batch_idx, (data, target) in enumerate(loaders['valid']): valid_start = time.time() # move to GPU if use_cuda: data, target = data.cuda(), target.cuda() log_ps = model(data) loss = criterion(log_ps,target) ## update the average validation loss valid_loss = valid_loss + ((1 / (batch_idx + 1)) * (loss.data - valid_loss)) # print("VALID - Epoch: {}/{}\tBatch: {}/{}\tTime: {}".format(epoch,n_epochs,batch_idx+1,len(loaders['valid']),time.time()-valid_start)) ps = torch.exp(log_ps) top_p,top_class = ps.topk(1,dim=1) equals = top_class == target.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) # print training/validation statistics print('Epoch: {} \tTraining Loss: {:.6f} 
\tValidation Loss: {:.6f} \t Accuracy: {:.4f}%'.format( epoch, train_loss, valid_loss, (accuracy/len(loaders['valid']))*100 )) print('Time Elapsed: {} seconds\n'.format(time.time()-start)) ## TODO: save the model if validation loss has decreased if valid_loss < valid_loss_min: valid_loss_min = valid_loss torch.save(model.state_dict(), save_path) # return trained model return model # train the model model_scratch = train(20, loaders_scratch, model_scratch, optimizer_scratch, criterion_scratch, use_cuda, 'model_scratch.pt') # load the model that got the best validation accuracy model_scratch.load_state_dict(torch.load('model_scratch.pt')) ``` ### (IMPLEMENTATION) Test the Model Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 10%. ``` def test(loaders, model, criterion, use_cuda): # monitor test loss and accuracy test_loss = 0. correct = 0. total = 0. model.eval() for batch_idx, (data, target) in enumerate(loaders['test']): # move to GPU if use_cuda: data, target = data.cuda(), target.cuda() # forward pass: compute predicted outputs by passing inputs to the model output = model(data) # calculate the loss loss = criterion(output, target) # update average test loss test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss)) # convert output probabilities to predicted class pred = output.data.max(1, keepdim=True)[1] # compare predictions to true label correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy()) total += data.size(0) print('Test Loss: {:.6f}\n'.format(test_loss)) print('\nTest Accuracy: %2d%% (%2d/%2d)' % ( 100. 
* correct / total, correct, total)) # call test function test(loaders_scratch, model_scratch, criterion_scratch, use_cuda) ``` --- <a id='step4'></a> ## Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning) You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set. ### (IMPLEMENTATION) Specify Data Loaders for the Dog Dataset Use the code cell below to write three separate [data loaders](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader) for the training, validation, and test datasets of dog images (located at `dogImages/train`, `dogImages/valid`, and `dogImages/test`, respectively). If you like, **you are welcome to use the same data loaders from the previous step**, when you created a CNN from scratch. ``` ## TODO: Specify data loaders data_dir = '../Files/dogImages' train_dir = os.path.normpath(data_dir+'/train') val_dir = os.path.normpath(data_dir+'/valid') test_dir = os.path.normpath(data_dir+'/test') train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.Resize(225), transforms.CenterCrop(224), # transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) test_val_transforms = transforms.Compose([transforms.Resize(225), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) # train_data = datasets.ImageFolder(train_dir, transform=train_transforms) train_data = datasets.ImageFolder(train_dir, transform=train_transforms) valid_data = datasets.ImageFolder(val_dir, transform=test_val_transforms) test_data = datasets.ImageFolder(test_dir, transform=test_val_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True) validloader = torch.utils.data.DataLoader(valid_data, batch_size=32, shuffle=True) testloader = 
torch.utils.data.DataLoader(test_data, batch_size=32) loaders_transfer = { 'train':trainloader, 'valid':validloader, 'test':testloader } # test model with dataloaders from previous step # loaders_transfer = loaders_scratch ``` ### (IMPLEMENTATION) Model Architecture Use transfer learning to create a CNN to classify dog breed. Use the code cell below, and save your initialized model as the variable `model_transfer`. ``` import torchvision.models as models import torch.nn as nn ## TODO: Specify model architecture model_transfer = models.vgg16(pretrained=True) for param in model_transfer.parameters(): param.requires_grad = False ## 120x120 pixels classifier = nn.Sequential(nn.Linear(4608, 2304), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(2304, 665), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(665, 133), nn.LogSoftmax(dim=1)) ## 224x224 pixels classifier_big = nn.Sequential(nn.Linear(25088, 3072), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(3072, 1024), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(1024, 306), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(306, 133), nn.LogSoftmax(dim=1)) classifier = classifier_big model_transfer.classifier = classifier if use_cuda: model_transfer.cuda() ``` __Question 5:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem. __Answer:__ ### (IMPLEMENTATION) Specify Loss Function and Optimizer Use the next code cell to specify a [loss function](http://pytorch.org/docs/master/nn.html#loss-functions) and [optimizer](http://pytorch.org/docs/master/optim.html). Save the chosen loss function as `criterion_transfer`, and the optimizer as `optimizer_transfer` below. ``` criterion_transfer = nn.NLLLoss() optimizer_transfer = optim.Adam(model_transfer.classifier.parameters(),lr=0.001) ``` ### (IMPLEMENTATION) Train and Validate the Model Train and validate your model in the code cell below. 
[Save the final model parameters](http://pytorch.org/docs/master/notes/serialization.html) at filepath `'model_transfer.pt'`. ``` # train the model n_epochs = 10 model_transfer = train(n_epochs, loaders_transfer, model_transfer, optimizer_transfer, criterion_transfer, use_cuda, 'model_transfer.pt') # load the model that got the best validation accuracy # (this path assumes the saved checkpoint was moved to ../Files/) checkpoint = os.path.normpath('../Files/model_transfer.pt') device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model_transfer.load_state_dict(torch.load(checkpoint,map_location=device)) ``` ### (IMPLEMENTATION) Test the Model Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 60%. ``` test(loaders_transfer, model_transfer, criterion_transfer, use_cuda) ``` ### (IMPLEMENTATION) Predict Dog Breed with the Model Write a function that takes an image path as input and returns the dog breed (`Affenpinscher`, `Afghan hound`, etc) that is predicted by your model. ``` import random def generate_image_map(dog_files): image_map = {} for file in dog_files: name = os.path.basename(os.path.dirname(file)) name = name.split('.')[1] if name in image_map.keys(): image_map[name].append(file) else: image_map[name] = [file] return image_map def get_breed_image(prediction,image_map): pred = prediction.replace(" ","_") images = image_map[pred] img = random.choice(images) return img img_map = generate_image_map(dog_files) ### TODO: Write a function that takes a path to an image as input ### and returns the dog breed that is predicted by the model. # list of class names by index, i.e.
a name can be accessed like class_names[0] class_names = [item[4:].replace("_", " ") for item in train_data.classes] def predict_breed_transfer(img_path): # load the image and return the predicted breed np_tensor = process_image(img_path,test_val_transforms) np_tensor.unsqueeze_(0) np_tensor = np_tensor.float() if use_cuda: np_tensor = np_tensor.cuda() output = model_transfer(np_tensor) pred = output.data.max(1, keepdim=True)[1] pred = pred.cpu() ps = torch.exp(output) top_p,top_class = ps.topk(4,dim=1) top_p = top_p.cpu().detach().numpy() top_class = top_class.cpu().numpy() return class_names[int(pred.numpy())] def predict_breed_transfer2(img_path): # load the image and return the top-4 class probabilities and indices np_tensor = process_image(img_path,test_val_transforms) np_tensor.unsqueeze_(0) np_tensor = np_tensor.float() if use_cuda: np_tensor = np_tensor.cuda() output = model_transfer(np_tensor) ps = torch.exp(output) top_p,top_class = ps.topk(4,dim=1) top_p = top_p.cpu().detach().numpy().reshape(-1) top_class = top_class.cpu().numpy().reshape(-1) return top_p,top_class def normalize_predictions(top_p,top_class): # rescale the top-k probabilities so they sum to 1 norm_preds = {} total = top_p.sum() for p,c in zip(top_p,top_class): norm_preds[c] = p/total return norm_preds def filter_matches(matches): filtered = {'Other':0} for key,val in matches.items(): if val*100 >= 5: filtered[class_names[key]] = val else: filtered['Other'] += val return filtered def show_thumbnail(img_path): image = Image.open(img_path) plt.figure(figsize = (2,2)) plt.imshow(image) plt.xticks([]) plt.yticks([]) plt.show() file = human_files[random.randint(0,len(human_files)-1)] show_thumbnail(file) ## test def predict2(file): matches = {} for i in range(100): top_p,top_class = predict_breed_transfer2(file) preds = normalize_predictions(top_p,top_class)
for key,val in preds.items(): if key in matches.keys(): matches[key] += val else: matches[key] = val best = 0 best_match = '' for key,val in matches.items(): matches[key] = val/100 if val > best: best = val best_match = class_names[key] comparison = get_breed_image(best_match,img_map) print("{:.4f}% sure you are a {}\nYour image:".format(best,best_match)) show_image(file) print("{} reference image:".format(best_match)) show_image(comparison) return best_match,filter_matches(matches) file = dog_files[random.randint(0,len(dog_files)-1)] # file = human_files[random.randint(0,len(human_files)-1)] best_match,matches = predict2(file) def show_other_resemblances(best_match,matches): print("Other resemblances . . .\n") for key,val in matches.items(): if key != best_match and key != 'Other': print("{:.2f}% {}".format(val*100,key)) img = get_breed_image(key,img_map) show_thumbnail(img) show_other_resemblances(best_match,matches) ## test def predict(file): matches = {} for i in range(100): prediction = predict_breed_transfer(file) if prediction in matches.keys(): matches[prediction] += 1 else: matches[prediction] = 1 best = 0 best_match = '' for key,val in matches.items(): if val > best: best = val best_match = key comparison = get_breed_image(best_match,img_map) print("{}% sure you are a {}".format(best,best_match)) show_image(file) print(comparison) show_image(comparison) # test index = random.randint(0,len(dog_files)-1) file = dog_files[index] predict(file) file = os.path.normpath('../Files/princess.jpg') predict(file) ``` --- <a id='step5'></a> ## Step 5: Write your Algorithm Write an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then, - if a __dog__ is detected in the image, return the predicted breed. - if a __human__ is detected in the image, return the resembling dog breed. - if __neither__ is detected in the image, provide output that indicates an error.
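The three cases above reduce to a small dispatcher. A minimal sketch, with the detectors and breed predictor passed in as callables (the lambdas below are stand-ins for illustration only; the real `dog_detector`, `face_detector`, and breed predictor come from earlier cells in the notebook):

```python
def classify_image(img_path, dog_detector, face_detector, predict_breed):
    """Dispatch: dog -> predicted breed, human -> resembling breed, neither -> error."""
    if dog_detector(img_path):
        return "dog", predict_breed(img_path)
    if face_detector(img_path):
        return "human", predict_breed(img_path)
    return "error", None

# Stub detectors for illustration: classify by a keyword in the file path.
kind, breed = classify_image(
    "samples/dog_01.jpg",
    dog_detector=lambda p: "dog" in p,
    face_detector=lambda p: "human" in p,
    predict_breed=lambda p: "Beagle",
)
print(kind, breed)  # dog Beagle
```

Keeping the detectors as arguments makes the dispatch logic testable without loading any model weights.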
You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the `face_detector` and `dog_detector` functions developed above. You are __required__ to use your CNN from Step 4 to predict dog breed. Some sample output for our algorithm is provided below, but feel free to design your own user experience! ![Sample Human Output](images/sample_human_output.png) ### (IMPLEMENTATION) Write your Algorithm ``` ### TODO: Write your algorithm. ### Feel free to use as many code cells as needed. def run_app(img_path): ## handle cases for a human face, dog, and neither if dog_detector(img_path): text = "Woof Woof hello Doggy!" elif face_detector(img_path) > 0: text = "Hello human!" else: print("Error, couldn't detect a dog or person") return print(text + " Determining your breed . . .") best_match,matches = predict2(img_path) show_other_resemblances(best_match,matches) ``` --- <a id='step6'></a> ## Step 6: Test Your Algorithm In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that _you_ look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog? ### (IMPLEMENTATION) Test Your Algorithm on Sample Images! Test your algorithm on at least six images from your computer. Feel free to use any images you like. Use at least two human and two dog images. __Question 6:__ Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm. __Answer:__ (Three possible points for improvement) ``` ## TODO: Execute your algorithm from Step 6 on ## at least 6 images on your computer. ## Feel free to use as many code cells as needed. ## suggested code, below for file in np.hstack((human_files[:3], dog_files[:3])): run_app(file) ```
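The `predict` helper from Step 4 aggregates repeated model predictions by majority vote. Stripped of the model, the vote itself is just a frequency count over a list of per-run predictions; a minimal sketch (the run counts below are hypothetical):

```python
from collections import Counter

def majority_vote(predictions):
    """Return (best_class, vote_share) from a list of per-run predictions."""
    counts = Counter(predictions)
    best_class, votes = counts.most_common(1)[0]
    return best_class, votes / len(predictions)

# 100 hypothetical runs of the classifier on a single image
runs = ["Beagle"] * 72 + ["Basset hound"] * 20 + ["Bloodhound"] * 8
best, confidence = majority_vote(runs)
print(best, confidence)  # Beagle 0.72
```

Reporting the vote share as a confidence only makes sense when the repeated runs actually differ, e.g. when dropout stays active at inference time.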
# Title `{To be changed}` ## Setup and import libraries ``` # Automatically reloading imported modules %load_ext autoreload %autoreload 2 import sys sys.path.append('../..') import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from src.helpers import * pd.set_option('display.max_columns', None) # Set plot style sns.set(style="whitegrid") # Change sizes and resolution of plots plt.rcParams['figure.figsize'] = (10, 6) %config InlineBackend.figure_format='retina' plt.rcParams.update({'font.size': 15}) # Hide warnings import warnings warnings.filterwarnings('ignore') ``` ## Load the data ``` df = pd.read_csv('example/data.csv') ``` ## General descriptive analysis Let's check the shape of the data (number of rows and attributes): ``` df.shape ``` Overview of the data: ``` df.head() ``` ### Datatypes **Note:** Be careful: attributes containing only NaN values are treated as `float64` by default. ``` df.dtypes ``` ### Basic characteristics ``` df.describe() df.describe(exclude=[np.number]) ``` ### One-value columns Which attributes contain only one value? ``` one_value_attributes_analysis(df) ``` ### Missing values Analysis of missing values in attributes: ``` missing_values_analysis(df) ``` ### Duplicates Are there any duplicates? ``` df.duplicated().any() ``` ## Attributes analysis Analysis of all attributes: ``` skip_attributes = [ ] # attributes to skip in analysis (e.g. id) textual_attributes = [ ] # attributes with text values (e.g.
content of article) textual_attributes = list(filter(lambda value: value not in skip_attributes, textual_attributes)) numerical_attributes = list(df.select_dtypes([np.number]).columns) numerical_attributes = list(filter(lambda value: value not in textual_attributes + skip_attributes, numerical_attributes)) categorical_attributes = list(df.select_dtypes(['object', 'category', 'bool']).columns) categorical_attributes = list(filter(lambda value: value not in textual_attributes + skip_attributes, categorical_attributes)) label_column = 'example_label' # attribute considered as "label" ``` ### Label attribute distribution ``` df[label_column].value_counts().plot(kind='pie', title='Distribution of predicted classes'); df[label_column].value_counts().plot(kind='bar', title='Distribution of predicted classes'); ``` ### Numerical attributes Analysis of numerical attributes: ``` analyse_numerical_attributes(df, label_column, numerical_attributes) ``` ### Categorical attributes Analysis of categorical attributes: ``` analyse_categorical_attributes(df, label_column, categorical_attributes) ``` ### Textual attributes Some parts of the analysis involve text preprocessing. In this case, the following operations are performed: * removing special characters (only letters are preserved), * removing tokens shorter than 3 characters, * removing tokens that appear in the English stop-word list defined by the NLTK library, * removing accent marks from tokens. Analysis of textual attributes: ``` analyse_textual_attributes(df, textual_attributes) ``` ## Pairwise analysis Pairwise analysis of numerical attributes: ### Pair analysis ``` if numerical_attributes and len(numerical_attributes) > 1: sns.pairplot(df, vars=numerical_attributes, hue=label_column); ``` ### Correlations Correlation matrix: ``` if numerical_attributes and len(numerical_attributes) > 1: check_correlations(df, numerical_attributes) ```
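`check_correlations` comes from the project's own `src.helpers` module, so its exact behavior is an assumption; a minimal stand-in computes the Pearson correlation matrix over the numerical attributes with pandas (in the notebook, `sns.heatmap(corr, annot=True)` could then render it as the usual heatmap):

```python
import numpy as np
import pandas as pd

def check_correlations_sketch(df, numerical_attributes):
    """Pearson correlation matrix over the selected numerical columns."""
    return df[numerical_attributes].corr(method="pearson")

# Tiny synthetic frame: 'b' is a linear function of 'a', 'c' is independent noise.
rng = np.random.default_rng(0)
a = rng.normal(size=100)
demo = pd.DataFrame({"a": a, "b": 2 * a + 1, "c": rng.normal(size=100)})
corr = check_correlations_sketch(demo, ["a", "b", "c"])
print(corr.round(2))
```

Perfectly linear pairs show a coefficient of 1.0 on the off-diagonal, which is the kind of redundancy this section is meant to surface.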
``` from src.model.nli_models import * from src.model.novelty_models import * from src.defaults import * from torchtext.data import Example import pandas as pd import numpy as np import html import random from IPython.core.display import display, HTML from IPython.display import IFrame import numpy as np import matplotlib import matplotlib.pyplot as plt import warnings from transformers import BertTokenizer, DistilBertTokenizer warnings.filterwarnings("ignore") def encode_text(text,field): ex = Example.fromlist([text],[("text",field)]) enc = field.process([ex.text]) return torch.tensor(enc) def load_novelty_model(_id): # load model data check_model(_id) def load_model_data(_id): model_path = os.path.join("./results/", _id, "model.pt") model_data = torch.load(model_path) return model_data field = load_field(_id) model_data = load_model_data(_id) encoder_id = model_data["options"]["load_nli"] check_model(encoder_id) def load_encoder(enc_data): if enc_data["options"].get("attention_layer_param", 0) == 0: enc_data["options"]["use_glove"] = False model = bilstm_snli(enc_data["options"]) elif enc_data["options"].get("r", 0) == 0: enc_data["options"]["use_glove"] = False model = attn_bilstm_snli(enc_data["options"]) else: enc_data["options"]["use_glove"] = False model = struc_attn_snli(enc_data["options"]) model.load_state_dict(enc_data["model_dict"]) return model enc_data = load_encoder_data(encoder_id) encoder = load_encoder(enc_data).encoder model = HAN(model_data["options"],encoder) model.load_state_dict(model_data["model_dict"]) return model,field def decode(inp,field): if hasattr(field.nesting_field,"vocab"): return [[field.nesting_field.vocab.itos[i] for i in sent] for sent in inp] else: tok = DistilBertTokenizer.from_pretrained("distilbert-base-uncased") return [tok.convert_ids_to_tokens(i) for i in inp.tolist()] def attention_combined(inp,field,s_att,w_att=None): tok_str = decode(inp,field) assert len(tok_str) == s_att.shape[0] assert len(tok_str) == 
w_att.shape[0] assert len(tok_str[0]) == w_att.shape[1] opt = [] for sent in range(len(tok_str)): sent_with_att = [] for word in range(len(tok_str[0])): word_str = tok_str[sent][word] if word_str not in ["<pad>",'[PAD]']: sent_with_att.append((word_str,w_att[sent][word].item())) if sent_with_att!=[]: opt.append((sent_with_att,s_att[sent].item())) return opt def html_string(word,color,new_line = False): template = '<span class="barcode"; style="color: black; background-color: {}">{}</span>' colored_string = template.format(color, '&nbsp' + word + '&nbsp') + ("<br>" if new_line else "") return colored_string def colorize(attention_list): cmap_sent = matplotlib.cm.Blues cmap_word = matplotlib.cm.Reds template = '<span class="barcode"; style="color: black; background-color: {}">{}</span>' colored_string = '' for sent, sent_att in attention_list: sent_color = matplotlib.colors.rgb2hex(cmap_sent(sent_att*5)[:3]) colored_string += html_string('\t---\t ',sent_color) for word,word_att in sent: word_color = matplotlib.colors.rgb2hex(cmap_word(word_att)[:3]) colored_string += html_string(word,word_color) colored_string += "<br>" colored_string += "<br><br><br>" return colored_string seed_torch() def plot_attention(src,trg,model,field,true_cls = False,return_html=False,cuda=False): cmap_word = matplotlib.cm.inferno s_enc = encode_text(src,field) t_enc = encode_text(trg,field) template = '<span class="barcode"; style="color: black; background-color: {}">{}</span>' model.eval() with torch.no_grad(): if cuda == True: s_enc = s_enc.cuda() t_enc = t_enc.cuda() opt,s_att,t_att = model.forward_with_attn(s_enc,t_enc) pred = F.softmax(opt) pred = pred.cpu() s_att = [i.cpu() for i in s_att] t_att = [i.cpu() for i in t_att] src_att_map = attention_combined(s_enc[0],field,s_att[0].permute((1,0)),s_att[1][0]) trg_att_map = attention_combined(t_enc[0],field,t_att[0].permute((1,0)),t_att[1][0]) s_html = colorize(src_att_map) t_html = colorize(trg_att_map) if pred[0][0].item()>0.5: prob = 
pred[0][0].item() pred_str = "Prediction : " +str(pred[0][0].item())+ " Non-Novel" else: prob = pred[0][1].item() pred_str = "Prediction : " +str(pred[0][1].item())+ " Novel" col = matplotlib.colors.rgb2hex(cmap_word(prob)[:3]) pred_html = template.format(col,pred_str) if true_cls: pred_html += "<br> " +template.format(col," True Class : "+true_cls) if return_html: return s_html+t_html+ "<br><br><br>"+pred_html, pred[0] with open('colorize.html', 'w') as f: f.write(s_html+t_html+ "<br><br><br>"+pred_html ) def disp_attention(): IFrame('./colorize.html',width=1200,height=400) model,field = load_novelty_model('NOV-1146') # 54,46 source = "We also experimented with the document encoder to find if document level pretraining has any impact on the novelty detection performance. We train our document encoder described in on the Reuters dataset with an objective of 10 class classification. The reuters dataset aligns with the dataset we use for novelty detection, the Reuters dataset contains news articles which are to be classified into categories like Investment, Shipping, Crop, Oil and so on" target = "Identifing each of these classes requires the ability to extract features which tell which industry the news is related to. We hypothesise that this information is also essential while calculating the novelty of a document, since knowing if the target document is talking about the same thing or topic is also important. This can be seen as assisting the information filtering task. For this experiment we have 3 settings, we test the impact with and without pretraining for Reuters dataset and Reuters+NLI dataset combined. The settings used are listed below." 
a = plot_attention(source,target,model,field) IFrame('./colorize.html',width=2200,height=1000) import json with open('.data/dlnd/TAP-DLND-1.0_LREC2018_modified/dlnd.jsonl','r') as f: items = f.readlines() data = [json.loads(i) for i in items] example = data[120] print("Prediction:") plot_attention(example["source"],example["target_text"],model,field,example["DLA"]) print("Actual:") example["DLA"] IFrame('./colorize.html',width=2200,height=2000) lens = [] for i in data: lens.append(len(i['source'])) print(lens.index(min(lens))) lens = [(i,lens[i]) for i in range(len(lens))] model.cuda() from tqdm import tqdm def predict(data,model,field): wrong_pred_path = './results/all_pred/wrong_pred' correct_pred_path = './results/all_pred/correct_pred' if not os.path.exists(correct_pred_path): os.makedirs(wrong_pred_path) os.makedirs(correct_pred_path) for i in tqdm(range(len(data))): src = data[i]['source'] trg = data[i]['target_text'] true = data[i]['DLA'] html_str,pred = plot_attention(src,trg,model,field,true_cls = true,return_html=True,cuda=True) pred_lab = "Non-Novel" if pred[0]>0.5 else "Novel" if pred_lab!=true: html_path = os.path.join(wrong_pred_path,str(i)+".html") with open(html_path,'w') as f: f.write(html_str) else: html_path = os.path.join(correct_pred_path,str(i)+".html") with open(html_path,'w') as f: f.write(html_str) model.cuda() from tqdm import tqdm def predict(data,model,field): wrong_id = [] for i in tqdm(range(len(data))): src = data[i]['source'] trg = data[i]['target_text'] true = data[i]['DLA'] s_enc = encode_text(src,field) t_enc = encode_text(trg,field) model.eval() with torch.no_grad(): opt,s_att,t_att = model.forward_with_attn(s_enc.cuda(),t_enc.cuda()) pred = F.softmax(opt)[0][1].item() if pred > 0.5: pred = "Novel" else: pred = "Non-Novel" if pred!=true: wrong_id.append(i) return wrong_id wrong_id = predict(data,model,field) model.cpu() c=0 for i in sorted(lens,key = lambda x:x[1]): c+=1 if i[0] in wrong_id: print(i) break wrong_id[0] a = 
12.92931979 ```
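The `html_string`/`colorize` helpers above boil down to mapping each attention weight through a matplotlib colormap to a background color. A self-contained sketch of that word-level step (the function name and template are mine, not the notebook's):

```python
from matplotlib import cm, colors

def words_to_html(words_with_attention):
    """Render (word, attention) pairs as HTML spans shaded by the Reds colormap."""
    template = '<span style="color: black; background-color: {}">&nbsp;{}&nbsp;</span>'
    spans = []
    for word, att in words_with_attention:
        # colormap returns RGBA in [0, 1]; drop alpha and convert to a hex string
        color = colors.rgb2hex(cm.Reds(att)[:3])
        spans.append(template.format(color, word))
    return "".join(spans)

html = words_to_html([("novel", 0.9), ("document", 0.1)])
print(html)
```

Higher attention weights map to darker reds, which is what makes the rendered passage readable at a glance.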
# Disease Prediction based on Symptoms ``` #Importing Libraries from mpl_toolkits.mplot3d import Axes3D from sklearn.preprocessing import StandardScaler import matplotlib.pyplot as plt from tkinter import * import numpy as np import pandas as pd import os #List of the symptoms in list l1 (names match the dataset columns, including their spelling). l1=['back_pain','constipation','abdominal_pain','diarrhoea','mild_fever','yellow_urine', 'yellowing_of_eyes','acute_liver_failure','fluid_overload','swelling_of_stomach', 'swelled_lymph_nodes','malaise','blurred_and_distorted_vision','phlegm','throat_irritation', 'redness_of_eyes','sinus_pressure','runny_nose','congestion','chest_pain','weakness_in_limbs', 'fast_heart_rate','pain_during_bowel_movements','pain_in_anal_region','bloody_stool', 'irritation_in_anus','neck_pain','dizziness','cramps','bruising','obesity','swollen_legs', 'swollen_blood_vessels','puffy_face_and_eyes','enlarged_thyroid','brittle_nails', 'swollen_extremeties','excessive_hunger','extra_marital_contacts','drying_and_tingling_lips', 'slurred_speech','knee_pain','hip_joint_pain','muscle_weakness','stiff_neck','swelling_joints', 'movement_stiffness','spinning_movements','loss_of_balance','unsteadiness', 'weakness_of_one_body_side','loss_of_smell','bladder_discomfort','foul_smell_of urine', 'continuous_feel_of_urine','passage_of_gases','internal_itching','toxic_look_(typhos)', 'depression','irritability','muscle_pain','altered_sensorium','red_spots_over_body','belly_pain', 'abnormal_menstruation','dischromic _patches','watering_from_eyes','increased_appetite','polyuria','family_history','mucoid_sputum', 'rusty_sputum','lack_of_concentration','visual_disturbances','receiving_blood_transfusion', 'receiving_unsterile_injections','coma','stomach_bleeding','distention_of_abdomen', 'history_of_alcohol_consumption','fluid_overload','blood_in_sputum','prominent_veins_on_calf', 'palpitations','painful_walking','pus_filled_pimples','blackheads','scurring','skin_peeling',
'silver_like_dusting','small_dents_in_nails','inflammatory_nails','blister','red_sore_around_nose', 'yellow_crust_ooze'] #List of Diseases is listed in list disease. disease=['Fungal infection', 'Allergy', 'GERD', 'Chronic cholestasis', 'Drug Reaction', 'Peptic ulcer diseae', 'AIDS', 'Diabetes ', 'Gastroenteritis', 'Bronchial Asthma', 'Hypertension ', 'Migraine', 'Cervical spondylosis', 'Paralysis (brain hemorrhage)', 'Jaundice', 'Malaria', 'Chicken pox', 'Dengue', 'Typhoid', 'hepatitis A', 'Hepatitis B', 'Hepatitis C', 'Hepatitis D', 'Hepatitis E', 'Alcoholic hepatitis', 'Tuberculosis', 'Common Cold', 'Pneumonia', 'Dimorphic hemmorhoids(piles)', 'Heart attack', 'Varicose veins', 'Hypothyroidism', 'Hyperthyroidism', 'Hypoglycemia', 'Osteoarthristis', 'Arthritis', '(vertigo) Paroymsal Positional Vertigo', 'Acne', 'Urinary tract infection', 'Psoriasis', 'Impetigo'] #disease = [df['prognosis'].unique()] #print(disease) l2=[] for i in range(0,len(l1)): l2.append(0) print(l2) #Reading the training .csv file df=pd.read_csv("training.csv") DF= pd.read_csv('training.csv', index_col='prognosis') #Replace the values in the imported file by pandas by the inbuilt function replace in pandas. 
df.replace({'prognosis':{'Fungal infection':0,'Allergy':1,'GERD':2,'Chronic cholestasis':3,'Drug Reaction':4, 'Peptic ulcer diseae':5,'AIDS':6,'Diabetes ':7,'Gastroenteritis':8,'Bronchial Asthma':9,'Hypertension ':10, 'Migraine':11,'Cervical spondylosis':12, 'Paralysis (brain hemorrhage)':13,'Jaundice':14,'Malaria':15,'Chicken pox':16,'Dengue':17,'Typhoid':18,'hepatitis A':19, 'Hepatitis B':20,'Hepatitis C':21,'Hepatitis D':22,'Hepatitis E':23,'Alcoholic hepatitis':24,'Tuberculosis':25, 'Common Cold':26,'Pneumonia':27,'Dimorphic hemmorhoids(piles)':28,'Heart attack':29,'Varicose veins':30,'Hypothyroidism':31, 'Hyperthyroidism':32,'Hypoglycemia':33,'Osteoarthristis':34,'Arthritis':35, '(vertigo) Paroymsal Positional Vertigo':36,'Acne':37,'Urinary tract infection':38,'Psoriasis':39, 'Impetigo':40}},inplace=True) #df.head() DF.head() # Distribution graphs (histogram/bar graph) of column data def plotPerColumnDistribution(df1, nGraphShown, nGraphPerRow): nunique = df1.nunique() df1 = df1[[col for col in df1 if nunique[col] > 1 and nunique[col] < 50]] # For displaying purposes, pick columns that have between 1 and 50 unique values nRow, nCol = df1.shape columnNames = list(df1) nGraphRow = (nCol + nGraphPerRow - 1) // nGraphPerRow plt.figure(num = None, figsize = (6 * nGraphPerRow, 8 * nGraphRow), dpi = 80, facecolor = 'w', edgecolor = 'k') for i in range(min(nCol, nGraphShown)): plt.subplot(nGraphRow, nGraphPerRow, i + 1) columnDf = df1.iloc[:, i] if (not np.issubdtype(type(columnDf.iloc[0]), np.number)): valueCounts = columnDf.value_counts() valueCounts.plot.bar() else: columnDf.hist() plt.ylabel('counts') plt.xticks(rotation = 90) plt.title(f'{columnNames[i]} (column {i})') plt.tight_layout(pad = 1.0, w_pad = 1.0, h_pad = 1.0) plt.show() # Scatter and density plots def plotScatterMatrix(df1, plotSize, textSize): df1 = df1.select_dtypes(include =[np.number]) # keep only numerical columns # Remove rows and columns that would lead to df being singular df1 =
df1.dropna(axis='columns') df1 = df1[[col for col in df1 if df1[col].nunique() > 1]] # keep columns where there are more than 1 unique values columnNames = list(df1) if len(columnNames) > 10: # reduce the number of columns for matrix inversion of kernel density plots columnNames = columnNames[:10] df1 = df1[columnNames] ax = pd.plotting.scatter_matrix(df1, alpha=0.75, figsize=[plotSize, plotSize], diagonal='kde') corrs = df1.corr().values for i, j in zip(*np.triu_indices_from(ax, k = 1)): ax[i, j].annotate('Corr. coef = %.3f' % corrs[i, j], (0.8, 0.2), xycoords='axes fraction', ha='center', va='center', size=textSize) plt.suptitle('Scatter and Density Plot') plt.show() plotPerColumnDistribution(df, 10, 5) plotScatterMatrix(df, 20, 10) X= df[l1] y = df[["prognosis"]] np.ravel(y) print(X) print(y) #Reading the testing.csv file tr=pd.read_csv("testing.csv") #Using inbuilt function replace in pandas for replacing the values tr.replace({'prognosis':{'Fungal infection':0,'Allergy':1,'GERD':2,'Chronic cholestasis':3,'Drug Reaction':4, 'Peptic ulcer diseae':5,'AIDS':6,'Diabetes ':7,'Gastroenteritis':8,'Bronchial Asthma':9,'Hypertension ':10, 'Migraine':11,'Cervical spondylosis':12, 'Paralysis (brain hemorrhage)':13,'Jaundice':14,'Malaria':15,'Chicken pox':16,'Dengue':17,'Typhoid':18,'hepatitis A':19, 'Hepatitis B':20,'Hepatitis C':21,'Hepatitis D':22,'Hepatitis E':23,'Alcoholic hepatitis':24,'Tuberculosis':25, 'Common Cold':26,'Pneumonia':27,'Dimorphic hemmorhoids(piles)':28,'Heart attack':29,'Varicose veins':30,'Hypothyroidism':31, 'Hyperthyroidism':32,'Hypoglycemia':33,'Osteoarthristis':34,'Arthritis':35, '(vertigo) Paroymsal Positional Vertigo':36,'Acne':37,'Urinary tract infection':38,'Psoriasis':39, 'Impetigo':40}},inplace=True) tr.head() plotPerColumnDistribution(tr, 10, 5) plotScatterMatrix(tr, 20, 10) X_test= tr[l1] y_test = tr[["prognosis"]] np.ravel(y_test) print(X_test) print(y_test) ``` **To improve the precision of the model, we evaluated four different algorithms
which are as per the following** * Decision Tree algorithm * Random Forest algorithm * KNearestNeighbour algorithm * Naive Bayes algorithm ``` #list1 = DF['prognosis'].unique() def scatterplt(disea): x = ((DF.loc[disea]).sum())#total sum of symptom reported for given disease x.drop(x[x==0].index,inplace=True)#droping symptoms with values 0 print(x.values) y = x.keys()#storing nameof symptoms in y print(len(x)) print(len(y)) plt.title(disea) plt.scatter(y,x.values) plt.show() def scatterinp(sym1,sym2,sym3,sym4,sym5): x = [sym1,sym2,sym3,sym4,sym5]#storing input symptoms in y y = [0,0,0,0,0]#creating and giving values to the input symptoms if(sym1!='Select Here'): y[0]=1 if(sym2!='Select Here'): y[1]=1 if(sym3!='Select Here'): y[2]=1 if(sym4!='Select Here'): y[3]=1 if(sym5!='Select Here'): y[4]=1 print(x) print(y) plt.scatter(x,y) plt.show() ``` # Decision Tree Algorithm ``` root = Tk() pred1=StringVar() def DecisionTree(): if len(NameEn.get()) == 0: pred1.set(" ") comp=messagebox.askokcancel("System","Kindly Fill the Name") if comp: root.mainloop() elif((Symptom1.get()=="Select Here") or (Symptom2.get()=="Select Here")): pred1.set(" ") sym=messagebox.askokcancel("System","Kindly Fill atleast first two Symptoms") if sym: root.mainloop() else: from sklearn import tree clf3 = tree.DecisionTreeClassifier() clf3 = clf3.fit(X,y) from sklearn.metrics import classification_report,confusion_matrix,accuracy_score y_pred=clf3.predict(X_test) print("Decision Tree") print("Accuracy") print(accuracy_score(y_test, y_pred)) print(accuracy_score(y_test, y_pred,normalize=False)) print("Confusion matrix") conf_matrix=confusion_matrix(y_test,y_pred) print(conf_matrix) psymptoms = [Symptom1.get(),Symptom2.get(),Symptom3.get(),Symptom4.get(),Symptom5.get()] for k in range(0,len(l1)): for z in psymptoms: if(z==l1[k]): l2[k]=1 inputtest = [l2] predict = clf3.predict(inputtest) predicted=predict[0] h='no' for a in range(0,len(disease)): if(predicted == a): h='yes' break if (h=='yes'): 
pred1.set(" ") pred1.set(disease[a]) else: pred1.set(" ") pred1.set("Not Found") #Creating the database if not exists named as database.db and creating table if not exists named as DecisionTree using sqlite3 import sqlite3 conn = sqlite3.connect('database.db') c = conn.cursor() c.execute("CREATE TABLE IF NOT EXISTS DecisionTree(Name StringVar,Symtom1 StringVar,Symtom2 StringVar,Symtom3 StringVar,Symtom4 TEXT,Symtom5 TEXT,Disease StringVar)") c.execute("INSERT INTO DecisionTree(Name,Symtom1,Symtom2,Symtom3,Symtom4,Symtom5,Disease) VALUES(?,?,?,?,?,?,?)",(NameEn.get(),Symptom1.get(),Symptom2.get(),Symptom3.get(),Symptom4.get(),Symptom5.get(),pred1.get())) conn.commit() c.close() conn.close() #printing scatter plot of input symptoms #printing scatter plot of disease predicted vs its symptoms scatterinp(Symptom1.get(),Symptom2.get(),Symptom3.get(),Symptom4.get(),Symptom5.get()) scatterplt(pred1.get()) ``` # Random Forest Algorithm ``` pred2=StringVar() def randomforest(): if len(NameEn.get()) == 0: pred1.set(" ") comp=messagebox.askokcancel("System","Kindly Fill the Name") if comp: root.mainloop() elif((Symptom1.get()=="Select Here") or (Symptom2.get()=="Select Here")): pred1.set(" ") sym=messagebox.askokcancel("System","Kindly Fill atleast first two Symptoms") if sym: root.mainloop() else: from sklearn.ensemble import RandomForestClassifier clf4 = RandomForestClassifier(n_estimators=100) clf4 = clf4.fit(X,np.ravel(y)) # calculating accuracy from sklearn.metrics import classification_report,confusion_matrix,accuracy_score y_pred=clf4.predict(X_test) print("Random Forest") print("Accuracy") print(accuracy_score(y_test, y_pred)) print(accuracy_score(y_test, y_pred,normalize=False)) print("Confusion matrix") conf_matrix=confusion_matrix(y_test,y_pred) print(conf_matrix) psymptoms = [Symptom1.get(),Symptom2.get(),Symptom3.get(),Symptom4.get(),Symptom5.get()] for k in range(0,len(l1)): for z in psymptoms: if(z==l1[k]): l2[k]=1 inputtest = [l2] predict = 
clf4.predict(inputtest) predicted=predict[0] h='no' for a in range(0,len(disease)): if(predicted == a): h='yes' break if (h=='yes'): pred2.set(" ") pred2.set(disease[a]) else: pred2.set(" ") pred2.set("Not Found") #Creating the database if not exists named as database.db and creating table if not exists named as RandomForest using sqlite3 import sqlite3 conn = sqlite3.connect('database.db') c = conn.cursor() c.execute("CREATE TABLE IF NOT EXISTS RandomForest(Name StringVar,Symtom1 StringVar,Symtom2 StringVar,Symtom3 StringVar,Symtom4 TEXT,Symtom5 TEXT,Disease StringVar)") c.execute("INSERT INTO RandomForest(Name,Symtom1,Symtom2,Symtom3,Symtom4,Symtom5,Disease) VALUES(?,?,?,?,?,?,?)",(NameEn.get(),Symptom1.get(),Symptom2.get(),Symptom3.get(),Symptom4.get(),Symptom5.get(),pred2.get())) conn.commit() c.close() conn.close() #printing scatter plot of disease predicted vs its symptoms scatterplt(pred2.get()) ``` # KNearestNeighbour Algorithm ``` pred4=StringVar() def KNN(): if len(NameEn.get()) == 0: pred1.set(" ") comp=messagebox.askokcancel("System","Kindly Fill the Name") if comp: root.mainloop() elif((Symptom1.get()=="Select Here") or (Symptom2.get()=="Select Here")): pred1.set(" ") sym=messagebox.askokcancel("System","Kindly Fill atleast first two Symptoms") if sym: root.mainloop() else: from sklearn.neighbors import KNeighborsClassifier knn=KNeighborsClassifier(n_neighbors=5,metric='minkowski',p=2) knn=knn.fit(X,np.ravel(y)) from sklearn.metrics import classification_report,confusion_matrix,accuracy_score y_pred=knn.predict(X_test) print("kNearest Neighbour") print("Accuracy") print(accuracy_score(y_test, y_pred)) print(accuracy_score(y_test, y_pred,normalize=False)) print("Confusion matrix") conf_matrix=confusion_matrix(y_test,y_pred) print(conf_matrix) psymptoms = [Symptom1.get(),Symptom2.get(),Symptom3.get(),Symptom4.get(),Symptom5.get()] for k in range(0,len(l1)): for z in psymptoms: if(z==l1[k]): l2[k]=1 inputtest = [l2] predict = knn.predict(inputtest) 
        predicted = predict[0]

        h = 'no'
        for a in range(0, len(disease)):
            if predicted == a:
                h = 'yes'
                break

        if h == 'yes':
            pred4.set(" ")
            pred4.set(disease[a])
        else:
            pred4.set(" ")
            pred4.set("Not Found")

        # Create the database file database.db (if it does not exist) and the
        # KNearestNeighbour table (if it does not exist) using sqlite3
        import sqlite3
        conn = sqlite3.connect('database.db')
        c = conn.cursor()
        c.execute("CREATE TABLE IF NOT EXISTS KNearestNeighbour(Name StringVar,Symtom1 StringVar,Symtom2 StringVar,Symtom3 StringVar,Symtom4 TEXT,Symtom5 TEXT,Disease StringVar)")
        c.execute("INSERT INTO KNearestNeighbour(Name,Symtom1,Symtom2,Symtom3,Symtom4,Symtom5,Disease) VALUES(?,?,?,?,?,?,?)",(NameEn.get(),Symptom1.get(),Symptom2.get(),Symptom3.get(),Symptom4.get(),Symptom5.get(),pred4.get()))
        conn.commit()
        c.close()
        conn.close()

        # Plot a scatter plot of the predicted disease vs. its symptoms
        scatterplt(pred4.get())
```

# Naive Bayes Algorithm

```
pred3 = StringVar()

def NaiveBayes():
    if len(NameEn.get()) == 0:
        pred1.set(" ")
        comp = messagebox.askokcancel("System", "Kindly Fill the Name")
        if comp:
            root.mainloop()
    elif (Symptom1.get() == "Select Here") or (Symptom2.get() == "Select Here"):
        pred1.set(" ")
        sym = messagebox.askokcancel("System", "Kindly Fill at least the first two Symptoms")
        if sym:
            root.mainloop()
    else:
        from sklearn.naive_bayes import GaussianNB
        gnb = GaussianNB()
        gnb = gnb.fit(X, np.ravel(y))

        from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
        y_pred = gnb.predict(X_test)
        print("Naive Bayes")
        print("Accuracy")
        print(accuracy_score(y_test, y_pred))
        print(accuracy_score(y_test, y_pred, normalize=False))
        print("Confusion matrix")
        conf_matrix = confusion_matrix(y_test, y_pred)
        print(conf_matrix)

        psymptoms = [Symptom1.get(), Symptom2.get(), Symptom3.get(), Symptom4.get(), Symptom5.get()]
        for k in range(0, len(l1)):
            for z in psymptoms:
                if z == l1[k]:
                    l2[k] = 1

        inputtest = [l2]
        predict = gnb.predict(inputtest)
        predicted = predict[0]

        h = 'no'
        for a in range(0, len(disease)):
            if predicted == a:
                h = 'yes'
                break

        if h == 'yes':
            pred3.set(" ")
            pred3.set(disease[a])
        else:
            pred3.set(" ")
            pred3.set("Not Found")

        # Create the database file database.db (if it does not exist) and the
        # NaiveBayes table (if it does not exist) using sqlite3
        import sqlite3
        conn = sqlite3.connect('database.db')
        c = conn.cursor()
        c.execute("CREATE TABLE IF NOT EXISTS NaiveBayes(Name StringVar,Symtom1 StringVar,Symtom2 StringVar,Symtom3 StringVar,Symtom4 TEXT,Symtom5 TEXT,Disease StringVar)")
        c.execute("INSERT INTO NaiveBayes(Name,Symtom1,Symtom2,Symtom3,Symtom4,Symtom5,Disease) VALUES(?,?,?,?,?,?,?)",(NameEn.get(),Symptom1.get(),Symptom2.get(),Symptom3.get(),Symptom4.get(),Symptom5.get(),pred3.get()))
        conn.commit()
        c.close()
        conn.close()

        # Plot a scatter plot of the predicted disease vs. its symptoms
        scatterplt(pred3.get())
```

# Building Graphical User Interface

```
# Tk class is used to create a root window
root = Tk()
root.configure(background='Ivory')
root.title('Smart Disease Predictor System')
root.resizable(0, 0)

Symptom1 = StringVar()
Symptom1.set("Select Here")
Symptom2 = StringVar()
Symptom2.set("Select Here")
Symptom3 = StringVar()
Symptom3.set("Select Here")
Symptom4 = StringVar()
Symptom4.set("Select Here")
Symptom5 = StringVar()
Symptom5.set("Select Here")
Name = StringVar()

prev_win = None

def Reset():
    global prev_win

    Symptom1.set("Select Here")
    Symptom2.set("Select Here")
    Symptom3.set("Select Here")
    Symptom4.set("Select Here")
    Symptom5.set("Select Here")

    NameEn.delete(first=0, last=100)

    pred1.set(" ")
    pred2.set(" ")
    pred3.set(" ")
    pred4.set(" ")

    try:
        prev_win.destroy()
        prev_win = None
    except AttributeError:
        pass

from tkinter import messagebox

def Exit():
    qExit = messagebox.askyesno("System", "Do you want to exit the system")
    if qExit:
        root.destroy()
        exit()

# Headings for the GUI written at the top of the GUI
w2 = Label(root, justify=LEFT, text="Disease Predictor using Machine Learning", fg="Red", bg="Ivory")
w2.config(font=("Times", 30, "bold italic"))
w2.grid(row=1, column=0, columnspan=2, padx=100)
w2 = Label(root, justify=LEFT, text="Creator: VINAY GUPTA ", fg="Pink", bg="Ivory")
w2.config(font=("Times", 30, "bold italic"))
w2.grid(row=2, column=0, columnspan=2, padx=100)

# Label for the name
NameLb = Label(root, text="Name of the Patient *", fg="Red", bg="Ivory")
NameLb.config(font=("Times", 15, "bold italic"))
NameLb.grid(row=6, column=0, pady=15, sticky=W)

# Creating labels for the symptoms
S1Lb = Label(root, text="Symptom 1 *", fg="Black", bg="Ivory")
S1Lb.config(font=("Times", 15, "bold italic"))
S1Lb.grid(row=7, column=0, pady=10, sticky=W)

S2Lb = Label(root, text="Symptom 2 *", fg="Black", bg="Ivory")
S2Lb.config(font=("Times", 15, "bold italic"))
S2Lb.grid(row=8, column=0, pady=10, sticky=W)

S3Lb = Label(root, text="Symptom 3", fg="Black", bg="Ivory")
S3Lb.config(font=("Times", 15, "bold italic"))
S3Lb.grid(row=9, column=0, pady=10, sticky=W)

S4Lb = Label(root, text="Symptom 4", fg="Black", bg="Ivory")
S4Lb.config(font=("Times", 15, "bold italic"))
S4Lb.grid(row=10, column=0, pady=10, sticky=W)

S5Lb = Label(root, text="Symptom 5", fg="Black", bg="Ivory")
S5Lb.config(font=("Times", 15, "bold italic"))
S5Lb.grid(row=11, column=0, pady=10, sticky=W)

# Labels for the different algorithms
lrLb = Label(root, text="DecisionTree", fg="white", bg="red", width=20)
lrLb.config(font=("Times", 15, "bold italic"))
lrLb.grid(row=15, column=0, pady=10, sticky=W)

destreeLb = Label(root, text="RandomForest", fg="Red", bg="Orange", width=20)
destreeLb.config(font=("Times", 15, "bold italic"))
destreeLb.grid(row=17, column=0, pady=10, sticky=W)

ranfLb = Label(root, text="NaiveBayes", fg="White", bg="green", width=20)
ranfLb.config(font=("Times", 15, "bold italic"))
ranfLb.grid(row=19, column=0, pady=10, sticky=W)

knnLb = Label(root, text="kNearestNeighbour", fg="Red", bg="Sky Blue", width=20)
knnLb.config(font=("Times", 15, "bold italic"))
knnLb.grid(row=21, column=0, pady=10, sticky=W)

OPTIONS = sorted(l1)

# Taking name as input from user
NameEn = Entry(root, textvariable=Name)
NameEn.grid(row=6, column=1)

# Taking symptoms as input from the dropdowns
S1 = OptionMenu(root, Symptom1, *OPTIONS)
S1.grid(row=7, column=1)

S2 = OptionMenu(root, Symptom2, *OPTIONS)
S2.grid(row=8, column=1)

S3 = OptionMenu(root, Symptom3, *OPTIONS)
S3.grid(row=9, column=1)

S4 = OptionMenu(root, Symptom4, *OPTIONS)
S4.grid(row=10, column=1)

S5 = OptionMenu(root, Symptom5, *OPTIONS)
S5.grid(row=11, column=1)

# Buttons for predicting the disease using the different algorithms
dst = Button(root, text="Prediction 1", command=DecisionTree, bg="Red", fg="yellow")
dst.config(font=("Times", 15, "bold italic"))
dst.grid(row=6, column=3, padx=10)

rnf = Button(root, text="Prediction 2", command=randomforest, bg="Light green", fg="red")
rnf.config(font=("Times", 15, "bold italic"))
rnf.grid(row=7, column=3, padx=10)

lr = Button(root, text="Prediction 3", command=NaiveBayes, bg="Blue", fg="white")
lr.config(font=("Times", 15, "bold italic"))
lr.grid(row=8, column=3, padx=10)

kn = Button(root, text="Prediction 4", command=KNN, bg="sky blue", fg="red")
kn.config(font=("Times", 15, "bold italic"))
kn.grid(row=9, column=3, padx=10)

rs = Button(root, text="Reset Inputs", command=Reset, bg="yellow", fg="purple", width=15)
rs.config(font=("Times", 15, "bold italic"))
rs.grid(row=10, column=3, padx=10)

ex = Button(root, text="Exit System", command=Exit, bg="yellow", fg="purple", width=15)
ex.config(font=("Times", 15, "bold italic"))
ex.grid(row=11, column=3, padx=10)

# Showing the output of the different algorithms
t1 = Label(root, font=("Times", 15, "bold italic"), text="Decision Tree", height=1, bg="Light green",
           width=40, fg="red", textvariable=pred1, relief="sunken").grid(row=15, column=1, padx=10)

t2 = Label(root, font=("Times", 15, "bold italic"), text="Random Forest", height=1, bg="Purple",
           width=40, fg="white", textvariable=pred2, relief="sunken").grid(row=17, column=1, padx=10)

t3 = Label(root, font=("Times", 15, "bold italic"), text="Naive Bayes", height=1, bg="red",
           width=40, fg="orange", textvariable=pred3, relief="sunken").grid(row=19, column=1, padx=10)

t4 = Label(root, font=("Times", 15, "bold italic"), text="kNearest Neighbour", height=1, bg="Blue",
           width=40, fg="yellow", textvariable=pred4, relief="sunken").grid(row=21, column=1, padx=10)

# Calling this function because the application is ready to run
root.mainloop()
```
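A side note on the database code above: the `CREATE TABLE` statements declare columns with the Tkinter class name `StringVar`, which is not a SQLite type; this works only because SQLite treats column type names loosely (its type-affinity rules), and the conventional declaration is `TEXT`. A minimal, self-contained sketch of the same store-a-prediction pattern, with hypothetical table and values and an in-memory database standing in for `database.db`:

```python
import sqlite3

# In-memory database stands in for the database.db file used above
conn = sqlite3.connect(":memory:")
c = conn.cursor()

# TEXT is the conventional SQLite type for string columns; unknown type
# names like "StringVar" are merely tolerated via type affinity.
c.execute("""CREATE TABLE IF NOT EXISTS Predictions(
                 Name TEXT, Symptom1 TEXT, Symptom2 TEXT, Disease TEXT)""")

# Parameterized insert, as in the notebook
c.execute("INSERT INTO Predictions VALUES (?,?,?,?)",
          ("Alice", "headache", "nausea", "Migraine"))
conn.commit()

rows = c.execute("SELECT Name, Disease FROM Predictions").fetchall()
print(rows)  # [('Alice', 'Migraine')]
conn.close()
```

The `?` placeholders, used in the notebook as well, keep user-entered symptom strings from being interpreted as SQL.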
These exercises accompany the tutorial on [imports](https://www.kaggle.com/colinmorris/working-with-external-libraries).

There are only four problems in this last set of exercises, but they're all pretty tricky, so be on guard! If you get stuck, don't hesitate to head to the [Learn Forum](https://kaggle.com/learn-forum) to discuss.

Run the setup code below before working on the questions (and run it again if you leave this notebook and come back later).

```
# SETUP. You don't need to worry for now about what this code does or how it works.
# If you're ever curious about the code behind these exercises, it's available under
# an open source license here: https://github.com/Kaggle/learntools/
from learntools.core import binder; binder.bind(globals())
from learntools.python.ex7 import *
print('Setup complete.')
```

# Exercises

## 1.

After completing [the exercises on lists and tuples](https://www.kaggle.com/kernels/fork/1275177), Jimmy noticed that, according to his `estimate_average_slot_payout` function, the slot machines at the Learn Python Casino are actually rigged *against* the house, and are profitable to play in the long run.

Starting with $200 in his pocket, Jimmy has played the slots 500 times, recording his new balance in a list after each spin. He used Python's `matplotlib` library to make a graph of his balance over time:

```
# Import the jimmy_slots submodule
from learntools.python import jimmy_slots
# Call the get_graph() function to get Jimmy's graph
graph = jimmy_slots.get_graph()
graph
```

As you can see, he's hit a bit of bad luck recently. He wants to tweet this along with some choice emojis, but, as it looks right now, his followers will probably find it confusing. He's asked if you can help him make the following changes:

1. Add the title "Results of 500 slot machine pulls"
2. Make the y-axis start at 0.
3. Add the label "Balance" to the y-axis

After calling `type(graph)` you see that Jimmy's graph is of type `matplotlib.axes._subplots.AxesSubplot`. Hm, that's a new one. By calling `dir(graph)`, you find three methods that seem like they'll be useful: `.set_title()`, `.set_ylim()`, and `.set_ylabel()`.

Use these methods to complete the function `prettify_graph` according to Jimmy's requests. We've already checked off the first request for you (setting a title).

(Remember: if you don't know what these methods do, use the `help()` function!)

```
def prettify_graph(graph):
    """Modify the given graph according to Jimmy's requests: add a title, make the y-axis
    start at 0, label the y-axis. (And, if you're feeling ambitious, format the tick marks
    as dollar amounts using the "$" symbol.)
    """
    graph.set_title("Results of 500 slot machine pulls")
    # Complete steps 2 and 3 here

graph = jimmy_slots.get_graph()
prettify_graph(graph)
graph
```

**Bonus:** Can you format the numbers on the y-axis so they look like dollar amounts? e.g. $200 instead of just 200.

(We're not going to tell you what method(s) to use here. You'll need to go digging yourself with `dir(graph)` and/or `help(graph)`.)

```
#q1.solution()
```

## 2. <span title="Spicy" style="color: coral">🌶️🌶️</span>

Luigi is trying to perform an analysis to determine the best items for winning races on the Mario Kart circuit. He has some data in the form of lists of dictionaries that look like...

    [
        {'name': 'Peach', 'items': ['green shell', 'banana', 'green shell',], 'finish': 3},
        {'name': 'Bowser', 'items': ['green shell',], 'finish': 1},
        # Sometimes the racer's name wasn't recorded
        {'name': None, 'items': ['mushroom',], 'finish': 2},
        {'name': 'Toad', 'items': ['green shell', 'mushroom'], 'finish': 1},
    ]

`'items'` is a list of all the power-up items the racer picked up in that race, and `'finish'` was their placement in the race (1 for first place, 3 for third, etc.).
He wrote the function below to take a list like this and return a dictionary mapping each item to how many times it was picked up by first-place finishers.

```
def best_items(racers):
    """Given a list of racer dictionaries, return a dictionary mapping items to the number
    of times those items were picked up by racers who finished in first place.
    """
    winner_item_counts = {}
    for i in range(len(racers)):
        # The i'th racer dictionary
        racer = racers[i]
        # We're only interested in racers who finished in first
        if racer['finish'] == 1:
            for i in racer['items']:
                # Add one to the count for this item (adding it to the dict if necessary)
                if i not in winner_item_counts:
                    winner_item_counts[i] = 0
                winner_item_counts[i] += 1

        # Data quality issues :/ Print a warning about racers with no name set. We'll take care of it later.
        if racer['name'] is None:
            print("WARNING: Encountered racer with unknown name on iteration {}/{} (racer = {})".format(
                i+1, len(racers), racer['name'])
            )
    return winner_item_counts
```

He tried it on a small example list above and it seemed to work correctly:

```
sample = [
    {'name': 'Peach', 'items': ['green shell', 'banana', 'green shell',], 'finish': 3},
    {'name': 'Bowser', 'items': ['green shell',], 'finish': 1},
    {'name': None, 'items': ['mushroom',], 'finish': 2},
    {'name': 'Toad', 'items': ['green shell', 'mushroom'], 'finish': 1},
]
best_items(sample)
```

However, when he tried running it on his full dataset, the program crashed with a `TypeError`. Can you guess why? Try running the code cell below to see the error message Luigi is getting. Once you've identified the bug, fix it in the cell below (so that it runs without any errors).

Hint: Luigi's bug is similar to one we encountered in the [tutorial](https://www.kaggle.com/colinmorris/working-with-external-libraries) when we talked about star imports.

```
# Import luigi's full dataset of race data
from learntools.python.luigi_analysis import full_dataset

# Fix me!
def best_items(racers):
    winner_item_counts = {}
    for i in range(len(racers)):
        # The i'th racer dictionary
        racer = racers[i]
        # We're only interested in racers who finished in first
        if racer['finish'] == 1:
            for i in racer['items']:
                # Add one to the count for this item (adding it to the dict if necessary)
                if i not in winner_item_counts:
                    winner_item_counts[i] = 0
                winner_item_counts[i] += 1

        # Data quality issues :/ Print a warning about racers with no name set. We'll take care of it later.
        if racer['name'] is None:
            print("WARNING: Encountered racer with unknown name on iteration {}/{} (racer = {})".format(
                i+1, len(racers), racer['name'])
            )
    return winner_item_counts

# Try analyzing the imported full dataset
best_items(full_dataset)

#q2.hint()
#q2.solution()
```

## 3. <span title="A bit spicy" style="color: darkgreen ">🌶️</span>

Suppose we wanted to create a new type to represent hands in blackjack. One thing we might want to do with this type is overload the comparison operators like `>` and `<=` so that we could use them to check whether one hand beats another. e.g. it'd be cool if we could do this:

```python
>>> hand1 = BlackjackHand(['K', 'A'])
>>> hand2 = BlackjackHand(['7', '10', 'A'])
>>> hand1 > hand2
True
```

Well, we're not going to do all that in this question (defining custom classes is a bit beyond the scope of these lessons), but the code we're asking you to write in the function below is very similar to what we'd have to write if we were defining our own `BlackjackHand` class. (We'd put it in the `__gt__` magic method to define our custom behaviour for `>`.)

Fill in the body of the `blackjack_hand_greater_than` function according to the docstring.

```
def blackjack_hand_greater_than(hand_1, hand_2):
    """
    Return True if hand_1 beats hand_2, and False otherwise.

    In order for hand_1 to beat hand_2 the following must be true:
    - The total of hand_1 must not exceed 21
    - The total of hand_1 must exceed the total of hand_2 OR hand_2's total must exceed 21

    Hands are represented as a list of cards. Each card is represented by a string.

    When adding up a hand's total, cards with numbers count for that many points. Face
    cards ('J', 'Q', and 'K') are worth 10 points. 'A' can count for 1 or 11.

    When determining a hand's total, you should try to count aces in the way that
    maximizes the hand's total without going over 21. e.g. the total of ['A', 'A', '9'] is 21,
    the total of ['A', 'A', '9', '3'] is 14.

    Examples:
    >>> blackjack_hand_greater_than(['K'], ['3', '4'])
    True
    >>> blackjack_hand_greater_than(['K'], ['10'])
    False
    >>> blackjack_hand_greater_than(['K', 'K', '2'], ['3'])
    False
    """
    pass

q3.check()

#q3.hint()
#q3.solution()
```

## 4. <span title="Spicy" style="color: coral">🌶️🌶️</span>

In [the previous set of exercises](https://www.kaggle.com/kernels/fork/1275185), you heard a tip-off that the roulette tables at the Learn Python Casino had some quirk where the probability of landing on a particular number was partly dependent on the number the wheel most recently landed on. You wrote a function `conditional_roulette_probs` which returned a dictionary with counts of how often the wheel landed on `x` then `y` for each value of `x` and `y`.

After analyzing the output of your function, you've come to the following conclusion: for each wheel in the casino, there is exactly one pair of numbers `a` and `b`, such that, after the wheel lands on `a`, it's significantly more likely to land on `b` than any other number. If the last spin landed on anything other than `a`, then it acts like a normal roulette wheel, with equal probability of landing on any of the 11 numbers (* the casino's wheels are unusually small - they only have the numbers from 0 to 10 inclusive).

It's time to exploit this quirk for fun and profit.
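The "landed on `x` then `y`" bookkeeping that `conditional_roulette_probs` is described as doing can be sketched as follows. This is a hypothetical stand-alone version with raw counts; the actual exercise function and its exact return format live in the earlier notebook.

```python
from collections import defaultdict

def conditional_counts(history):
    """Count how often the wheel landed on y immediately after x.
    A sketch of the bookkeeping behind conditional_roulette_probs
    (which returns probabilities rather than raw counts)."""
    counts = defaultdict(lambda: defaultdict(int))
    # Pair each spin with its successor: (h[0], h[1]), (h[1], h[2]), ...
    for x, y in zip(history, history[1:]):
        counts[x][y] += 1
    return counts

# On a quirky wheel, the row for `a` is dominated by a single follower `b`:
spins = [1, 5, 3, 1, 5, 0, 1, 5, 2, 1, 5]
counts = conditional_counts(spins)
best_follower = max(counts[1], key=counts[1].get)
print(best_follower)  # 5 -- every observed spin after a 1 was a 5
```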
You'll be writing a roulette-playing agent to beat the house. When called, your agent will have an opportunity to sit down at one of the casino's wheels for 100 spins. You don't need to bet on every spin. For example, the agent below bets on a random number unless the last spin landed on 4 (in which case it just watches).

```
from learntools.python import roulette
import random

def random_and_superstitious(wheel):
    """Interact with the given wheel over 100 spins with the following strategy:
    - if the wheel lands on 4, don't bet on the next spin
    - otherwise, bet on a random number on the wheel (from 0 to 10)
    """
    last_number = 0
    while wheel.num_remaining_spins() > 0:
        if last_number == 4:
            # Unlucky! Don't bet anything.
            guess = None
        else:
            guess = random.randint(0, 10)
        last_number = wheel.spin(number_to_bet_on=guess)

roulette.evaluate_roulette_strategy(random_and_superstitious)
```

As you might have guessed, our random/superstitious agent bleeds money. Can you write an agent that beats the house? (i.e. can you make "Average gain per simulation" positive?)

For more information on the type of object your agent will be passed, try calling `help(roulette.RouletteSession)`. You can also call `help(roulette.evaluate_roulette_strategy)` to see some optional parameters you can change regarding the conditions under which we test your agent.

HINT: it might help to go back to your [strings and dictionaries exercise notebook](https://www.kaggle.com/kernels/fork/1275185) and review your code for `conditional_roulette_probs` for inspiration.

```
def my_agent(wheel):
    pass

roulette.evaluate_roulette_strategy(my_agent)
```

How much profit are you able to reach? Post your results on the forums to see how your strategy compares to others'.

## The end

You've finished the Python course. Congrats!

As always, if you have any questions about these exercises, or anything else you encountered in the course, come to the [Learn Forum](https://kaggle.com/learn-forum).
You probably didn't put in all these hours of learning Python just to play silly games of chance, right? If you're interested in applying your newfound Python skills to some data science tasks, check out some of our other lessons on [Kaggle Learn](https://www.kaggle.com/learn/overview). Most of them are in Python, including our courses on...

- [Pandas for data manipulation](https://www.kaggle.com/learn/pandas)
- [Machine learning with scikit-learn](https://www.kaggle.com/learn/machine-learning)
- [Data visualization](https://www.kaggle.com/learn/data-visualisation)
- [Deep learning with TensorFlow](https://www.kaggle.com/learn/deep-learning)

Happy Pythoning!
# Life is short, I use FP

![](./img/lifeShortIUseFP.png)

**When you are just starting out in any discipline, your understanding of it is still shallow, and it is easy to confuse what you are doing with the tools you are using.**

## The Basic Elements of Programming

The means by which simple ideas are combined into more complex ones.

A procedure is the pattern of rules that directs a process.

## Three Techniques for Managing Complexity

* Blackbox Abstraction
* Conventional Interfaces
* Metalinguistic Abstraction (Image / Filter / Project)

When we face a new language, we should first examine its:

* Primitive elements
* The means of combination
* The means of abstraction

```
# For Python, the primitive elements include:
int
float
...

# The means of combination
1 + 1
# +     -- the name for the primitive method of adding things
# 1     -- the name of Plato's concept of number 1
# 1 + 1 -- apply the sum operator to 2 numbers

# The means of abstraction -- function (procedure)
def do_something(*args):
    return
```

# Higher-Order Procedures

![image.png](attachment:image.png)

```
def sum_i(a, b):
    if a > b:
        return 0
    return a + sum_i(a+1, b)

sum_i(1, 5)
```

![image.png](attachment:image.png)

```
square = lambda x: x*x

def sum_i_squared(a, b):
    if a > b:
        return 0
    return square(a) + sum_i_squared(a+1, b)

sum_i_squared(1, 5)
```

![image.png](attachment:image.png)

```
def pi_sum(a, b):
    if a > b:
        return 0
    return (1 / (a*(a+2))) + pi_sum(a+4, b)

pi_sum(1, 5)
```

## A compound procedure extracts the part of a computation that is independent of the particular numbers and captures only the computation's "shape"

```
def _sum_(term, a, _next, b):
    if a > b:
        return 0
    return term(a) + _sum_(term, _next(a), _next, b)

_sum_(
    lambda x: x,
    1,
    lambda x: x+1,
    5
)

_sum_(
    lambda x: x*x,
    1,
    lambda x: x+1,
    5
)

_sum_(
    lambda x: (1 / (x*(x+2))),
    1,
    lambda x: x+4,
    5
)

def sum_i_v2(a, b):
    def _sum_i(term, _next):
        return _sum_(term, a, _next, b)
    return _sum_i(
        lambda x: x,
        lambda x: x+1,
    )

sum_i_v2(1, 5)

def sum_i_squared_v2(a, b):
    def _sum_i(term, _next):
        return _sum_(term, a, _next, b)
    return _sum_i(
        lambda x: x*x,
        lambda x: x+1,
    )

sum_i_squared_v2(1, 5)

def pi_sum_v2(a, b):
    def _sum_i(term, _next):
        return _sum_(term, a, _next, b)
    return _sum_i(
        lambda x: (1 / (x*(x+2))),
        lambda x: x+4,
    )

pi_sum_v2(1, 5)
```

## Approximating a Definite Integral

![image.png](attachment:image.png)

```
def integral(f, a, b, dx):
    def add_dx(x):
        return x + dx
    # Sum f at the midpoint of each strip of width dx, then scale by dx
    return dx * _sum_(f, a + dx/2, add_dx, b)
```

## Newton's Method for Finding the Zeros of a Function -- Another Example of a Compound Procedure

To find a y such that f(y) = 0, start with a guess y_0

![image.png](attachment:image.png)

# Generic Operator

```
def add(x, y):
    return x + y

def add_rat(x, y):
    return Rat(
        numer=(x.numer * y.denom) + (y.numer * x.denom),
        denom=x.denom * y.denom
    )

class RatNum:
    def __init__(self, numer, denom):
        self.numer = numer
        self.denom = denom

RatNum(2, 3)

from dataclasses import dataclass

class typedData:
    def __init__(self, core):
        self.core = core

    def __call__(self):
        # Minimal fix of the original sketch: attach and return the type name
        def attach_type_property(core):
            core_type = core.__name__
            return core_type
        return attach_type_property(self.core)

@dataclass
class Rat:
    numer: float
    denom: float

    @property
    def type(self):
        return Rat.__name__

Rat(2, 3).type

Rat.__name__

add_rat.__name__
```
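As a sketch of where the "Generic Operator" idea above is headed, one common approach is to dispatch `add` on the types of its operands through a rule table. The names below are illustrative, not from SICP or the notebook:

```python
from dataclasses import dataclass

@dataclass
class Rat:
    numer: int
    denom: int

def add_numbers(x, y):
    return x + y

def add_rats(x, y):
    # (a/b) + (c/d) = (a*d + c*b) / (b*d), left unreduced
    return Rat(x.numer * y.denom + y.numer * x.denom, x.denom * y.denom)

# Dispatch table keyed on operand types: one way to build a generic
# operator without modifying the data objects themselves.
ADD_RULES = {
    (int, int): add_numbers,
    (float, float): add_numbers,
    (Rat, Rat): add_rats,
}

def generic_add(x, y):
    return ADD_RULES[(type(x), type(y))](x, y)

r = generic_add(Rat(2, 3), Rat(1, 6))
print(r)                   # Rat(numer=15, denom=18), i.e. 5/6 unreduced
print(generic_add(1, 2))   # 3
```

Supporting a new type then means registering one more rule, without touching `generic_add` itself.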
```
import pickle
import pathlib
from collections import defaultdict

import attr
import nltk
import elasticsearch as es

from tabulate import tabulate
from bs4 import BeautifulSoup as bs
from tqdm import tqdm_notebook as tqdm

CTX = {'index': 'articles', 'doc_type': 'doc'}

e = es.Elasticsearch([{
    'host': 'localhost',
    'port': 9200
}])

e.ping()
```

# Topics

```
datasets = pathlib.Path('../../ungol-data/CLEF/truth/')
if not datasets.exists():
    raise Exception()

@attr.s
class Topic():

    top_id: str = attr.ib()
    title: str = attr.ib()
    description: str = attr.ib()
    narrative: str = attr.ib()

    def __str__(self):
        return "[{}] {}\n- Description: {}\n- Narrative: {}\n".format(
            self.top_id, self.title, self.description, self.narrative)

topics = {}

with datasets.joinpath('CLEF2003_ah-mono-de_topics.xml').open(mode='r', encoding='utf-8') as f:
    topics_raw = f.read()

def soup2topic(topic_node):
    args = 'identifier', 'title', 'description', 'narrative'
    return Topic(*[topic_node.find(s).string for s in args])

fmt_path = 'truth/CLEF{}_ah-mono-de_topics.xml'

soup = bs(topics_raw, 'xml')
for topic_node in soup.find_all('topic'):
    topic = soup2topic(topic_node)
    topics[topic.top_id] = topic

print(' imported {} topics'.format(len(topics)))
print('\nexample:')
print(topics['150-AH'], '\n')

for topic in topics.values():
    print(topic)
```

# Ground Truth

```
with datasets.joinpath('CLEF2003_ah-mono-de.txt').open(mode='r', encoding='ascii') as f:
    truth_raw = [line for line in f.readlines() if len(line.strip()) > 0]

print('read {} ground truth items'.format(len(truth_raw)))

def read_truth(raw):
    truth = defaultdict(dict)
    sample_count = len(raw)

    for line in raw:
        top_id, _, doc_id, val = line.split()

        assert val == '1' or val == '0'
        assert top_id in topics

        truth[top_id][doc_id] = True if val == '1' else False

    assert sample_count == sum([len(v) for v in truth.values()])
    return truth

truth = read_truth(truth_raw)
print(' imported {} ground truth topics'.format(len(truth)))

tab_data = []
for top_id, mapping in truth.items():
    correct = sum([flag for flag in mapping.values()])
    tab_data.append((top_id, correct, len(mapping) - correct))

print(tabulate(tab_data, headers=('topic', 'true', 'false')))

positives = [t for t, v in truth['174-AH'].items() if v]
positives
```

## Write pools to opt

```
basepath = pathlib.Path('../opt/raw')
notfound = defaultdict(list)

tab_data = []
for top_id in tqdm(truth):
    folder = basepath / top_id
    folder.mkdir(exist_ok=True)

    topic = e.get(id=top_id, **{'index': 'topics', 'doc_type': 'doc'})['_source']
    tab_data.append([top_id, topic['title'], 0])

    # write topic and truth
    with (folder / 'topic.txt').open('w') as fd:
        fd.write('\n\n'.join((topic['title'], topic['description'], topic['narrative'])))

    with (folder / 'truth.pickle').open('wb') as fd:
        pickle.dump(truth[top_id], fd)

    with (folder / 'truth.txt').open('w') as fd:
        for doc_id in truth[top_id]:
            flag = '1' if truth[top_id][doc_id] else '0'
            fd.write(doc_id + ' ' + flag + '\n')

    # write documents
    title_mapper = {}
    for doc_id, flag in tqdm(truth[top_id].items(), position=1, leave=False, desc=top_id):
        try:
            item = e.get(id=doc_id, **CTX)['_source']
            title = item['title']

            fname = doc_id + '.txt'
            folder_text = folder / 'text'
            folder_text.mkdir(exist_ok=True)

            with (folder_text / fname).open('w') as fd:
                fd.write('\n\n'.join((title, item['content'])))

            tab_data[-1][-1] += 1
            title_mapper[fname] = title

        except es.NotFoundError:
            notfound[top_id].append(doc_id)

    # write fname -> title mapping
    with (folder / 'titlemap.pickle').open('wb') as fd:
        pickle.dump(title_mapper, fd)

print(tabulate(tab_data))

for top_id in notfound:
    print('not found in topic {}'.format(top_id))
    for doc_id in notfound[top_id]:
        print(' - ', doc_id)
```

# Retrieve documents from elasticsearch

```
TOPIC = '150-AH'
positives = set()

print('\nsearching for documents containing the topic query:')
res = e.search(body={'query': {'match': {'content': topics[TOPIC].narrative}}, 'size': 20})

for i, hit in enumerate(res['hits']['hits']):
    doc = hit['_source']
    score = hit['_score']
    doc_id = hit['_id']
    title = doc.get('title', '<kein titel>')

    correct = 'correct' if doc_id in truth[TOPIC] and truth[TOPIC][doc_id] else 'wrong'
    if doc_id in truth[TOPIC] and truth[TOPIC][doc_id]:
        positives.add(doc_id)

    print()
    fmt = '[{}] {:2.5}: {}\n{} - {}\n\n{}...'
    print(fmt.format(i + 1, score, correct, doc_id, title, doc['content'][:300]))
    print('\n', '-' * 120)
```

# Other relevant articles (not found)

```
desired = [doc_id for doc_id in truth[TOPIC] if truth[TOPIC][doc_id] and not doc_id.startswith('SDA')]
print('found {} of {} articles'.format(len(positives), len(desired)))

for doc_id in truth[TOPIC]:
    if doc_id.startswith('SDA'):
        continue

    if truth[TOPIC][doc_id] and doc_id not in positives:
        hit = e.get(id=doc_id, **CTX)
        doc = hit['_source']
        doc_id = hit['_id']
        title = doc.get('title', '<kein titel>')

        print()
        fmt = '{} - {}\n\n{}...'
        print(fmt.format(doc_id, title, doc['content'][:300]))
        print('\n', '-' * 120)
```
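The retrieval cells above compare hits against the ground truth by eye. A small helper, hypothetical and not part of the original notebook, turns the same comparison into precision/recall numbers for a ranked hit list and a `{doc_id: bool}` relevance mapping like `truth[TOPIC]`:

```python
def precision_recall_at_k(hits, truth, k=20):
    """Compute precision@k and recall for a ranked list of doc ids
    against a {doc_id: bool} relevance mapping."""
    topk = hits[:k]
    relevant_total = sum(1 for flag in truth.values() if flag)
    retrieved_relevant = sum(1 for doc_id in topk if truth.get(doc_id, False))
    precision = retrieved_relevant / len(topk) if topk else 0.0
    recall = retrieved_relevant / relevant_total if relevant_total else 0.0
    return precision, recall

# Toy example: three relevant docs total, one of them in the top 3 hits
truth_toy = {'d1': True, 'd2': False, 'd3': True, 'd4': True}
p, r = precision_recall_at_k(['d1', 'd2', 'd5'], truth_toy, k=3)
print(p, r)  # 0.333..., 0.333...
```

Note the `truth.get(doc_id, False)`: a retrieved document that was never pooled ('d5') counts as non-relevant, which matches how the pooled CLEF judgments are usually scored.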
# Non-Boltzmann (enhanced) sampling ideas

```
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
import ipywidgets as widgets

from numpy.random import rand, random, randint, choice, normal, uniform
from numba import jit, njit

import warnings
warnings.filterwarnings('ignore')
```

Suppose we have a system in the NVT ensemble described by an energy function $E_{\nu}$ which makes it hard to sample via straight MCMC. We can introduce a bias into the energy function, $E^0_{\nu}$, with the objective of accelerating the sampling and eliminating the energetic barriers present in the original system.

$$E^0_{\nu} = E_{\nu} - \Delta E_{\nu}$$

- $E_{\nu}$: energy function of the unbiased system
- $E^0_{\nu}$: energy function of the biased system
- $\Delta E_{\nu}$: bias term

#### Boltzmann exponent factorization to the rescue!

$$Z = \sum_{\nu} e^{-\beta E_{\nu}} = \frac{Z_0}{Z_0} \sum_{\nu} e^{-\beta E^0_{\nu}} e^{-\beta \Delta E_{\nu}} = Z_0 \langle e^{-\beta \Delta E_{\nu}} \rangle_0 $$

$$\boxed{\frac{Z}{Z_0} = e^{-\beta(F-F_0)} = \langle e^{-\beta \Delta E_{\nu}} \rangle_0}$$

Where $\langle ... \rangle_0$ indicates a canonical average with $E^0_{\nu}$.
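The boxed identity can be sanity-checked numerically. A toy sketch under assumptions not taken from the notebook: a harmonic biased energy $E^0(x)=x^2/2$ and target energy $E(x)=x^2$ at $\beta=1$, so $\Delta E = E - E^0 = x^2/2$ and the exact ratio is $Z/Z_0 = 1/\sqrt{2}$:

```python
import numpy as np

# Samples from the biased ensemble: p0(x) ~ exp(-E0(x)) with E0 = x^2/2,
# which at beta = 1 is exactly the standard normal distribution.
rng = np.random.default_rng(42)
beta = 1.0
x0 = rng.normal(0.0, 1.0, 200_000)

# Bias term Delta E = E - E0 = x^2/2 evaluated on the biased samples
dE = 0.5 * x0**2

# Exponential average  <exp(-beta * dE)>_0  estimates Z/Z0
ratio = np.mean(np.exp(-beta * dE))

print(ratio, 1 / np.sqrt(2))  # estimate vs exact value ~0.7071
```

With 2·10^5 samples the estimator sits well within 1% of the exact ratio; for larger biases the exponential average converges far more slowly, which is the usual argument for splitting the bias into multiple windows.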
Any observable of interest can now be expressed in terms of an ensemble average conducted with the biased energy function $E^0_{\nu}$:

$$\langle G\rangle = \frac{1}{Z}\sum_{\nu} G_{\nu}e^{-\beta E_{\nu}} = \frac{Z_0}{Z} \langle G_{\nu} e^{-\beta \Delta E_{\nu}} \rangle_0 $$

## Umbrella Sampling

$$E_{\nu} = E^0_{\nu} + W_{\nu}$$

- Where $W_{\nu} \approx 0$ in the interesting regions <br><br>
- Where $W_{\nu} \gg 0$ outside of the interesting regions <br><br>
- Read more about US in [J Kastner; WIREs Comp Mol Sci (2011)](https://onlinelibrary.wiley.com/doi/full/10.1002/wcms.66?casa_token=KxOXA8oqPO4AAAAA%3AR78Iv0cMhe3NaCNe86T0AnQBJ57euERM3h20FTeSRcM9wOzEHIJq-6FD7J4_kZHg4UZ1RKuGm_OoCPw)

![](./figs/umbrella-1.png)

### Connection between biased and unbiased simulations:

- Free energy as a function of the order parameter, defined via the log of a partial sum of the partition function

$$Z(m) = \sum_{\nu}e^{-\beta E(s_{\nu})}\cdot \delta(M(s_{\nu})-m) = e^{-\beta F(m)}$$

- Histogram of the order parameter in the simulations as a ratio of partial and full partition functions

$$p(m) = \frac{Z(m)}{Z} = \langle \delta(M(s)-m) \rangle = e^{-\beta(F(m)-F)}$$

- Free energy computed from the histogram of the order parameter

$$F(m) = -\frac{1}{\beta}\log Z(m) = -\frac{1}{\beta}\log p(m) + const$$

### Connection between biased and unbiased simulations

$$p_0(m) = \langle \delta(M(s)-m) \rangle_0 = e^{-\beta w(m)} \frac{\sum_{\nu} \delta(M(s_{\nu})-m) e^{-\beta E_{\nu}}}{Z_0} = \frac{e^{-\beta w(m)} p(m)}{Z_0/Z}$$

$$F(m) = F^0(m) - w(m) + C$$

For a simulation with multiple windows $w_i$ there is an undetermined constant in each window, which is optimized to obtain a smooth free-energy profile and reduce the statistical noise:

$$F_i(m) = F_i^0(m) - w_i(m) + C_i$$

```
# Ising2D python code optimized for speed
import numpy as np
import pandas as pd
from numba import jit, njit

@njit
def compute_ising2d(spins, J, B):
    '''Computes thermodynamic variables given the spin lattice'''
    N = len(spins)

    E = 0
    for i in range(N):
        for j in range(N):
            z = spins[(i+1)%N, j] + spins[(i-1)%N, j] + spins[i, (j+1)%N] + spins[i, (j-1)%N]
            E += -J*z*spins[i, j]/4   # Since we overcounted interactions 4 times, divide by 4.

    # Magnetization
    M = np.mean(spins)

    # Energy
    E = E/N**2 - B*M

    return M, E

@njit
def run_ising2d(spins, J, B, T, n_steps, out_freq, umbrella=np.array([-1.0, 1.0])):

    # Initialize data arrays
    Magn, Ener, traj = [], [], []
    N = len(spins)

    for step in range(int(n_steps)):   # cast: n_steps is passed as a float (e.g. 1e6)

        i, j = np.random.randint(N), np.random.randint(N)

        z = spins[(i+1)%N, j] + spins[(i-1)%N, j] + spins[i, (j+1)%N] + spins[i, (j-1)%N]
        dE = 2*spins[i, j]*(J*z + B)

        # Metropolis condition, restricted to the umbrella window
        if np.exp(-dE/T) > np.random.rand() and (umbrella.min() < np.mean(spins) - 2*spins[i, j]/N**2 < umbrella.max()):
            spins[i, j] *= -1

        # Compute and store data
        if step % out_freq == 0:
            M_t, E_t = compute_ising2d(spins, J, B)
            Magn.append(M_t)
            Ener.append(E_t)
            traj.append(spins.copy())

    return traj, Ener, Magn
```

![](./figs/umbrealla-2.png)

```
# Estimate lattice size needed by calculating difference in magnetization caused by a spin flip 2/N**2
traj, E, M = run_ising2d(spins=np.ones((20, 20)), J=1, B=0, T=2,
                         n_steps=1e6, out_freq=1e3,
                         umbrella=np.array([0.8, 1]))
plt.plot(M)

print(M[-1], min(M))

umb = np.array([M[-1]-0.2, M[-1]])
traj2, E2, M2 = run_ising2d(spins=traj[-1], J=1, B=0, T=2,
                            n_steps=1e6, out_freq=1e3,
                            umbrella=umb)

plt.hist(M, density=True)
plt.hist(M2, density=True)

@widgets.interact(i=(0, 10))
def plot_image(i=1):
    plt.imshow(traj[i], origin='lower')
```
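The relation $F(m) = -\frac{1}{\beta}\log p(m) + const$ from the notes above can be applied directly to sampled order-parameter values like the magnetization lists `M` and `M2`. A hypothetical post-processing helper, demonstrated on synthetic Gaussian samples rather than the Ising output:

```python
import numpy as np

def free_energy_profile(samples, T=1.0, bins=20):
    """Turn order-parameter samples into F(m) = -T*log p(m) + const,
    shifted so the minimum of the profile is zero."""
    counts, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = counts > 0          # skip empty bins to avoid log(0)
    F = -T * np.log(counts[mask])
    F -= F.min()               # fix the arbitrary additive constant
    return centers[mask], F

rng = np.random.default_rng(0)
m, F = free_energy_profile(rng.normal(0.0, 0.1, 50_000), T=1.0)

# For Gaussian p(m) the profile is parabolic with its minimum near m = 0
print(m[np.argmin(F)])
```

For multiple umbrella windows, each window's profile would additionally need $-w_i(m)$ subtracted and the per-window constants $C_i$ matched in the overlap regions, which is what WHAM-type reweighting automates.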
Idea: Monte Carlo search of the function domain ### Finding energy minima of any thermal system $\leftrightarrow$ finding minimum of any abstract function - Introduce (artificial) temperature parameter $T$ - Metropolis algorithm with acceptance probability min$(1, e^{-\Delta f/T})$ - Here $f$ can be any function we want to minimize (not only energy) - For maximum simply change the sign: min$(1, e^{+\Delta f/T})$ - Slowly reduce the temperature ### "Slow cooling" is the main idea of simulated annealing very high $T$ | very low $T$ ---------------------------------------------|------------------------------------ almost all updates are accepted | only updates that decrease the energy are accepted random configurations/explore entire space | descend towards minimum high energy | low energy but might get stuck in local minimum - if we slowly cool from high $T$ to low $T$ we will explore the entire space until we converge to the (hopefully) global minimum - success is not guaranteed, but the methods works very well with good cooling schemes - Inspiration: annealing in metallurgy. - This is a great method to tackle **NP-hard** optimization problems, such as the traveling salesman! ### Cooling schedules - slow cooling is essential: otherwise the system will "freeze" into a local minimum - but too slow cooling is inefficient... 
- initial temperature should be high enough so that the system is essentially random and equilibrates quickly
- final temperature should be small enough so that we are essentially in the ground state (system no longer changes)
- an exponential **cooling schedule** is commonly used

$$\boxed{T(t)=T_0e^{-t/\tau}}$$

where $t$ is the Monte Carlo time and the constant $\tau$ needs to be determined (usually empirically)

- alternative cooling schedules:
  linear: $T(t)=T_0 - t/\tau$ (also widely used)
  logarithmic: $T(t) = c/\log(1+t/\tau)$

### **Example:** find the global minimum of the function via simulated annealing:

$f(x) = x^2 -\cos (4\pi x)$

```
f = lambda x: x*x - np.cos(4*np.pi*x)

xvals = np.arange(-3,3,0.01)
plt.plot(xvals, f(xvals), lw=3)
plt.xlabel("$x$", fontsize=20)
plt.ylabel("$f(x)$", fontsize=20)
plt.grid(True)
```

#### Search for the global minimum of $f(x)$ using simulated annealing

```
def MCupdate(T, x, mean, sigma):
    '''Make a new move by incrementing x with a normally distributed step.
    We explore the function diffusively, i.e. by a random walk!
    T: temperature
    mean, sigma: parameters of the random walk'''

    xnew = x + normal(mean, sigma)
    delta_f = f(xnew) - f(x)

    if delta_f < 0 or np.exp(-delta_f/T) > rand():
        x = xnew

    return x

def cool(T, cool_t):
    '''Reduce T after every MC step.
    cool_t: cooling time tau for the exponential schedule.
    Alternatively, we could reduce T only every N steps.'''

    return T*np.exp(-1/cool_t)

def sim_anneal(T=10, T_min=1e-4, cool_t=1e4, x=2, mean=0, sigma=1):
    '''Simulated annealing search for the minimum of a function:
    T=T0: starting temperature
    T_min: minimal temperature at which the simulation stops
    cool_t: cooling pace/time
    x=x0: starting position
    mean, sigma: parameters for the diffusive exploration of x'''

    xlog = []

    while T > T_min:
        x = MCupdate(T, x, mean, sigma)
        T = cool(T, cool_t)
        xlog.append(x)

    return xlog

xlog = sim_anneal()

plt.plot(xlog)
plt.xlabel('MC time')
plt.ylabel('x')
print('Final search result for the global minimum: ', xlog[-1])

@widgets.interact(t_sim=(1,1000))
def viz_anneal(t_sim=1):

    T = 4
    x = 2
    cool_t = 1e4
    mean, sigma = 0, 1

    plt.plot(xvals, f(xvals), lw=3, color='green')

    for t in range(t_sim):
        x = MCupdate(T, x, mean=0, sigma=1)
        T = cool(T, cool_t)

    plt.plot(x, f(x), 'o', color='red', ms=20, alpha=0.5)
    plt.ylim(-1,8)
    plt.xlim(-3,3)
    plt.grid(True)
    plt.xlabel('$x$', fontsize=20)
    plt.ylabel('$f(x)$', fontsize=20)
```

#### Lessons learned

In this example we searched for the minimum of a continuous function:

- the search always hovers slightly above the true minimum if $T>0$
- best combined with a steepest-descent method

### Simulated annealing applied to MCMC sampling of the 2D Ising model

```
temperature = 10.0   # initial temperature
tempmin = 1e-4       # minimal temperature (stop annealing when this is reached)
cooltime = 1e4       # cooling time tau for the exponential schedule

# how long it will take to cool to the minimal temperature, in MC steps
MCtime = -cooltime*np.log(tempmin/temperature)

# after every MC step we reduce the temperature
def cool(temperature):
    return temperature*np.exp(-1/cooltime)
```

### Parallel tempering

1. **Simulated annealing is not guaranteed to find the global extremum**
    - Unless you cool infinitely slowly.
    - Usually one needs to repeat the search multiple times using independent simulations.
<br><br>
2.
**Automating and generalizing simulated annealing: parallel tempering (aka replica exchange MCMC)**
    - Simulate several copies of the system in parallel
    - Each copy is at a different constant temperature $T$
    - Usual Metropolis updates for each copy
    - Every certain number of steps, attempt to exchange copies at neighboring temperatures
    - Exchange acceptance probability is min(1, $e^{-\Delta f\Delta\beta}$)
    - If the temperature difference is small enough, the energy histograms of the copies will overlap and exchanges will happen often.
<br><br>
3. **Advantages of replica exchange:**
    - Exchanges allow the simulation to escape from and explore different extrema
    - More successful for complex functions/energy landscapes: a random walk in temperature space!
    - Detailed balance is maintained! (regular simulated annealing breaks detailed balance)

### How to choose temperature distributions for replica exchange MCMC

- A dense temperature grid increases the exchange acceptance rates
- But a dense T grid takes longer to simulate, and more steps are needed to move from one temperature to another
- There are many options; often trial and error is needed
    - exchange acceptance probability should be between about 20% and 80%
    - exchange acceptance probability should be approximately temperature-independent
- commonly used: geometric progression for $N$ temperatures $T_n$ between and including $T_{\rm min}$ and $T_{\rm max}$ (ensures more steps around $T_{\rm min}$)

$$T_n = T_{\rm min}\left(\frac{T_{\rm max}}{T_{\rm min}}\right)^{\frac{n-1}{N-1}}$$

- make sure to spend enough time between swap attempts to achieve equilibrium!
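The geometric progression above can be generated in a couple of lines. A minimal sketch; `temperature_ladder` is an illustrative helper name (not part of this notebook's code), and numpy is assumed to be available:

```python
import numpy as np

def temperature_ladder(T_min, T_max, N):
    """N temperatures in geometric progression: T_n = T_min*(T_max/T_min)**((n-1)/(N-1))."""
    n = np.arange(N)  # corresponds to n-1 in the formula, since n starts at 1 there
    return T_min * (T_max / T_min) ** (n / (N - 1))

print(temperature_ladder(1.0, 8.0, 4))  # → [1. 2. 4. 8.]
```

Note how each temperature differs from its neighbor by the same *ratio* rather than the same difference, which packs more replicas near $T_{\rm min}$.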
### Parallel tempering simulation

```
######## Ising 2D + parallel tempering ###########

@njit
def mcmc(spins, N, J, B, T, n_steps = 10000, out_freq = 1000):
    '''Take a spin configuration and sample with given N, J, B, T
    for n_steps, outputting results every out_freq steps'''

    confs = []

    for step in range(n_steps):

        #Pick random spin
        i, j = randint(N), randint(N)

        #Compute energy change
        z = spins[(i+1)%N, j] + spins[(i-1)%N, j] + spins[i, (j+1)%N] + spins[i, (j-1)%N]
        dE = 2*spins[i,j]*(J*z + B)

        #Metropolis condition
        if dE <= 0 or np.exp(-dE/T) > rand():
            spins[i,j] *= -1

        #Store the spin configuration
        if step % out_freq == 0:
            confs.append(spins.copy())

    return confs

@njit
def getM(spins):
    return np.mean(spins)

@njit
def getE(spins, N, J, B):

    E = 0
    for i in range(N):
        for j in range(N):
            z = spins[(i+1)%N, j] + spins[(i-1)%N, j] + spins[i,(j+1)%N] + spins[i,(j-1)%N]
            E += -J*z*spins[i,j]/4  # interactions are overcounted 4 times, so divide by 4

    return E - B*np.sum(spins)  # field contribution added

@jit
def temper(configs, N_repl):
    '''Randomly pick two adjacent replicas and attempt an exchange'''

    i = np.random.randint(N_repl-1)
    j = i+1

    deltaBeta = 1/T[i] - 1/T[j]
    deltaEnergy = getE(configs[i][-1],N,J,B) - getE(configs[j][-1],N,J,B)

    if deltaBeta*deltaEnergy < 0 or np.exp(-deltaBeta*deltaEnergy) > rand():
        configs[i][-1], configs[j][-1] = configs[j][-1], configs[i][-1]

    return configs

@jit
def pt_mcmc(N, J, B, T=[1, 0.1], n_exch=1000, n_steps=10000, out_freq=1000):

    N_repl = len(T)
    configs = [[choice([-1,1], (N,N))] for i in range(N_repl)]

    for exch_attempt in range(n_exch):

        #Exchange attempts
        configs = temper(configs, N_repl)

        #mcmc in between exchange attempts (pass n_steps and out_freq through)
        for i in range(N_repl):
            configs_new = mcmc(configs[i][-1], N, J, B, T[i], n_steps, out_freq)
            configs[i].extend(configs_new)

    return configs

N = 20   # size of lattice in each direction
J = 1    # interaction parameter
B = 0    # magnetic field

T = [5.0, 0.01, 0.0008, 0.0007, 0.00016, 0.00010]

n_exch = 1000
n_steps = 10000
out_freq = 100

configs = pt_mcmc(N, J, B, T, n_exch, n_steps,
                  out_freq)

E1 = [getE(spins,N,J,B) for spins in configs[1]]
plt.plot(E1)

@widgets.interact(i=(0,999))
def plot_image(i=1):
    fig, ax = plt.subplots(figsize=(8,8))
    ax.imshow(configs[0][i])
```

### Problems

1. **Umbrella sampling** Use umbrella sampling to obtain the free energy profile as a function of magnetization below $T_c$, at $T_c$, and above $T_c$, e.g. $T=2, 2.5, 3$. Consider using the inputs from adjacent umbrella simulations; e.g. the input for umbrella 4 can come from umbrella 3 to speed up the simulations.

2. **Simulated annealing** Complete the simulated annealing part of the code for finding the minimum energy in Ising models. Test your code with the field on and off.

3. **Parallel tempering** Use parallel tempering to enhance sampling at $T=1$ by coupling 8 replicas with $T>1$. Find the optimal T spacing between replicas. Calculate histograms of magnetization to show the enhancement of sampling with respect to constant-T MCMC.
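For the free energy profiles in the problems above, the standard estimate from sampled magnetizations is $F(M) = -T\ln P(M)$, up to an additive constant. A minimal sketch, assuming a 1D array of magnetization samples; `free_energy_profile` is a hypothetical helper, not part of the assignment code:

```python
import numpy as np

def free_energy_profile(M_samples, T, bins=50):
    """Estimate F(M) = -T*ln P(M) (up to a constant) from magnetization samples."""
    P, edges = np.histogram(M_samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = P > 0  # skip empty bins to avoid log(0)
    return centers[mask], -T * np.log(P[mask])

# Usage: samples peaked around M = 0 give a minimum of F near M = 0
rng = np.random.default_rng(0)
M, F = free_energy_profile(rng.normal(0.0, 0.1, 100_000), T=2.0)
```

With umbrella sampling, each window gives such a profile over its own magnetization range; shifting the pieces vertically so they agree on their overlaps stitches them into one curve.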
# Implementing a Neural Network

In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.

```
# A bit of setup
from __future__ import print_function

import numpy as np
import matplotlib.pyplot as plt

from cs231n.classifiers.neural_net import TwoLayerNet

%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2

def rel_error(x, y):
    """ returns relative error """
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
```

We will use the class `TwoLayerNet` in the file `cs231n/classifiers/neural_net.py` to represent instances of our network. The network parameters are stored in the instance variable `self.params` where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.

```
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.

input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5

def init_toy_model():
    np.random.seed(0)
    return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)

def init_toy_data():
    np.random.seed(1)
    X = 10 * np.random.randn(num_inputs, input_size)
    y = np.array([0, 1, 2, 2, 1])
    return X, y

net = init_toy_model()
X, y = init_toy_data()
```

# Forward pass: compute scores

Open the file `cs231n/classifiers/neural_net.py` and look at the method `TwoLayerNet.loss`. This function is very similar to the loss functions you have written for the SVM and Softmax exercises: It takes the data and weights and computes the class scores, the loss, and the gradients on the parameters.
Implement the first part of the forward pass, which uses the weights and biases to compute the scores for all inputs.

```
scores = net.loss(X)
print('Your scores:')
print(scores)
print()
print('correct scores:')
correct_scores = np.asarray([
  [-0.81233741, -1.27654624, -0.70335995],
  [-0.17129677, -1.18803311, -0.47310444],
  [-0.51590475, -1.01354314, -0.8504215 ],
  [-0.15419291, -0.48629638, -0.52901952],
  [-0.00618733, -0.12435261, -0.15226949]])
print(correct_scores)
print()

# The difference should be very small. We get < 1e-7
print('Difference between your scores and correct scores:')
print(np.sum(np.abs(scores - correct_scores)))
```

# Forward pass: compute loss

In the same function, implement the second part that computes the data and regularization loss.

```
loss, _ = net.loss(X, y, reg=0.05)
correct_loss = 1.30378789133

# should be very small, we get < 1e-12
print('Difference between your loss and correct loss:')
print(np.sum(np.abs(loss - correct_loss)))
```

# Backward pass

Implement the rest of the function. This will compute the gradient of the loss with respect to the variables `W1`, `b1`, `W2`, and `b2`. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:

```
from cs231n.gradient_check import eval_numerical_gradient

# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.05)

# these should all be less than 1e-8 or so
for param_name in grads:
    f = lambda W: net.loss(X, y, reg=0.05)[0]
    param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
    print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
```

# Train the network

To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function `TwoLayerNet.train` and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement `TwoLayerNet.predict`, as the training process periodically performs prediction to keep track of accuracy over time while the network trains.

Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.

```
net = init_toy_model()
stats = net.train(X, y, X, y,
            learning_rate=1e-1, reg=5e-6,
            num_iters=100, verbose=False)

print('Final training loss: ', stats['loss_history'][-1])

# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
```

# Load the data

Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.

```
from cs231n.data_utils import load_CIFAR10

def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for the two-layer neural net classifier. These are the same steps as
    we used for the SVM, but condensed to a single function.
""" # Load the raw CIFAR-10 data cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # Subsample the data mask = list(range(num_training, num_training + num_validation)) X_val = X_train[mask] y_val = y_train[mask] mask = list(range(num_training)) X_train = X_train[mask] y_train = y_train[mask] mask = list(range(num_test)) X_test = X_test[mask] y_test = y_test[mask] # Normalize the data: subtract the mean image mean_image = np.mean(X_train, axis=0) X_train -= mean_image X_val -= mean_image X_test -= mean_image # Reshape data to rows X_train = X_train.reshape(num_training, -1) X_val = X_val.reshape(num_validation, -1) X_test = X_test.reshape(num_test, -1) return X_train, y_train, X_val, y_val, X_test, y_test # Cleaning up variables to prevent loading data multiple times (which may cause memory issue) try: del X_train, y_train del X_test, y_test print('Clear previously loaded data.') except: pass # Invoke the above function to get our data. X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data() print('Train data shape: ', X_train.shape) print('Train labels shape: ', y_train.shape) print('Validation data shape: ', X_val.shape) print('Validation labels shape: ', y_val.shape) print('Test data shape: ', X_test.shape) print('Test labels shape: ', y_test.shape) ``` # Train a network To train our network we will use SGD. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate. 
```
input_size = 32 * 32 * 3
hidden_size = 100
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)

# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
            num_iters=1000, batch_size=300,
            learning_rate=1e-4, learning_rate_decay=0.8,
            reg=0.9, verbose=True)

# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print('Validation accuracy: ', val_acc)
```

# Debug the training

With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.

One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.

Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.

```
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')

plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.legend()
plt.show()

from cs231n.vis_utils import visualize_grid

# Visualize the weights of the network
def show_net_weights(net):
    W1 = net.params['W1']
    W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
    plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
    plt.gca().axis('off')
    plt.show()

show_net_weights(net)
```

# Tune your hyperparameters

**What's wrong?** Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low.
Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.

**Tuning**. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.

**Approximate results**. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.

**Experiment**: Your goal in this exercise is to get as good a result on CIFAR-10 as you can, with a fully-connected Neural Network. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, or adding dropout, or adding features to the solver, etc.).

```
best_net = None # store the best model into this

#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained
# model in best_net.
#
# To help debug your network, it may help to use visualizations similar to the
# ones we used above; these visualizations will have significant qualitative
# differences from the ones we saw above for the poorly tuned network.
#
# Tweaking hyperparameters by hand can be fun, but you might find it useful to
# write code to sweep through possible combinations of hyperparameters
# automatically like we did on the previous exercises.
#################################################################################

hidden_size = [60, 80, 100, 120]
learning_rate = [1e-4, 5e-4, 1e-3, 5e-3]
reg = [0.2, 0.4, 0.6, 0.8, 1]

best_acc = -1
for hs in hidden_size:
    for lr in learning_rate:
        for r in reg:
            net = TwoLayerNet(input_size, hs, num_classes)
            net.train(X_train, y_train, X_val, y_val,
                num_iters=2000, batch_size=300,
                learning_rate=lr, learning_rate_decay=0.8,
                reg=r, verbose=False)
            val_acc = (net.predict(X_val) == y_val).mean()
            print('for hs: %e, lr: %e and r: %e, valid accuracy is: %f' % (hs, lr, r, val_acc))
            if val_acc > best_acc:
                best_net = net
                best_acc = val_acc

print('Best network has an accuracy of: %f' % best_acc)
#################################################################################
#                               END OF YOUR CODE                                #
#################################################################################

# visualize the weights of the best network
show_net_weights(best_net)
```

# Run on the test set

When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.

```
test_acc = (best_net.predict(X_test) == y_test).mean()
print('Test accuracy: ', test_acc)
```

**Inline Question**

Now that you have trained a Neural Network classifier, you may find that your testing accuracy is much lower than the training accuracy. In what ways can we decrease this gap? Select all that apply.

1. Train on a larger dataset.
2. Add more hidden units.
3. Increase the regularization strength.
4. None of the above.

*Your answer*: 1 and 3

*Your explanation:* More training data means more diversity and better generalization, and as a result better test accuracy. Increasing the regularization strength makes the model rely less on individual quirks of the training data, which usually improves its generality.
```
#EDA Packages
import pandas as pd
import numpy as np

# ML Packages For Vectorization of Text For Feature Extraction
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer

# Visualization Packages
import matplotlib.pyplot as plt
import seaborn as sns

# Dataset from https://archive.ics.uci.edu/ml/datasets/YouTube+Spam+Collection#
df1 = pd.read_csv("Youtube01-Psy.csv")
df1.head()

# Load all our datasets to merge them
df2 = pd.read_csv("Youtube02-KatyPerry.csv")
df3 = pd.read_csv("Youtube03-LMFAO.csv")
df4 = pd.read_csv("Youtube04-Eminem.csv")
df5 = pd.read_csv("Youtube05-Shakira.csv")

frames = [df1,df2,df3,df4,df5]

# Merging or Concatenating our DF
df_merged = pd.concat(frames)

# Total Size
df_merged.shape

# Merging with Keys
keys = ["Psy","KatyPerry","LMFAO","Eminem","Shakira"]
df_with_keys = pd.concat(frames,keys=keys)
df_with_keys

# Checking for Only Comments on Shakira
df_with_keys.loc['Shakira']

# Save and Write Merged Data to csv
df_with_keys.to_csv("YoutubeSpamMergeddata.csv")

df = df_with_keys
df.size
```

## Data Cleaning

```
# Checking for Consistent Column Names
df.columns

# Checking for Datatypes
df.dtypes

# Check for missing NaN values
df.isnull().sum()

# Checking for Date
df["DATE"]

df.AUTHOR

# Convert the Author Name to First Name and Last Name
#df[["FIRSTNAME","LASTNAME"]] = df['AUTHOR'].str.split(expand=True)
```

## Working With Text Content

```
df_data = df[["CONTENT","CLASS"]]
df_data.columns

df_x = df_data['CONTENT']
df_y = df_data['CLASS']
```

## Feature Extraction From Text
## CountVectorizer
## TfidfVectorizer

```
cv = CountVectorizer()
ex = cv.fit_transform(["Great song but check this out","What is this song?"])
ex.toarray()

cv.get_feature_names()

# Extract Features With CountVectorizer
corpus = df_x
cv = CountVectorizer()
X = cv.fit_transform(corpus) # Fit the Data
X.toarray()

# get the feature names
cv.get_feature_names()
```

## Model Building

```
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, df_y, test_size=0.33, random_state=42)
X_train

# Naive Bayes Classifier
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB()
clf.fit(X_train,y_train)
clf.score(X_test,y_test)

# Accuracy of our Model
print("Accuracy of Model",clf.score(X_test,y_test)*100,"%")

## Predicting with our model
clf.predict(X_test)

# Sample Prediction
comment = ["Check this out"]
vect = cv.transform(comment).toarray()
clf.predict(vect)

class_dict = {'ham':0,'spam':1}
class_dict.values()

if clf.predict(vect) == 1:
    print("Spam")
else:
    print("Ham")

# Sample Prediction 2
comment1 = ["Great song Friend"]
vect = cv.transform(comment1).toarray()
clf.predict(vect)
```

## Save The Model

```
import pickle

naivebayesML = open("YtbSpam_model.pkl","wb")
pickle.dump(clf,naivebayesML)
naivebayesML.close()

# Load the model
ytb_model = open("YtbSpam_model.pkl","rb")
new_model = pickle.load(ytb_model)
new_model

# Sample Prediction 3
comment2 = ["Hey Music Fans I really appreciate all of you,but see this song too"]
vect = cv.transform(comment2).toarray()
new_model.predict(vect)

if new_model.predict(vect) == 1:
    print("Spam")
else:
    print("Ham")
```
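Because the pickled classifier above still needs the separate `CountVectorizer` to transform new comments, an alternative design is to bundle the vectorizer and the classifier into a single scikit-learn `Pipeline` and pickle that. This is a sketch, not the notebook's approach; the tiny corpus and its labels here are made up for illustration:

```python
import pickle

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# One object handles raw text end to end: vectorize, then classify
spam_clf = Pipeline([("vect", CountVectorizer()), ("nb", MultinomialNB())])

# Hypothetical mini-corpus (1 = spam, 0 = ham)
texts = ["check out my channel", "subscribe here for free stuff",
         "great song", "love this track"]
labels = [1, 1, 0, 0]
spam_clf.fit(texts, labels)

# Pickling the pipeline keeps the fitted vocabulary and the model together,
# so loading it back requires no separate cv.transform step
restored = pickle.loads(pickle.dumps(spam_clf))
print(restored.predict(["great song friend"]))  # classifies raw text directly
```

The trade-off is that the saved file is a single artifact: you cannot swap in a different vectorizer later without refitting, but you also cannot accidentally load a model with a mismatched vocabulary.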
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#mulearn" data-toc-modified-id="mulearn-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>mulearn</a></span><ul class="toc-item"><li><span><a href="#Install" data-toc-modified-id="Install-1.1"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Install</a></span></li><li><span><a href="#How-to-use" data-toc-modified-id="How-to-use-1.2"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>How to use</a></span><ul class="toc-item"><li><span><a href="#Fuzzifier" data-toc-modified-id="Fuzzifier-1.2.1"><span class="toc-item-num">1.2.1&nbsp;&nbsp;</span>Fuzzifier</a></span></li><li><span><a href="#Kernel" data-toc-modified-id="Kernel-1.2.2"><span class="toc-item-num">1.2.2&nbsp;&nbsp;</span>Kernel</a></span></li></ul></li></ul></li></ul></div>

```
#hide
from mulearn import kernel, fuzzifier, FuzzyInductor
import mulearn.optimization as opt
```

# mulearn

mulearn is a python package implementing the methodology for data-driven induction of fuzzy sets described in

- D. Malchiodi and W. Pedrycz, _Learning Membership Functions for Fuzzy Sets through Modified Support Vector Clustering_, in F. Masulli, G. Pasi and R. Yager (Eds.), Fuzzy Logic and Applications. 10th International Workshop, WILF 2013, Genoa, Italy, November 19–22, 2013. Proceedings., Vol. 8256, Springer International Publishing, Switzerland, Lecture Notes on Artificial Intelligence, 2013;
- D. Malchiodi and A. G. B. Tettamanzi, _Predicting the Possibilistic Score of OWL Axioms through Modified Support Vector Clustering_, in H. Haddad, R. L. Wainwright and R. Chbeir (Eds.), SAC'18: Proceedings of the 33rd Annual ACM Symposium on Applied Computing, ACM (ISBN 9781450351911), 1984–1991, 2018.

## Install

The package can easily be installed via `pip`:

`pip install mulearn`

or using the source code available at https://github.com/dariomalchiodi/mulearn.
## How to use

Consider the Iris dataset, whose 150 observations each describe a flower of the Iris species in terms of its sepal and petal width and length, as well as of its class (Setosa, Versicolor, and Virginica), as exemplified here below.

```
%matplotlib inline

import matplotlib.pyplot as plt
import pandas as pd
import numpy as np

from sklearn.decomposition import PCA

source = 'https://archive.ics.uci.edu/ml/'\
         'machine-learning-databases/iris/iris.data'

iris_df = pd.read_csv(source, header=None)
iris_df.columns=['sepal_length', 'sepal_width',
                 'petal_length', 'petal_width', 'class']
iris_df.head()
```

Focusing on the flower class as the fuzzy concept to be learnt from data,

```
iris_values = iris_df.iloc[:,0:4].values
iris_labels = iris_df.iloc[:,4].values

pca_2d = PCA(n_components=2)
iris_values_2d = pca_2d.fit_transform(iris_values)

def gr_dataset():
    for lab, col in zip(('Iris-setosa', 'Iris-versicolor', 'Iris-virginica'),
                        ('blue', 'green', 'red')):
        plt.scatter(iris_values_2d[iris_labels==lab, 0],
                    iris_values_2d[iris_labels==lab, 1],
                    label=lab, c=col)

gr_dataset()

def to_membership_values(labels, target):
    return [1 if l==target else 0 for l in labels]

mu = {}
for target in ('Iris-setosa', 'Iris-versicolor', 'Iris-virginica'):
    mu[target] = to_membership_values(iris_labels, target)

def gr_membership_contour(estimated_membership):
    x = np.linspace(-4, 4, 50)
    y = np.linspace(-4, 4, 50)
    X, Y = np.meshgrid(x, y)
    zs = np.array([estimated_membership((x, y))
                   for x,y in zip(np.ravel(X), np.ravel(Y))])
    Z = zs.reshape(X.shape)

    membership_contour = plt.contour(X, Y, Z,
                                     levels=(.1, .3, .5, .95), colors='k')
    plt.clabel(membership_contour, inline=1)
```

The main class of the package, called `FuzzyInductor`, learns the membership function $\mu_A$ to a fuzzy set $A$ from a sample of vectors labeled according to the corresponding membership grades to $A$.
This class exposes an interface analogous to that of estimators in Scikit-learn: learning happens through invocation of the `fit` method on an instance of the class, specifying objects and targets as arguments. Once this method returns, the `estimated_membership_` attribute contains a reference to the learnt membership function.

```
from mulearn import FuzzyInductor

f = FuzzyInductor()
f.fit(iris_values_2d, mu['Iris-virginica'])

gr_dataset()
gr_membership_contour(f.estimated_membership_)
plt.show()
```

Alternatively, it is possible to predict the membership of an object through invocation of the `predict` method.

```
f.predict([[2, 0]])
```

Hyper-parameters of the learning algorithm, which according to the interface required by Scikit-learn should be specified during object creation, are described here below.

### Fuzzifier

This hyper-parameter, regulating how the learnt membership function decreases from 1 to 0, is specified through the `fuzzifier` argument. The corresponding value should be set to a pair containing a class in the `mulearn.fuzzifier` module and a dictionary of options to be used when the former class is instantiated.

The simplest fuzzifier decreases linearly from 1 to 0. It is specified via the `mulearn.fuzzifier.LinearFuzzifier` class, which in its simplest form does not require specific options.

```
from mulearn import fuzzifier

f = FuzzyInductor(fuzzifier=(fuzzifier.LinearFuzzifier, {}))
f.fit(iris_values_2d, mu['Iris-virginica'])

gr_dataset()
gr_membership_contour(f.estimated_membership_)
plt.show()
```

When the dictionary provided along with the fuzzifier class is empty, the former is typically tuned according to the data provided to the learning algorithm. However, it is possible to directly specify options in order to set a specific behaviour for the fuzzifier to be created.
For instance, the following cell relies on an `ExponentialFuzzifier`, whose exponential decay rate from 1 to 0 is set manually by specifying the `'profile'` and `'alpha'` keys in the dictionary.

```
f = FuzzyInductor(fuzzifier=(fuzzifier.ExponentialFuzzifier,
                             {'profile': 'alpha', 'alpha': 0.25}))
f.fit(iris_values_2d, mu['Iris-virginica'])

gr_dataset()
gr_membership_contour(f.estimated_membership_)
plt.show()
```

### Kernel

```
from mulearn import kernel

f = FuzzyInductor(k=kernel.GaussianKernel(.3))
f.fit(iris_values_2d, mu['Iris-virginica'])

gr_dataset()
gr_membership_contour(f.estimated_membership_)
plt.show()

from mulearn import optimization as opt

try:
    f = FuzzyInductor(solve_strategy=(opt.solve_optimization_gurobi, {}))
    f.fit(iris_values_2d, mu['Iris-virginica'])

    gr_dataset()
    gr_membership_contour(f.estimated_membership_)
    plt.show()
except (ModuleNotFoundError, ValueError):
    print('Gurobi not available')

f = FuzzyInductor(fuzzifier=(fuzzifier.ExponentialFuzzifier,
                             {'profile': 'alpha', 'alpha': 0.15}),
                  k=kernel.GaussianKernel(1.5),
                  solve_strategy=(opt.solve_optimization_tensorflow,
                                  {'n_iter': 20}),
                  return_profile=True)
f.fit(iris_values_2d, mu['Iris-virginica'])

gr_dataset()
gr_membership_contour(f.estimated_membership_)
plt.show()

plt.plot(f.profile_[0], mu['Iris-virginica'], '.')
plt.plot(f.profile_[1], f.profile_[2])
plt.ylim((-0.1, 1.1))
plt.show()

sigmas = [.225,.5]
parameters = {'c': [1,10,100],
              'k': [kernel.GaussianKernel(s) for s in sigmas]}

from sklearn.model_selection import GridSearchCV
from sklearn.exceptions import FitFailedWarning

import logging
import warnings

logging.getLogger('mulearn').setLevel(logging.ERROR)

f = FuzzyInductor()

with warnings.catch_warnings():
    warnings.simplefilter('ignore', FitFailedWarning)
    virginica = GridSearchCV(f, param_grid=parameters, cv=2)
    virginica.fit(iris_values_2d, mu['Iris-virginica'])

gr_dataset()
gr_membership_contour(virginica.best_estimator_.estimated_membership_)
plt.show()

import pickle

saved_estimator = pickle.dumps(virginica.best_estimator_)
loaded_estimator = pickle.loads(saved_estimator)

gr_dataset()
gr_membership_contour(loaded_estimator.estimated_membership_)
plt.show()
```
<a href="https://colab.research.google.com/github/ewotawa/secure_private_ai/blob/master/Section_4_Encrypted_Deep_Learning.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Section: Encrypted Deep Learning - Lesson: Reviewing Additive Secret Sharing - Lesson: Encrypted Subtraction and Public/Scalar Multiplication - Lesson: Encrypted Computation in PySyft - Project: Build an Encrypted Database - Lesson: Encrypted Deep Learning in PyTorch - Lesson: Encrypted Deep Learning in Keras - Final Project ``` # PySyft !pip install syft import syft as sy # PyTorch !pip install torch !pip install torchvision import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torch.utils.data import TensorDataset, DataLoader import torchvision from torchvision import datasets, transforms # Numpy import numpy as np # time import time ``` # Lesson: Reviewing Additive Secret Sharing _For more great information about SMPC protocols like this one, visit https://mortendahl.github.io. 
With permission, Morten's work directly inspired this first teaching segment._ ``` import random import numpy as np BASE = 10 PRECISION_INTEGRAL = 8 PRECISION_FRACTIONAL = 8 Q = 293973345475167247070445277780365744413 PRECISION = PRECISION_INTEGRAL + PRECISION_FRACTIONAL assert(Q > BASE**PRECISION) def encode(rational): upscaled = int(rational * BASE**PRECISION_FRACTIONAL) field_element = upscaled % Q return field_element def decode(field_element): upscaled = field_element if field_element <= Q/2 else field_element - Q rational = upscaled / BASE**PRECISION_FRACTIONAL return rational def encrypt(secret): first = random.randrange(Q) second = random.randrange(Q) third = (secret - first - second) % Q return [first, second, third] def decrypt(sharing): return sum(sharing) % Q def add(a, b): c = list() for i in range(len(a)): c.append((a[i] + b[i]) % Q) return tuple(c) x = encrypt(encode(5.5)) x y = encrypt(encode(2.3)) y z = add(x,y) z decode(decrypt(z)) ``` # Lesson: Encrypted Subtraction and Public/Scalar Multiplication ``` field = 23740629843760239486723 x = 5 bob_x_share = 2372385723 # random number alices_x_share = field - bob_x_share + x (bob_x_share + alices_x_share) % field field = 10 x = 5 bob_x_share = 8 alice_x_share = field - bob_x_share + x y = 1 bob_y_share = 9 alice_y_share = field - bob_y_share + y ((bob_x_share + alice_x_share) - (bob_y_share + alice_y_share)) % field ((bob_x_share - bob_y_share) + (alice_x_share - alice_y_share)) % field bob_x_share + alice_x_share + bob_y_share + alice_y_share bob_z_share = (bob_x_share - bob_y_share) alice_z_share = (alice_x_share - alice_y_share) (bob_z_share + alice_z_share) % field def sub(a, b): c = list() for i in range(len(a)): c.append((a[i] - b[i]) % Q) return tuple(c) field = 10 x = 5 bob_x_share = 8 alice_x_share = field - bob_x_share + x y = 1 bob_y_share = 9 alice_y_share = field - bob_y_share + y bob_x_share + alice_x_share bob_y_share + alice_y_share ((bob_y_share * 3) + (alice_y_share * 3)) % field def 
imul(a, scalar): # logic here which can multiply by a public scalar c = list() for i in range(len(a)): c.append((a[i] * scalar) % Q) return tuple(c) x = encrypt(encode(5.5)) x z = imul(x, 3) decode(decrypt(z)) ``` # Lesson: Encrypted Computation in PySyft ``` import syft as sy import torch as th hook = sy.TorchHook(th) from torch import nn, optim bob = sy.VirtualWorker(hook, id="bob").add_worker(sy.local_worker) alice = sy.VirtualWorker(hook, id="alice").add_worker(sy.local_worker) secure_worker = sy.VirtualWorker(hook, id="secure_worker").add_worker(sy.local_worker) x = th.tensor([1,2,3,4]) y = th.tensor([2,-1,1,0]) x = x.share(bob, alice, crypto_provider=secure_worker) y = y.share(bob, alice, crypto_provider=secure_worker) z = x + y z.get() z = x - y z.get() z = x * y z.get() z = x > y z.get() z = x < y z.get() z = x == y z.get() x = th.tensor([1,2,3,4]) y = th.tensor([2,-1,1,0]) x = x.fix_precision().share(bob, alice, crypto_provider=secure_worker) y = y.fix_precision().share(bob, alice, crypto_provider=secure_worker) z = x + y z.get().float_precision() z = x - y z.get().float_precision() z = x * y z.get().float_precision() z = x > y z.get().float_precision() z = x < y z.get().float_precision() z = x == y z.get().float_precision() ``` # Project: Build an Encrypted Database ``` # try this project here! ``` ### Instructor's work Rationale for exercise: * to show the general nature of encrypted computation - it can be based on tensor computations. Process: * take individual strings - key-value database of string representations * convert strings to tensor operations. * show how tensor operations can be used to perform queries.
Features * encrypted database: makes it so that the database owner * can't see any of the data (the data is encrypted while it's in the database) * can't see what people are querying * can't see the results of the queries * because all the values are encrypted using secure MPC (multi-party computation), you can have a group of database owners: multiple individuals that have joint ownership and joint governance over people's ability to query and use information. Come up with numerical representations for strings. Two general options: * one-hot representation * integer representations ``` # choose what subset of characters we want to support in our database. # create lookup tables that map characters to integers char2index = {} index2char = {} import string # what do we want to encode for i,char in enumerate(' ' + string.ascii_lowercase + '1234567890' + string.punctuation): char2index[char] = i index2char[i] = char str_input = "Hello" max_length = 8 # integer representation def string2values(str_input, max_length): # trim any strings that are too long, convert them to lowercase str_input = str_input[:max_length].lower() # if a string is too short, pad it with . if(len(str_input) < max_length): str_input = str_input + "." * (max_length - len(str_input)) values = list() for char in str_input: values.append(char2index[char]) values_t = torch.tensor(values).long() return values_t string2values("howdy!",8) # one-hot encoding def one_hot(index, length): # start with a tensor of zeros vect = torch.zeros(length).long() # whatever the index position is, set it to one vect[index] = 1 return vect one_hot(3,5) # representing characters one_hot(char2index['p'], len(index2char)) # one-hot matrix representation def string2one_hot_matrix(str_input, max_length): # trim any strings that are too long, convert them to lowercase str_input = str_input[:max_length].lower() # if a string is too short, pad it with . if(len(str_input) < max_length): str_input = str_input + "."
* (max_length - len(str_input)) char_vector = list() for char in str_input: char_v = one_hot(char2index[char], len(index2char)).unsqueeze(0) char_vector.append(char_v) result = torch.cat(char_vector, dim=0) return result string2one_hot_matrix("Hello", 8) # 8 rows representing 8 characters, 69 columns representing the 69 possible characters to be encoded. matrix = string2one_hot_matrix("Hello", 8) matrix.shape # database: store keys using the one-hot matrix encoding, store values using the integer representation (string2values). # test for characters in common m_a = string2one_hot_matrix("abcdefg", 8) m_b = string2one_hot_matrix("hijklmnop", 8) # only positions where the same character is present in both strings will return a one. No matches returns a tensor of zero. (m_a * m_b).sum() # test for characters in common m_a = string2one_hot_matrix("ThingOne", 8) m_b = string2one_hot_matrix("ThingTwo", 8) # only positions where the same character is present in both strings will return a one. No matches returns a tensor of zero. (m_a * m_b).sum() # Useful tool: allows us to use multiplication to tell whether or not a given key is a perfect match with a particular query. # test for characters in common m_a = string2one_hot_matrix("ThingOne", 8) m_b = string2one_hot_matrix("ThingTwo", 8) # sum along the 1 dimension: see how many characters are overlapping (m_a * m_b).sum(1) # Note: we mostly care whether or not the strings match completely. vect = (m_a * m_b).sum(1) print(vect) x = vect[0] for i in range(vect.shape[0] - 1): x = x * vect[i + 1] # Boolean value. 1: strings match, 0: strings do not match. # note: using only multiplication to encode a Boolean bit that determines whether or not the two keys match.
key_match = x print(key_match) # create a dummy key-value pair keys = list() values = list() keys.append(string2one_hot_matrix("key0", 8)) values.append(string2values("value0", 8)) keys.append(string2one_hot_matrix("key1", 8)) values.append(string2values("value1", 8)) def string_equal(str_a, str_b): vect = (str_a * str_b).sum(1) x = vect[0] for i in range(vect.shape[0] - 1): x = x * vect[i + 1] str_match = x return str_match # a query to a key-value store first needs to compute whether the query matches any of the other keys. query_str = "key1" # convert to the correct representation query_matrix = string2one_hot_matrix(query_str, 8) # perform the query: iterate all keys and figure out whether any of them match key_matches = list() for key in keys: key_match = string_equal(key, query_matrix) key_matches.append(key_match) print(key_matches) # we can mask out all the values that don't have matching keys. print(values) # result is the numerical representation of matching values # adds up all the values but first masks out the ones that don't have matching keys. result = values[0] * key_matches[0] for i in range(len(values) - 1): result += values[i+1] * key_matches[i+1] print(result) # how do we get a tensor that is strings instead of numbers? def values2string(input_values): s = "" for value in input_values: s += index2char[int(value)] return s values2string(result).replace(".", "") def query(query_str): query_matrix = string2one_hot_matrix(query_str, 8) key_matches = list() for key in keys: key_match = string_equal(key, query_matrix) key_matches.append(key_match) result = values[0] * key_matches[0] for i in range(len(values) - 1): result += values[i+1] * key_matches[i+1] return values2string(result).replace(".", "") query("key0") # how do we pack this into a database? 
# package the convenience functions # integer representation def string2values(str_input, max_length): # trim any strings that are too long, convert them to lowercase str_input = str_input[:max_length].lower() # if a string is too short, pad it with . if(len(str_input) < max_length): str_input = str_input + "." * (max_length - len(str_input)) values = list() for char in str_input: values.append(char2index[char]) values_t = torch.tensor(values).long() return values_t # one-hot encoding def one_hot(index, length): # start with a tensor of zeros vect = torch.zeros(length).long() # whatever the index position is, set it to one vect[index] = 1 return vect # one-hot matrix representation def string2one_hot_matrix(str_input, max_length): # trim any strings that are too long, convert them to lowercase str_input = str_input[:max_length].lower() # if a string is too short, pad it with . if(len(str_input) < max_length): str_input = str_input + "." * (max_length - len(str_input)) char_vector = list() for char in str_input: char_v = one_hot(char2index[char], len(index2char)).unsqueeze(0) char_vector.append(char_v) result = torch.cat(char_vector, dim=0) return result def string_equal(str_a, str_b): vect = (str_a * str_b).sum(1) x = vect[0] for i in range(vect.shape[0] - 1): x = x * vect[i + 1] str_match = x return str_match def values2string(input_values): s = "" for value in input_values: s += index2char[int(value)] return s class DB(): def __init__(self, max_key_len=8, max_val_len=8): # store the configured lengths instead of hard-coding 8 self.max_key_len = max_key_len self.max_val_len = max_val_len self.keys = list() self.values = list() self.keys.append(string2one_hot_matrix("key0", self.max_key_len)) self.values.append(string2values("value0", self.max_val_len)) self.keys.append(string2one_hot_matrix("key1", self.max_key_len)) self.values.append(string2values("value1", self.max_val_len)) def add_entry(self, key, value): self.keys.append(string2one_hot_matrix(key, self.max_key_len)) self.values.append(string2values(value, self.max_val_len)) def query(self, query_str): query_matrix = string2one_hot_matrix(query_str, self.max_key_len) key_matches = list() for key in self.keys: key_match = string_equal(key, query_matrix) key_matches.append(key_match) result = self.values[0] * key_matches[0] for i in range(len(self.values) - 1): result += self.values[i+1] * key_matches[i+1] return values2string(result).replace(".", "") db = DB() db.query("key0") db.add_entry("key2", "value2") db.add_entry("key3", "value3") db.query("key3") # now add encryption class EncryptedDB(): def __init__(self, *owners, max_key_len=8, max_val_len=8): self.max_key_len = max_key_len self.max_val_len = max_val_len self.keys = list() self.values = list() self.owners = owners def add_entry(self, key, value): key = string2one_hot_matrix(key, self.max_key_len) key = key.share(*self.owners) self.keys.append(key) # values use the integer representation so values2string can decode the query result value = string2values(value, self.max_val_len) value = value.share(*self.owners) self.values.append(value) def query(self, query_str): query_matrix = string2one_hot_matrix(query_str, self.max_key_len) query_matrix = query_matrix.share(*self.owners) key_matches = list() for key in self.keys: key_match = string_equal(key, query_matrix) key_matches.append(key_match) result = self.values[0] * key_matches[0] for i in range(len(self.values) - 1): result += self.values[i+1] * key_matches[i+1] result = result.get() return values2string(result).replace(".", "") edb = EncryptedDB(bob, alice, secure_worker) edb.add_entry("key0", "value0") edb.add_entry("key1", "value1") edb.query("key0") ``` # Lesson: Encrypted Deep Learning in PyTorch ### Train a Model ``` from torch import nn from torch import optim import torch.nn.functional as F # A Toy Dataset data = th.tensor([[0,0],[0,1],[1,0],[1,1.]], requires_grad=True) target = th.tensor([[0],[0],[1],[1.]], requires_grad=True) class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.fc1 = nn.Linear(2, 20) self.fc2 = nn.Linear(20, 1) def forward(self, x): x = self.fc1(x) x = F.relu(x) x = self.fc2(x) return x # A Toy Model model = Net() def train(): # Training Logic opt =
optim.SGD(params=model.parameters(),lr=0.1) for iter in range(20): # 1) erase previous gradients (if they exist) opt.zero_grad() # 2) make a prediction pred = model(data) # 3) calculate how much we missed loss = ((pred - target)**2).sum() # 4) figure out which weights caused us to miss loss.backward() # 5) change those weights opt.step() # 6) print our progress print(loss.data) train() model(data) ``` ## Encrypt the Model and Data ``` encrypted_model = model.fix_precision().share(alice, bob, crypto_provider=secure_worker) list(encrypted_model.parameters()) encrypted_data = data.fix_precision().share(alice, bob, crypto_provider=secure_worker) encrypted_data encrypted_prediction = encrypted_model(encrypted_data) encrypted_prediction.get().float_precision() ``` # Lesson: Encrypted Deep Learning in Keras ## Step 1: Public Training Welcome to this tutorial! In the following notebooks you will learn how to provide private predictions. By private predictions, we mean that the data is constantly encrypted throughout the entire process. At no point is the user sharing raw data, only encrypted (that is, secret shared) data. In order to provide these private predictions, Syft Keras uses a library called [TF Encrypted](https://github.com/tf-encrypted/tf-encrypted) under the hood. TF Encrypted combines cutting-edge cryptographic and machine learning techniques, but you don't have to worry about this and can focus on your machine learning application. You can start serving private predictions with only three steps: - **Step 1**: train your model with normal Keras. - **Step 2**: secure and serve your machine learning model (server). - **Step 3**: query the secured model to receive private predictions (client). Alright, let's go through these three steps so you can deploy impactful machine learning services without sacrificing user privacy or model security. 
Huge shoutout to the Dropout Labs ([@dropoutlabs](https://twitter.com/dropoutlabs)) and TF Encrypted ([@tf_encrypted](https://twitter.com/tf_encrypted)) teams for their great work which makes this demo possible, especially: Jason Mancuso ([@jvmancuso](https://twitter.com/jvmancuso)), Yann Dupis ([@YannDupis](https://twitter.com/YannDupis)), and Morten Dahl ([@mortendahlcs](https://github.com/mortendahlcs)). _Demo Ref: https://github.com/OpenMined/PySyft/tree/dev/examples/tutorials_ ## Train Your Model in Keras To use privacy-preserving machine learning techniques for your projects you should not have to learn a new machine learning framework. If you have basic [Keras](https://keras.io/) knowledge, you can start using these techniques with Syft Keras. If you have never used Keras before, you can learn a bit more about it through the [Keras documentation](https://keras.io). Before serving private predictions, the first step is to train your model with normal Keras. As an example, we will train a model to classify handwritten digits. To train this model we will use the canonical [MNIST dataset](http://yann.lecun.com/exdb/mnist/). We borrow [this example](https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py) from the reference Keras repository. To train your classification model, you just run the cell below. 
``` from __future__ import print_function import tensorflow.keras as keras from tensorflow.keras.datasets import mnist from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout, Flatten from tensorflow.keras.layers import Conv2D, AveragePooling2D from tensorflow.keras.layers import Activation batch_size = 128 num_classes = 10 epochs = 2 # input image dimensions img_rows, img_cols = 28, 28 # the data, split between train and test sets (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1) x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1) input_shape = (img_rows, img_cols, 1) x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train /= 255 x_test /= 255 print('x_train shape:', x_train.shape) print(x_train.shape[0], 'train samples') print(x_test.shape[0], 'test samples') # convert class vectors to binary class matrices y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) model = Sequential() model.add(Conv2D(10, (3, 3), input_shape=input_shape)) model.add(AveragePooling2D((2, 2))) model.add(Activation('relu')) model.add(Conv2D(32, (3, 3))) model.add(AveragePooling2D((2, 2))) model.add(Activation('relu')) model.add(Conv2D(64, (3, 3))) model.add(AveragePooling2D((2, 2))) model.add(Activation('relu')) model.add(Flatten()) model.add(Dense(num_classes, activation='softmax')) model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy']) model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test)) score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ## Save your model's weights for future private prediction model.save('short-conv-mnist.h5') ``` ## Step 2: Load and Serve the Model Now that you have a trained 
model with normal Keras, you are ready to serve some private predictions. We can do that using Syft Keras. To secure and serve this model, we will need three TFEWorkers (servers). This is because TF Encrypted under the hood uses an encryption technique called [multi-party computation (MPC)](https://en.wikipedia.org/wiki/Secure_multi-party_computation). The idea is to split the model weights and input data into shares, then send a share of each value to the different servers. The key property is that if you look at the share on one server, it reveals nothing about the original value (input data or model weights). We'll define a Syft Keras model like we did in the previous notebook. However, there is a trick: before instantiating this model, we'll run `hook = sy.KerasHook(tf.keras)`. This will add three important new methods to the Keras Sequential class: - `share`: will secure your model via secret sharing; by default, it will use the SecureNN protocol from TF Encrypted to secret share your model between each of the three TFEWorkers. Most importantly, this will add the capability of providing predictions on encrypted data. - `serve`: this function will launch a serving queue, so that the TFEWorkers can accept prediction requests on the secured model from external clients. - `shutdown_workers`: once you are done providing private predictions, you can shut down your model by running this function. It will direct you to shut down the server processes manually if you've opted to manually manage each worker. If you want to learn more about MPC, you can read this excellent [blog post](https://mortendahl.github.io/2017/04/17/private-deep-learning-with-mpc/).
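The share-splitting property described above can be illustrated in a few lines of plain Python. This is only a sketch of the additive secret sharing idea, not the Syft or TF Encrypted API, and the field size `Q` is an arbitrary choice for the illustration:

```python
import random

# Toy additive secret sharing over a finite field of size Q.
# Q is an arbitrary large modulus chosen for this sketch.
Q = 2**62 - 57

def split_into_shares(value, n_servers=3):
    """Split an integer into n_servers additive shares mod Q."""
    shares = [random.randrange(Q) for _ in range(n_servers - 1)]
    # Pick the last share so that all shares sum back to `value` mod Q.
    shares.append((value - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    """Recombine the shares; requires all of them."""
    return sum(shares) % Q

weight = 123456  # stands in for one fixed-point model weight
shares = split_into_shares(weight)
# Each share on its own is a uniformly random field element, so a single
# server learns nothing about `weight` -- only the sum reveals it.
assert reconstruct(shares) == weight
```

This is the same trick the earlier `encrypt`/`decrypt` cells used with three shares; TF Encrypted layers a full protocol (SecureNN) on top of this primitive.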
``` import numpy as np import tensorflow as tf from tensorflow.keras import Sequential from tensorflow.keras.layers import AveragePooling2D, Conv2D, Dense, Activation, Flatten, ReLU import syft as sy hook = sy.KerasHook(tf.keras) ``` ## Model As you can see, we define almost the exact same model as before, except we provide a `batch_input_shape`. This allows TF Encrypted to better optimize the secure computations via predefined tensor shapes. For this MNIST demo, we'll send input data with the shape of (1, 28, 28, 1). We also return the logit instead of softmax because this operation is complex to perform using MPC, and we don't need it to serve prediction requests. ``` num_classes = 10 input_shape = (1, 28, 28, 1) model = Sequential() model.add(Conv2D(10, (3, 3), batch_input_shape=input_shape)) model.add(AveragePooling2D((2, 2))) model.add(Activation('relu')) model.add(Conv2D(32, (3, 3))) model.add(AveragePooling2D((2, 2))) model.add(Activation('relu')) model.add(Conv2D(64, (3, 3))) model.add(AveragePooling2D((2, 2))) model.add(Activation('relu')) model.add(Flatten()) model.add(Dense(num_classes, name="logit")) ``` ### Load Pre-trained Weights With `load_weights` you can easily load the weights you have saved previously after training your model. ``` pre_trained_weights = 'short-conv-mnist.h5' model.load_weights(pre_trained_weights) ``` ## Step 3: Setup Your Worker Connectors Let's now connect to the TFEWorkers (`alice`, `bob`, and `carol`) required by TF Encrypted to perform private predictions. For each TFEWorker, you just have to specify a host. These workers run a [TensorFlow server](https://www.tensorflow.org/api_docs/python/tf/distribute/Server), which you can either manage manually (`AUTO = False`) or ask the workers to manage for you (`AUTO = True`). If choosing to manually manage them, you will be instructed to execute a terminal command on each worker's host device after calling `model.share()` below.
If all workers are hosted on a single device (e.g. `localhost`), you can choose to have Syft automatically manage each worker's TensorFlow server. ``` AUTO = False alice = sy.TFEWorker(host='localhost:4000', auto_managed=AUTO) bob = sy.TFEWorker(host='localhost:4001', auto_managed=AUTO) carol = sy.TFEWorker(host='localhost:4002', auto_managed=AUTO) ``` ## Step 4: Split the Model Into Shares Thanks to `sy.KerasHook(tf.keras)` you can call the `share` method to transform your model into a TF Encrypted Keras model. If you have asked to manually manage servers above then this step will not complete until they have all been launched. Note that your firewall may ask for Python to accept incoming connections. ``` model.share(alice, bob, carol) ``` ## Step 5: Launch 3 Servers ``` python -m tf_encrypted.player --config /tmp/tfe.config server0 python -m tf_encrypted.player --config /tmp/tfe.config server1 python -m tf_encrypted.player --config /tmp/tfe.config server2 ``` ## Step 6: Serve the Model Perfect! Now by calling `model.serve`, your model is ready to provide some private predictions. You can set `num_requests` to set a limit on the number of prediction requests served by the model; if not specified then the model will be served until interrupted. ``` model.serve(num_requests=3) ``` ## Step 7: Run the Client At this point, open up and run the companion notebook: Section 4b - Encrypted Keras Client ## Step 8: Shutdown the Servers Once the request limit above has been reached, the model will no longer be available for serving requests, but it's still secret shared between the three workers above. You can kill the workers by executing the cell below. **Congratulations** on finishing Part 12: Secure Classification with Syft Keras and TFE!
``` model.shutdown_workers() if not AUTO: process_ids = !ps aux | grep '[p]ython -m tf_encrypted.player --config /tmp/tfe.config' | awk '{print $2}' for process_id in process_ids: !kill {process_id} print("Process ID {id} has been killed.".format(id=process_id)) ``` # Keystone Project - Mix and Match What You've Learned Description: Take two of the concepts you've learned about in this course (Encrypted Computation, Federated Learning, Differential Privacy) and combine them for a use case of your own design. Extra credit if you can get your demo working with [WebSocketWorkers](https://github.com/OpenMined/PySyft/tree/dev/examples/tutorials/advanced/websockets-example-MNIST) instead of VirtualWorkers! Then take your demo or example application, write a blogpost, and share that blogpost in #general-discussion on OpenMined's slack!!! Inspiration: - This Course's Code: https://github.com/Udacity/private-ai - OpenMined's Tutorials: https://github.com/OpenMined/PySyft/tree/dev/examples/tutorials - OpenMined's Blog: https://blog.openmined.org ``` ```
``` #default_exp vision.utils #export from fastai2.torch_basics import * from fastai2.data.all import * from fastai2.vision.core import * #hide from nbdev.showdoc import * path = untar_data(URLs.IMAGENETTE) path ``` # Vision utils > Some utility functions to quickly download a bunch of images, check them and pre-resize them ``` #export def _download_image_inner(dest, inp, timeout=4): i,url = inp suffix = re.findall(r'\.\w+?(?=(?:\?|$))', url) suffix = suffix[0] if len(suffix)>0 else '.jpg' try: download_url(url, dest/f"{i:08d}{suffix}", overwrite=True, show_progress=False, timeout=timeout) except Exception as e: print(f"Couldn't download {url}.") with tempfile.TemporaryDirectory() as d: d = Path(d) url = "https://www.fast.ai/images/jh-head" _download_image_inner(d, (125,url)) assert (d/'00000125.jpg').is_file() #export def download_images(dest, url_file=None, urls=None, max_pics=1000, n_workers=8, timeout=4): "Download images listed in text file `url_file` to path `dest`, at most `max_pics`" if urls is None: urls = url_file.read().strip().split("\n")[:max_pics] dest = Path(dest) dest.mkdir(exist_ok=True) parallel(partial(_download_image_inner, dest, timeout=timeout), list(enumerate(urls)), n_workers=n_workers) with tempfile.TemporaryDirectory() as d: d = Path(d) url_file = d/'urls.txt' url_file.write("\n".join([f"https://www.fast.ai/images/{n}" for n in "jh-head thomas.JPG sg-head".split()])) download_images(d, url_file) for i in [0,2]: assert (d/f'0000000{i}.jpg').is_file() assert (d/f'00000001.JPG').is_file() #export def resize_to(img, targ_sz, use_min=False): "Size to resize to, to hit `targ_sz` at same aspect ratio, in PIL coords (i.e. w*h)" w,h = img.size min_sz = (min if use_min else max)(w,h) ratio = targ_sz/min_sz return int(w*ratio),int(h*ratio) class _FakeImg(): def __init__(self, size): self.size=size img = _FakeImg((200,500)) test_eq(resize_to(img, 400), [160,400]) test_eq(resize_to(img, 400, use_min=True), [400,1000]) #export def verify_image(fn): "Confirm that
`fn` can be opened" try: im = Image.open(fn) im.draft(im.mode, (32,32)) im.load() return True except: return False #export def verify_images(fns): "Find images in `fns` that can't be opened" return L(fns[i] for i,o in enumerate(parallel(verify_image, fns)) if not o) ``` # Export - ``` #hide from nbdev.export import notebook2script notebook2script() ```
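The aspect-ratio arithmetic inside `resize_to` above can be checked without PIL or fastai. The sketch below repeats the same computation on a plain `(width, height)` tuple (`resize_dims` is a hypothetical helper name for this illustration, not part of the library):

```python
def resize_dims(size, targ_sz, use_min=False):
    """Return the (w, h) that hits `targ_sz` on one side at the same
    aspect ratio -- the same arithmetic as `resize_to`, but on a plain
    (width, height) tuple instead of a PIL image."""
    w, h = size
    # Scale so the larger side (or the smaller one, with use_min=True)
    # becomes exactly targ_sz.
    min_sz = (min if use_min else max)(w, h)
    ratio = targ_sz / min_sz
    return int(w * ratio), int(h * ratio)

# Mirrors the _FakeImg checks in the notebook:
assert resize_dims((200, 500), 400) == (160, 400)
assert resize_dims((200, 500), 400, use_min=True) == (400, 1000)
```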
# Using the OpenACC Kernels Directive In the [main part](README.ipynb) of this lab you learned how to use the `acc parallel loop` directive to accelerate a simple application on multicore CPUs and GPUs. The `acc parallel loop` directive is really nice, because it's simple to understand what it does: it begins parallel execution and it runs the following loop in parallel. This works great if I'm sure I know which loops can and should be parallelized, but what if I'm not sure? As an alternative, OpenACC provides the `acc kernels` directive, which essentially states that the contained region of code is potentially interesting, but needs more analysis. It's then up to the compiler to decide whether the loops in that region can and should be parallelized for the processor you're targeting. Here's how it works. ## Kernels Directive The kernels directive allows the programmer to step back, and rely solely on the compiler. Let's look at the syntax: ```fortran !$acc kernels do i = 1, N < loop code > end do !$acc end kernels ``` Just like in the parallel directive example, we are parallelizing a single loop. Recall that when using the parallel directive, it must always be paired with the loop directive, otherwise the code will be improperly parallelized. The kernels directive does not follow the same rule, and in some compilers, adding the loop directive may limit the compiler's ability to optimize the code. As said previously, the kernels directive is the exact opposite of the parallel directive. This means that the compiler is making a lot of assumptions, and may even override the programmer's decision to parallelize code. Also, by default, the compiler will attempt to optimize the loop. The compiler is generally pretty good at optimizing loops, and sometimes may be able to optimize the loop in a way that the programmer cannot describe. However, usually, the programmer will be able to achieve better performance by optimizing the loop themselves.
If you run into a situation where the compiler refuses to parallelize a loop, you may override the compiler's decision. (However, keep in mind that by overriding the compiler's decision, you are taking responsibility for any mistakes that occur from parallelizing the code!) In this code segment, we are using the independent clause to assure the compiler that we think the loop is parallelizable. ```fortran !$acc kernels loop independent do i = 1, N < loop code > end do ``` One of the largest advantages of the kernels directive is its ability to parallelize many loops at once. For example, in the following code segment, we are able to effectively parallelize two loops at once by utilizing a kernels region (similar to the parallel region that we saw earlier). ```fortran !$acc kernels do i = 1, N < loop code > end do < some other sequential code > do j = 1, N < loop code > end do !$acc end kernels ``` By using the kernels directive, we can parallelize more than one loop (as many loops as we want, actually). We are also able to include sequential code between the loops, without needing to include multiple directives. Similar to before, let's look at a visual example of how the kernels directive works. ![kernels1](../images/kernels1f.png) ![kernels2](../images/kernels2f.png) OK, now it's your turn to try the `kernels` approach. Open **laplace2d.f90** (File -> Open -> laplace2d.f90) again and replace your `acc parallel loop` directives with `acc kernels` and rerun the code. Don't forget to save the code after making edits. ``` ! pgfortran -fast -ta=tesla:managed -Minfo=accel -o laplace laplace2d.f90 jacobi.f90 && echo "Compilation Successful!" && ./laplace ``` We should see similar performance to the previous version, but for some reason we don't. Let's see if the compiler output tells us anything.
``` calcnext: 58, Generating implicit copyout(anew(1:n-2,1:m-2)) Generating implicit copyin(a(:n-1,:m-1)) 59, Loop is parallelizable 60, Loop is parallelizable Accelerator kernel generated Generating Tesla code 59, !$acc loop gang, vector(4) ! blockidx%y threadidx%y 60, !$acc loop gang, vector(32) ! blockidx%x threadidx%x 63, Generating implicit reduction(max:error) ``` You should now see the performance back where it was previously. Notice that you did not need to identify the reduction on the variable `error` like you did with `parallel loop`. When using the `kernels` directive it is the compiler's responsibility to ensure that a loop is safe to parallelize, rather than the programmer's, so the PGI compiler detects and implicitly handles the reduction in cases like this one. If the performance or answers look wrong, feel free to take a peek at **our solution** (File -> Open -> solutions/laplace2d.kernels.f90). ## Conclusions Let's recap the two approaches OpenACC provides for parallelizing your application. * The `parallel loop` directive gives a lot of control to the programmer. The programmer decides what to parallelize, and how it will be parallelized. Any mistakes made in the parallelization are the fault of the programmer. It is recommended to use a `parallel loop` directive for each loop you want to parallelize. * The `kernels` directive leaves the majority of the control to the compiler. The compiler will analyze the loops, and decide which ones to parallelize. It may refuse to parallelize certain loops, but the programmer can override this decision. You may use the kernels directive to parallelize large portions of code, and these portions may include multiple loops. So which approach should you use in your application? It's really mostly personal preference.
The `kernels` directive is nice because, when it works properly, it requires very little thought from the programmer; but if the compiler is at all unsure whether a loop is safe to parallelize, it will not parallelize that loop. On the other hand, the compiler will always parallelize loops marked with the `parallel loop` directive, because the programmer has promised that it's safe to do so. At the end of the day, for most loops it's possible to get very similar performance using either approach, so you should use the one that you feel most comfortable and productive with.
``` %matplotlib inline from itertools import chain from tqdm import trange import numpy as np from scipy import linalg import matplotlib.pyplot as plt import matplotlib as mpl from sklearn.model_selection import train_test_split import numpy as np import pandas as pd import seaborn as sns sns.set(style="white") random_seed = 123 def generate_data(num_mode, except_num, radius=2, center=(0, 0), sigma=0.1, num_data_per_class=100000): total_data = {} t = np.linspace(0, 2*np.pi, 13) x = np.cos(t)*radius + center[0] y = np.sin(t)*radius + center[1] plt.figure() plt.plot(x,y) modes = np.vstack([x, y]).T for idx, mode in enumerate(modes[except_num:]): x = np.random.normal(mode[0], sigma, num_data_per_class) y = np.random.normal(mode[1], sigma, num_data_per_class) total_data[idx] = np.vstack([x, y]).T plt.plot(x, y) all_points = np.vstack([values for values in total_data.values()]) data_x, data_y = all_points[:,0], all_points[:,1] return total_data, all_points A_data_with_class, A_data = generate_data(13, 3, radius=2, center=(-2, -2)) B_data_with_class, B_data = generate_data(13, 3, radius=2, center=(2, 2)) A_train_np, A_test_np = train_test_split(A_data, test_size=0.33, random_state=random_seed) B_train_np, B_test_np = train_test_split(B_data, test_size=0.33, random_state=random_seed) def plot(data, color): plt.plot(data[:,0], data[:,1], color) plt.figure() #sns.kdeplot(data_x, data_y) plot(A_data, 'r.') plot(B_data, 'b.') def plot_with_class(data_with_class): for key, value in data_with_class.items(): plot(value, '.') plt.figure() plot_with_class(A_data_with_class) plot_with_class(B_data_with_class) import torch from torch import nn import torch.nn.functional as F from torch.autograd import Variable from torch.utils.data import TensorDataset, DataLoader class ListModule(nn.Module): def __init__(self, *args): super(ListModule, self).__init__() idx = 0 for module in args: self.add_module(str(idx), module) idx += 1 def __getitem__(self, idx): if idx < 0 or idx >= 
len(self._modules): raise IndexError('index {} is out of range'.format(idx)) it = iter(self._modules.values()) for i in range(idx): next(it) return next(it) def __iter__(self): return iter(self._modules.values()) def __len__(self): return len(self._modules) # dataset loader batch_size = 200 shuffle = False A_train, A_test = torch.from_numpy(A_train_np).float(), torch.from_numpy(A_test_np).float() B_train, B_test = torch.from_numpy(B_train_np).float(), torch.from_numpy(B_test_np).float() A_train_loader = DataLoader( TensorDataset(A_train, A_train), batch_size=batch_size, shuffle=shuffle) A_test_loader = DataLoader( TensorDataset(A_test, A_test), batch_size=batch_size, shuffle=shuffle) B_train_loader = DataLoader( TensorDataset(B_train, B_train), batch_size=batch_size, shuffle=shuffle) B_test_loader = DataLoader( TensorDataset(B_test, B_test), batch_size=batch_size, shuffle=shuffle) class Generator(nn.Module): def __init__(self, input_size, output_size, hidden_dims): super(Generator, self).__init__() self.layers = [] prev_dim = input_size for hidden_dim in hidden_dims: self.layers.append(nn.Linear(prev_dim, hidden_dim)) self.layers.append(nn.ReLU(True)) prev_dim = hidden_dim self.layers.append(nn.Linear(prev_dim, output_size)) self.layer_module = ListModule(*self.layers) def forward(self, x): out = x for layer in self.layers: out = layer(out) return out class Discriminator(nn.Module): def __init__(self, input_size, output_size, hidden_dims): super(Discriminator, self).__init__() self.layers = [] prev_dim = input_size for idx, hidden_dim in enumerate(hidden_dims): self.layers.append(nn.Linear(prev_dim, hidden_dim)) self.layers.append(nn.ReLU(True)) prev_dim = hidden_dim self.layers.append(nn.Linear(prev_dim, output_size)) self.layers.append(nn.Sigmoid()) self.layer_module = ListModule(*self.layers) def forward(self, x): out = x for layer in self.layers: out = layer(out) return out.view(-1, 1) # network hidden_dim = 128 g_num_layer = 3 d_num_layer = 5 G_AB = 
Generator(2, 2, [hidden_dim] * g_num_layer) G_BA = Generator(2, 2, [hidden_dim] * g_num_layer) D_A = Discriminator(2, 1, [hidden_dim] * d_num_layer) D_B = Discriminator(2, 1, [hidden_dim] * d_num_layer) G_AB.cuda() G_BA.cuda() D_A.cuda() D_B.cuda() # optimizer lr = 0.0002 beta1 = 0.5 beta2 = 0.999 d = nn.MSELoss() bce = nn.BCELoss() optimizer_d = torch.optim.Adam( chain(D_A.parameters(), D_B.parameters()), lr=lr, betas=(beta1, beta2)) optimizer_g = torch.optim.Adam( chain(G_AB.parameters(), G_BA.parameters()), lr=lr, betas=(beta1, beta2)) # training num_epoch = 50000 real_label = 1 fake_label = 0 real_tensor = Variable(torch.FloatTensor(batch_size).cuda()) _ = real_tensor.data.fill_(real_label) print real_tensor.sum() fake_tensor = Variable(torch.FloatTensor(batch_size).cuda()) _ = fake_tensor.data.fill_(fake_label) print fake_tensor.sum() max_iteration = 50000 idx = 0 A_loader, B_loader = iter(A_train_loader), iter(B_train_loader) for idx in trange(max_iteration): try: x_A, x_B = A_loader.next()[0], B_loader.next()[0] except StopIteration: A_loader, B_loader = iter(A_train_loader), iter(B_train_loader) x_A, x_B = A_loader.next()[0], B_loader.next()[0] x_A, x_B = Variable(x_A.cuda()), Variable(x_B.cuda()) batch_size = x_A.size(0) real_tensor.data.resize_(batch_size).fill_(real_label) fake_tensor.data.resize_(batch_size).fill_(fake_label) # update D network D_A.zero_grad() D_B.zero_grad() x_AB = G_AB(x_A).detach() x_BA = G_BA(x_B).detach() x_ABA = G_BA(x_AB).detach() x_BAB = G_AB(x_BA).detach() l_d_A_real, l_d_A_fake = bce(D_A(x_A), real_tensor), bce(D_A(x_BA), fake_tensor) l_d_B_real, l_d_B_fake = bce(D_B(x_B), real_tensor), bce(D_B(x_AB), fake_tensor) l_d_A = l_d_A_real + l_d_A_fake l_d_B = l_d_B_real + l_d_B_fake l_d = l_d_A + l_d_B l_d.backward() optimizer_d.step() # update G network G_AB.zero_grad() G_BA.zero_grad() x_AB = G_AB(x_A) x_BA = G_BA(x_B) x_ABA = G_BA(x_AB) x_BAB = G_AB(x_BA) l_const_A = d(x_ABA, x_A) l_const_B = d(x_BAB, x_B) l_gan_A = 
bce(D_A(x_BA), real_tensor) l_gan_B = bce(D_B(x_AB), real_tensor) l_g = l_gan_A + l_gan_B + l_const_A + l_const_B l_g.backward() optimizer_g.step() if idx % (max_iteration/20) == 0: #print("[{}/{}] Loss_D: {:.4f} Loss_G: {:.4f}". \ # format(idx, max_iteration, l_d.data[0], l_g.data[0])) #print("[{}/{}] l_d_A_real: {:.4f} l_d_A_fake: {:.4f}, l_d_B_real: {:.4f}, l_d_B_fake: {:.4f}". \ # format(idx, max_iteration, l_d_A_real.data[0], l_d_A_fake.data[0], # l_d_B_real.data[0], l_d_B_fake.data[0])) #print("[{}/{}] l_const_A: {:.4f} l_const_B: {:.4f}, l_gan_A: {:.4f}, l_gan_B: {:.4f}". \ # format(idx, max_iteration, l_const_A.data[0], l_const_B.data[0], # l_gan_A.data[0], l_gan_B.data[0])) plt.figure() ax = sns.kdeplot(B_test_np[:1000], cmap="Reds", shade=True, shade_lowest=False) plot(B_test_np[:1000], 'k.') for key, value in A_data_with_class.items(): data = torch.from_numpy(value[:1000]).float() pred = G_AB(Variable(data).cuda()).data.cpu().numpy() plot(pred, '.') plt.figure() ax = sns.kdeplot(A_test_np[:1000], cmap="Blues", shade=True, shade_lowest=False) plot(A_test_np[:1000], 'k.') for key, value in B_data_with_class.items(): data = torch.from_numpy(value[:1000]).float() pred = G_BA(Variable(data).cuda()).data.cpu().numpy() plot(pred, '.') #ax = sns.kdeplot(G_AB(Variable(A_test[:1000]).cuda()).data.cpu().numpy(), # cmap="Reds", shade=True, shade_lowest=False) #ax = sns.kdeplot(G_BA(Variable(B_test[:1000]).cuda()).data.cpu().numpy(), # cmap="Blues", shade=True, shade_lowest=False) plt.show() plt.pause(0.05) Variable(torch.FloatTensor(batch_size)).cuda() ```
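All the discriminator and generator objectives above are built from `nn.BCELoss` scored against the `real_tensor`/`fake_tensor` labels. A minimal plain-Python sketch of the per-element binary cross-entropy that loss computes (the `eps` clamp here is our simplifying assumption to keep `log` finite, not PyTorch's exact internal clamping):

```python
import math

def bce(prediction, target):
    # Per-element binary cross-entropy: -(t*log(p) + (1-t)*log(1-p)),
    # with p clamped away from 0 and 1 so the logarithm stays finite.
    eps = 1e-12
    p = min(max(prediction, eps), 1.0 - eps)
    return -(target * math.log(p) + (1.0 - target) * math.log(1.0 - p))

# The discriminator wants real samples scored near 1 ...
assert bce(0.99, 1.0) < bce(0.5, 1.0)
# ... and generated (fake) samples scored near 0.
assert bce(0.01, 0.0) < bce(0.5, 0.0)
print(round(bce(0.5, 1.0), 4))  # 0.6931, i.e. -log(0.5)
```

The generator update then flips the labels: it feeds `real_tensor` for its own fakes (`l_gan_A`, `l_gan_B`), pushing the discriminator's score for generated points toward 1.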
# "pix2code" (PyTorch Implementation)

#### Version: *alpha (wip)*

This is a PyTorch-based implementation of the [work done by Tony Beltramelli on pix2code](https://arxiv.org/abs/1705.07962), using the Bootstrap/DSL dataset created for the paper. The implementation is an image-captioning pair of encoder and decoder models that use the feature extraction of ResNet-152 as a base.

Heavily influenced by these to get to a working prototype:
- [PyTorch tutorial on image captioning (GitHub)](https://github.com/yunjey/pytorch-tutorial/tree/master/tutorials/03-advanced/image_captioning)
- [FloydHub blog post on using keras to transform screenshots to code](https://blog.floydhub.com/Turning-design-mockups-into-code-with-deep-learning/)

```
import pdb
import os

import torch
import torchvision
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from torchvision import datasets, models, transforms

import numpy as np
from PIL import Image

# Hyperparams
batch_size = 4
embed_size = 256
num_epochs = 1000
learning_rate = 0.001
hidden_size = 512
num_layers = 1

# Other params
shuffle = True
num_workers = 2

# Logging Variables
save_after_x_epochs = 50
log_step = 5

# Paths
data_dir = './processed_data/data_train/'  # For testing purposes, we use a pre-split dataset rather than do it here
model_path = './models/'
vocab_path = './bootstrap.vocab'

# DO NOT CHANGE:
crop_size = 224  # Required by resnet152
```

# Building Vocabulary

We use the provided DSL vocabulary from pix2code's dataset, consisting of 18 tokens, each of which maps to Bootstrap-based HTML code.
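The vocabulary file is just whitespace-separated tokens. A minimal sketch of turning such a string into an index mapping, with `<unk>` appended as a fallback the way the code below does (the token names here are illustrative, and unlike the notebook's `set()`-based version this sketch numbers tokens deterministically):

```python
def build_index(words_raw):
    # Split a raw vocab string into unique tokens and number them in order.
    word2idx = {}
    for word in words_raw.split():
        if word not in word2idx:
            word2idx[word] = len(word2idx)
    word2idx.setdefault('<unk>', len(word2idx))  # fallback index for unseen tokens
    return word2idx

vocab = build_index('header row btn-green row quadruple')
print(len(vocab))                             # 5 (the duplicate 'row' collapsed, plus <unk>)
print(vocab.get('carousel', vocab['<unk>']))  # 4, i.e. unknown words map to <unk>
```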
```
def load_doc(filename):
    file = open(filename, 'r')
    text = file.read()
    file.close()
    return text

class Vocabulary (object):
    def __init__ (self):
        self.word2idx = {}
        self.idx2word = {}
        self.idx = 0

    def add_word (self, word):
        if not word in self.word2idx:
            self.word2idx[word] = self.idx
            self.idx2word[self.idx] = word
            self.idx += 1

    def __call__ (self, word):
        if not word in self.word2idx:
            return self.word2idx['<unk>']
        return self.word2idx[word]

    def __len__ (self):
        return len(self.word2idx)

def build_vocab (vocab_file_path):
    vocab = Vocabulary()

    # Load the vocab file (super basic split())
    words_raw = load_doc(vocab_file_path)
    words = set(words_raw.split(' '))

    for i, word in enumerate(words):
        vocab.add_word(word)

    vocab.add_word(' ')
    vocab.add_word('<unk>')  # If we find an unknown word

    print('Created vocabulary of ' + str(len(vocab)) + ' items from ' + vocab_file_path)
    return vocab

# Load vocabulary
vocab = build_vocab(vocab_path)
vocab_size = len(vocab)
```

# Build Dataset (images and captions)

Due to the way the dataset was provided ([.gui and .png files in the same folder](https://github.com/tonybeltramelli/pix2code/tree/master/datasets)), we create a custom PyTorch data loader which stores captions in memory, but loads images on-demand.
``` class ImageHTMLDataSet (Dataset): def __init__ (self, data_dir, vocab, transform): self.data_dir = data_dir self.vocab = vocab self.transform = transform self.raw_image_names = [] self.raw_captions = [] # Fetch all files from our data directoruy self.filenames = os.listdir(data_dir) self.filenames.sort() # Sort files based on their filetype # Assume associated training examples have same filenames for filename in self.filenames: if filename[-3:] == 'png': # Store image filename self.raw_image_names.append(filename) elif filename[-3:] == 'gui': # Load .gui file data = load_doc(data_dir + filename) self.raw_captions.append(data) print('Created dataset of ' + str(len(self)) + ' items from ' + data_dir) def __len__ (self): return len(self.raw_image_names) def __getitem__ (self, idx): img_path, raw_caption = self.raw_image_names[idx], self.raw_captions[idx] # Get image from filesystem image = Image.open(os.path.join(self.data_dir, img_path)).convert('RGB') image = self.transform(image) # Convert caption (string) to list of vocab ID's caption = [] caption.append(self.vocab('<START>')) # Remove newlines, separate words with spaces tokens = ' '.join(raw_caption.split()) # Add space after each comma tokens = tokens.replace(',', ' ,') # Split into words tokens = tokens.split(' ') caption.extend([self.vocab(token) for token in tokens]) caption.append(self.vocab('<END>')) target = torch.Tensor(caption) return image, target # See https://github.com/yunjey/pytorch-tutorial/tree/master/tutorials/03-advanced/image_captioning def collate_fn (data): # Sort datalist by caption length; descending order data.sort(key = lambda data_pair: len(data_pair[1]), reverse=True) images, captions = zip(*data) # Merge images (from tuple of 3D Tensor to 4D Tensor) images = torch.stack(images, 0) # Merge captions (from tuple of 1D tensor to 2D tensor) lengths = [len(caption) for caption in captions] # List of caption lengths targets = torch.zeros(len(captions), max(lengths)).long() for i, 
caption in enumerate(captions): end = lengths[i] targets[i, :end] = caption[:end] return images, targets, lengths # Transform to modify images for pre-trained ResNet base transform = transforms.Compose([ transforms.Resize((crop_size, crop_size)), # Match resnet size transforms.ToTensor(), # See for magic #'s: http://pytorch.org/docs/master/torchvision/models.html transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]) # Create data loader img_html_dataset = ImageHTMLDataSet(data_dir=data_dir, vocab=vocab, transform=transform) data_loader = DataLoader(dataset=img_html_dataset, batch_size=batch_size, shuffle=shuffle, num_workers=num_workers, collate_fn=collate_fn) ``` # Encoder Model Takes in input matrix of images and outputs features based on ResNet-152. ``` class EncoderCNN (nn.Module): def __init__ (self, embed_size): super(EncoderCNN, self).__init__() # Load pretrained resnet model resnet = models.resnet152(pretrained = True) # Remove the fully connected layers modules = list(resnet.children())[:-1] self.resnet = nn.Sequential(*modules) # Create our replacement layers # We reuse the in_feature size of the resnet fc layer for our first replacement layer = 2048 as of creation self.linear = nn.Linear(in_features = resnet.fc.in_features, out_features = embed_size) self.bn = nn.BatchNorm1d(num_features = embed_size, momentum = 0.01) print('EncoderCNN created with embed_size: ' + str(embed_size)) def forward (self, images): # Get the expected output from the fully connected layers # Fn: AvgPool2d(kernel_size=7, stride=1, padding=0, ceil_mode=False, count_include_pad=True) # Output: torch.Size([batch_size, 2048, 1, 1]) features = self.resnet(images) # Resize the features for our linear function features = features.view(features.size(0), -1) # Fn: Linear(in_features=2048, out_features=embed_size, bias=True) # Output: torch.Size([batch_size, embed_size]) features = self.linear(features) # Fn: BatchNorm1d(embed_size, eps=1e-05, momentum=0.01, 
affine=True) # Output: torch.Size([batch_size, embed_size]) features = self.bn(features) return features ``` # Decoder Model We can substitute the LSTM for a GRU to speed up our training (as suggested in the FloydHub post). ``` class DecoderRNN (nn.Module): def __init__ (self, embed_size, hidden_size, vocab_size, num_layers): super(DecoderRNN, self).__init__() # 19 word vocabulary, embed_size dimensional embeddings self.embed = nn.Embedding(num_embeddings = vocab_size, embedding_dim = embed_size) self.lstm = nn.LSTM(input_size = embed_size, hidden_size = hidden_size, num_layers = num_layers, batch_first=True) self.linear = nn.Linear(in_features = hidden_size, out_features = vocab_size) # Store the embed size for use when sampling self.embed_size = embed_size print('DecoderRNN created with embed_size: ' + str(embed_size)) def forward (self, features, captions, lengths): # 'captions' enters as shape torch.Size([batch_size, len(longest caption)]) # Fn: Embedding(vocab_size, embed_size) # Input: LongTensor (N = mini_batch, W = # of indices to extract per mini-batch) # Output: (N, W, embedding_dim) => torch.Size([batch_size, len(longest caption), embed_size]) embeddings = self.embed(captions) # Match features dimensions to embedding's features = features.unsqueeze(1) # torch.Size([4, 128]) => torch.Size([4, 1, 128]) embeddings = torch.cat((features, embeddings), 1) packed = nn.utils.rnn.pack_padded_sequence(input = embeddings, lengths = lengths, batch_first = True) # Fn: LSTM(embed_size, hidden_size, batch_first = True) hiddens, _ = self.lstm(packed) outputs = self.linear(hiddens[0]) return outputs # Sample method used for testing our model def sample (self, features, states=None): sampled_ids = [] inputs = features.unsqueeze(1) # Put the features input through our decoder for i iterations # TODO: Put this range into a parameter? 
for i in range(100): hiddens, states = self.lstm(inputs, states) outputs = self.linear(hiddens.squeeze(1)) predicted = outputs.max(dim = 1, keepdim = True)[1] sampled_ids.append(predicted) inputs = self.embed(predicted) inputs = inputs.view(-1, 1, self.embed_size) sampled_ids = torch.cat(sampled_ids, 1) return sampled_ids.squeeze() ``` # Training ``` encoder = EncoderCNN(embed_size) decoder = DecoderRNN(embed_size, hidden_size, vocab_size, num_layers) criterion = nn.CrossEntropyLoss() params = list(decoder.parameters()) + list(encoder.linear.parameters()) + list(encoder.bn.parameters()) optimizer = torch.optim.Adam(params, lr = learning_rate) if torch.cuda.is_available(): encoder.cuda() decoder.cuda() print('CUDA activated.') encoder.train() decoder.train() batch_count = len(data_loader) for epoch in range(num_epochs): for i, (images, captions, lengths) in enumerate(data_loader): # Shape: torch.Size([batch_size, 3, crop_size, crop_size]) images = Variable(images.cuda()) # Shape: torch.Size([batch_size, len(longest caption)]) captions = Variable(captions.cuda()) # lengths is a list of how long captions are in descending order (e.g., [77, 77, 75, 25]) # We remove the paddings from captions that are padded and then pack them into a single sequence # Our data loader's collate_fn adds extra zeros to the end of sequences that are too short # Shape: torch.Size([sum(lengths)]) targets = nn.utils.rnn.pack_padded_sequence(input = captions, lengths = lengths, batch_first = True)[0] # Zero out buffers encoder.zero_grad() decoder.zero_grad() # Forward, Backward, and Optimize features = encoder(images) # Outputs features of torch.Size([batch_size, embed_size]) outputs = decoder(features, captions, lengths) # CrossEntropyLoss is expecting: # Input: (N, C) where C = number of classes loss = criterion(outputs, targets) loss.backward() optimizer.step() if epoch % log_step == 0 and i == 0: print('Epoch [#%d], Loss: %.4f, Perplexity: %5.4f' % (epoch, loss.data[0], 
np.exp(loss.data[0]))) if (epoch + 1) % save_after_x_epochs == 0 and i == 0: # Save our models print('!!! saving models at epoch: ' + str(epoch)) torch.save(decoder.state_dict(),os.path.join(model_path, 'decoder-%d-%d.pkl' %(epoch+1, i+1))) torch.save(encoder.state_dict(), os.path.join(model_path, 'encoder-%d-%d.pkl' %(epoch+1, i+1))) print('done!') torch.save(encoder.state_dict(), os.path.join(model_path, 'encoder-1000-1.pkl')) torch.save(decoder.state_dict(), os.path.join(model_path, 'decoder-1000-1.pkl')) ``` # Testing ``` from nltk.translate.bleu_score import corpus_bleu def transform_idx_to_words (input): sampled_caption = [] for idx in input: word = vocab.idx2word[idx] sampled_caption.append(word) if word == '<END>': break output = ' '.join(sampled_caption[1:-1]) output = output.replace(' ,', ',') return output.split(' ') dev_data_dir = './processed_data/data_dev/' # Assume format: "encoder-# epoch-# iter.pkl" models_to_test = [ '100-1', '200-1', '300-1', '400-1', '500-1', '600-1', '700-1', '800-1', '900-1', '1000-1', ] bleu_scores = [] for model_idx, model_name in enumerate(models_to_test): encoder_model_path = os.path.join(model_path, 'encoder-{}.pkl'.format(model_name)) decoder_model_path = os.path.join(model_path, 'decoder-{}.pkl'.format(model_name)) # Create a data loader for our cross validation testing dev_img_html_dataset = ImageHTMLDataSet(data_dir=dev_data_dir, vocab=vocab, transform=transform) dev_data_loader = DataLoader(dataset=dev_img_html_dataset, batch_size=batch_size, shuffle=shuffle, num_workers=num_workers, collate_fn=collate_fn) # Load trained models dev_encoder = EncoderCNN(embed_size) dev_decoder = DecoderRNN(embed_size, hidden_size, len(vocab), num_layers) dev_encoder.load_state_dict(torch.load(encoder_model_path)) dev_decoder.load_state_dict(torch.load(decoder_model_path)) if torch.cuda.is_available(): dev_encoder.cuda() dev_decoder.cuda() dev_encoder.eval() dev_decoder.eval() dev_data_count = len(dev_data_loader.dataset) predicted, 
actual = list(), list() for i in range(dev_data_count): image, caption = dev_data_loader.dataset.__getitem__(i) image_tensor = Variable(image.unsqueeze(0).cuda()) features = dev_encoder(image_tensor) sampled_ids = dev_decoder.sample(features) sampled_ids = sampled_ids.cpu().data.numpy() predicted.append(sampled_ids) actual.append(caption.numpy()) predicted = [transform_idx_to_words(item) for item in predicted] actual = [[transform_idx_to_words(item)] for item in actual] bleu = corpus_bleu(actual, predicted) bleu_scores.append((model_name, bleu)) print('done with {} items for model: {}'.format(str(len(predicted)), model_name)) bleu_scores ```
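The sort-then-pad step inside `collate_fn` earlier can be sketched with plain lists, independent of torch (`pad_captions` is our name for the sketch):

```python
def pad_captions(captions):
    # Mirror of collate_fn's padding step: sort captions by length
    # (descending), then copy each one into a row of a zero-filled
    # matrix whose width is the longest caption's length.
    captions = sorted(captions, key=len, reverse=True)
    lengths = [len(c) for c in captions]
    padded = [[0] * max(lengths) for _ in captions]
    for i, caption in enumerate(captions):
        padded[i][:lengths[i]] = caption
    return padded, lengths

padded, lengths = pad_captions([[5, 6], [1, 2, 3, 4], [7, 8, 9]])
print(lengths)  # [4, 3, 2]
print(padded)   # [[1, 2, 3, 4], [7, 8, 9, 0], [5, 6, 0, 0]]
```

The descending sort matters because `pack_padded_sequence`, used in both the decoder and the training loop, expects batches ordered by length.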
# Machine Learning

- This project applies Logistic Regression to train a machine learning model on the **Titanic Dataset** to predict whether a passenger on the ship survived the incident.

## Environment Setup

- Import required libraries

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```

## Load Dataset & Perform Data Analysis

- First I will load the **Dataset** and perform **Exploratory Data Analysis** to evaluate my data

```
train = pd.read_csv('titanic_train.csv')
train.head()
```

- Now I will go ahead to check what is missing in my data

```
train.isnull()
```

- Notice that in the table above, I will get a False when a value is not null and a True when it is null.
- Next, with the DataFrame, I will create a heatmap

```
sns.heatmap(train.isnull(),yticklabels=False,cbar=False,cmap='viridis')
```

## Results Analysis

- Notice that each yellow bar represents a True, where we have missing data. So we can observe that the dataset has many missing values in 'Age' and 'Cabin'.
- Roughly 20% of the age information is missing, and that portion is small enough to be replaced with another form of data.
- But looking at the Cabin column, it appears that we are missing too much data to do anything meaningful with that column. So I will go ahead and drop the **Cabin** column.
- But before then, I will perform more Data Analysis to get more understanding of the Dataset

```
sns.set_style('whitegrid')
```

- I will go ahead and do the count plot of those who survived and those who did not.
Just to see the ratio of the actual target classes

```
sns.countplot(x='Survived',data=train)
```

- **0**: Did not survive
- **1**: Survived
- **It seems like the number of people that did not survive (about 550) is larger than the number of people that did survive (about 350).**
- Next I will try to understand the ratio of males to females that did or didn't survive

```
sns.countplot(x='Survived',hue='Sex',data=train, palette='RdBu_r')
```

- **Now I can see that the dataset is beginning to provide more meaningful results.**
- From the above analysis, it looks like people who did not survive are much more likely to be male
- And the number of females who survived is almost twice the number of males who survived.

**Next I will explore the survival data by the class of ticket that was bought**

```
sns.countplot(x='Survived',hue='Pclass',data=train)
```

- Again, I can see that people who did not survive are overwhelmingly people in the lower class (3rd class), mostly the people that did not pay a bit more.

**Next, I try to get the age range of the people on the Titanic.** Note that I will drop the null values.

```
sns.distplot(train['Age'].dropna(),kde=False,bins=30)
```

- Interestingly the distribution appears **bimodal**: there are young passengers between **ages 0-10**, so there are quite a few children in that zone.
- But after that we have more of an average age between **ages 20-30**.
- So there is quite a **skew towards the younger passengers**.
- The older people get, the fewer passengers of that age we have on board.
## Alternative method

```
train['Age'].plot.hist(bins=30)
```

- Next I will explore other columns
- Use **.info()** to get information about the dataset

```
train.info()
sns.countplot(x='SibSp',data=train)
```

## Analysis Interpretation

- From the graph above, it can be said that many passengers on board had neither children nor a spouse on board
- Looking at the second most popular bar, those passengers most likely had a spouse or just one child on board.
- The second most common type of passenger in our dataset is someone with just one sibling or spouse on board.
- **Another column that I have yet to explore is the amount paid.**
- So next I will explore the **Fare column**

```
train['Fare']
train['Fare'].hist(bins=40,figsize=(10,4))
```

- Results from this analysis indicate that the majority of **fare prices** range from 0-50
- So the results make sense, as most of the passengers were in the **3rd class** (the cheaper ticket class).

**Next I will use cufflinks to create an interactive plot**

```
import cufflinks as cf
cf.go_offline()
train['Fare'].iplot(kind='hist',bins=50)
```

## See next notebook for model training

- Now that I have gained useful information about my Dataset and completed the **Exploratory Data Analysis**, I will go ahead and train my predictive model.
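The "roughly 20% of the age information is missing" reading of the heatmap can be quantified directly; with pandas that is `train['Age'].isnull().mean()`. A minimal plain-Python sketch of the same computation, with `None` standing in for NaN and a made-up sample:

```python
def missing_fraction(values):
    # Fraction of missing entries, i.e. what Series.isnull().mean() reports.
    missing = sum(1 for v in values if v is None)
    return missing / len(values)

ages = [22.0, None, 26.0, 35.0, None, 54.0, None, 2.0, 27.0, 14.0]  # made-up sample
print(missing_fraction(ages))  # 0.3
```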
# Hierarchical Clustering

**Hierarchical clustering** refers to a class of clustering methods that seek to build a **hierarchy** of clusters, in which some clusters contain others. In this assignment, we will explore a top-down approach, recursively bipartitioning the data using k-means.

**Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook.

## Import packages

```
import graphlab
import matplotlib.pyplot as plt
import numpy as np
import sys
import os
import time
from scipy.sparse import csr_matrix
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances
%matplotlib inline
```

## Load the Wikipedia dataset

```
wiki = graphlab.SFrame('people_wiki.gl/')
```

As we did in previous assignments, let's extract the TF-IDF features:

```
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])
```

To run k-means on this dataset, we should convert the data matrix into a sparse matrix.

```
from em_utilities import sframe_to_scipy # converter

# This will take about a minute or two.
tf_idf, map_index_to_word = sframe_to_scipy(wiki, 'tf_idf')
```

To be consistent with the k-means assignment, let's normalize all vectors to have unit norm.

```
from sklearn.preprocessing import normalize
tf_idf = normalize(tf_idf)
```

## Bipartition the Wikipedia dataset using k-means

Recall our workflow for clustering text data with k-means:

1. Load the dataframe containing a dataset, such as the Wikipedia text dataset.
2. Extract the data matrix from the dataframe.
3. Run k-means on the data matrix with some value of k.
4. Visualize the clustering results using the centroids, cluster assignments, and the original dataframe. We keep the original dataframe around because the data matrix does not keep auxiliary information (in the case of the text dataset, the title of each article).

Let us modify the workflow to perform bipartitioning:

1. Load the dataframe containing a dataset, such as the Wikipedia text dataset.
2. Extract the data matrix from the dataframe.
3. Run k-means on the data matrix with k=2.
4. Divide the data matrix into two parts using the cluster assignments.
5. Divide the dataframe into two parts, again using the cluster assignments. This step is necessary to allow for visualization.
6. Visualize the bipartition of data.

We'd like to be able to repeat Steps 3-6 multiple times to produce a **hierarchy** of clusters such as the following:

```
                        (root)
                           |
             +-------------+-------------+
             |                           |
          Cluster                     Cluster
      +------+-----+             +------+-----+
      |            |             |            |
   Cluster      Cluster       Cluster      Cluster
```

Each **parent cluster** is bipartitioned to produce two **child clusters**. At the very top is the **root cluster**, which consists of the entire dataset.

Now we write a wrapper function to bipartition a given cluster using k-means. There are three variables that together comprise the cluster:

* `dataframe`: a subset of the original dataframe that corresponds to member rows of the cluster
* `matrix`: the same set of rows, stored in sparse matrix format
* `centroid`: the centroid of the cluster (not applicable for the root cluster)

Rather than passing around the three variables separately, we package them into a Python dictionary. The wrapper function takes a single dictionary (representing a parent cluster) and returns two dictionaries (representing the child clusters).

```
def bipartition(cluster, maxiter=400, num_runs=4, seed=None):
    '''cluster: should be a dictionary containing the following keys
                * dataframe: original dataframe
                * matrix: same data, in matrix format
                * centroid: centroid for this particular cluster'''

    data_matrix = cluster['matrix']
    dataframe   = cluster['dataframe']

    # Run k-means on the data matrix with k=2. We use scikit-learn here to simplify workflow.
    kmeans_model = KMeans(n_clusters=2, max_iter=maxiter, n_init=num_runs, random_state=seed, n_jobs=-1)
    kmeans_model.fit(data_matrix)
    centroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_model.labels_

    # Divide the data matrix into two parts using the cluster assignments.
    data_matrix_left_child, data_matrix_right_child = data_matrix[cluster_assignment==0], \
                                                      data_matrix[cluster_assignment==1]

    # Divide the dataframe into two parts, again using the cluster assignments.
    cluster_assignment_sa = graphlab.SArray(cluster_assignment)  # minor format conversion
    dataframe_left_child, dataframe_right_child = dataframe[cluster_assignment_sa==0], \
                                                  dataframe[cluster_assignment_sa==1]

    # Package relevant variables for the child clusters
    cluster_left_child  = {'matrix': data_matrix_left_child,
                           'dataframe': dataframe_left_child,
                           'centroid': centroids[0]}
    cluster_right_child = {'matrix': data_matrix_right_child,
                           'dataframe': dataframe_right_child,
                           'centroid': centroids[1]}

    return (cluster_left_child, cluster_right_child)
```

The following cell performs bipartitioning of the Wikipedia dataset. Allow 20-60 seconds to finish.

Note. For the purpose of the assignment, we set an explicit seed (`seed=1`) to produce identical outputs for every run. In practical applications, you might want to use different random seeds for all runs.

```
wiki_data = {'matrix': tf_idf, 'dataframe': wiki}  # no 'centroid' for the root cluster
left_child, right_child = bipartition(wiki_data, maxiter=100, num_runs=8, seed=1)
```

Let's examine the contents of one of the two clusters, which we call the `left_child`, referring to the tree visualization above.

```
left_child
```

And here is the content of the other cluster we named `right_child`.

```
right_child
```

## Visualize the bipartition

We provide you with a modified version of the visualization function from the k-means assignment.
For each cluster, we print the top 5 words with highest TF-IDF weights in the centroid and display excerpts for the 8 nearest neighbors of the centroid.

```
def display_single_tf_idf_cluster(cluster, map_index_to_word):
    '''map_index_to_word: SFrame specifying the mapping between words and column indices'''

    wiki_subset   = cluster['dataframe']
    tf_idf_subset = cluster['matrix']
    centroid      = cluster['centroid']

    # Print top 5 words with largest TF-IDF weights in the cluster
    idx = centroid.argsort()[::-1]
    for i in xrange(5):
        print('{0:s}:{1:.3f}'.format(map_index_to_word['category'][idx[i]], centroid[idx[i]])),
    print('')

    # Compute distances from the centroid to all data points in the cluster.
    distances = pairwise_distances(tf_idf_subset, [centroid], metric='euclidean').flatten()
    # Compute nearest neighbors of the centroid within the cluster.
    nearest_neighbors = distances.argsort()
    # For 8 nearest neighbors, print the title as well as first 180 characters of text.
    # Wrap the text at 80-character mark.
    for i in xrange(8):
        text = ' '.join(wiki_subset[nearest_neighbors[i]]['text'].split(None, 25)[0:25])
        print('* {0:50s} {1:.5f}\n  {2:s}\n  {3:s}'.format(wiki_subset[nearest_neighbors[i]]['name'],
              distances[nearest_neighbors[i]], text[:90], text[90:180] if len(text) > 90 else ''))
    print('')
```

Let's visualize the two child clusters:

```
display_single_tf_idf_cluster(left_child, map_index_to_word)
display_single_tf_idf_cluster(right_child, map_index_to_word)
```

The left cluster consists of athletes, whereas the right cluster consists of non-athletes. So far, we have a single-level hierarchy consisting of two clusters, as follows:

```
                      Wikipedia
                          +
                          |
        +-----------------+-----------------+
        |                                   |
        +                                   +
     Athletes                          Non-athletes
```

Is this hierarchy good enough? **When building a hierarchy of clusters, we must keep our particular application in mind.** For instance, we might want to build a **directory** for Wikipedia articles.
A good directory would let you quickly narrow down your search to a small set of related articles. The categories of athletes and non-athletes are too general to facilitate efficient search. For this reason, we decide to build another level into our hierarchy of clusters with the goal of getting more specific cluster structure at the lower level. To that end, we subdivide both the `athletes` and `non-athletes` clusters. ## Perform recursive bipartitioning ### Cluster of athletes To help identify the clusters we've built so far, let's give them easy-to-read aliases: ``` athletes = left_child non_athletes = right_child ``` Using the bipartition function, we produce two child clusters of the athlete cluster: ``` # Bipartition the cluster of athletes left_child_athletes, right_child_athletes = bipartition(athletes, maxiter=100, num_runs=8, seed=1) ``` The left child cluster mainly consists of baseball players: ``` display_single_tf_idf_cluster(left_child_athletes, map_index_to_word) ``` On the other hand, the right child cluster is a mix of football players and ice hockey players: ``` display_single_tf_idf_cluster(right_child_athletes, map_index_to_word) ``` **Note**. Concerning the use of "football": the occurrences of the word "football" above refer to [association football](https://en.wikipedia.org/wiki/Association_football). This sport is also known as "soccer" in the United States (to avoid confusion with [American football](https://en.wikipedia.org/wiki/American_football)). We will use "football" throughout when discussing topic representation. Our hierarchy of clusters now looks like this: ``` Wikipedia + | +--------------------------+--------------------+ | | + + Athletes Non-athletes + | +-----------+--------+ | | | + + football/ baseball ice hockey ``` Should we keep subdividing the clusters? If so, which cluster should we subdivide? To answer this question, we again think about our application. 
Since we organize our directory by topics, it would be nice to have topics that are about as coarse as each other. For instance, if one cluster is about baseball, we expect some other clusters about football, basketball, volleyball, and so forth. That is, **we would like to achieve a similar level of granularity for all clusters.** Notice that the right child cluster is more coarse than the left child cluster. The right cluster possesses a greater variety of topics than the left (ice hockey/football vs. baseball). So the right child cluster should be subdivided further to produce finer child clusters. Let's give the clusters aliases as well: ``` baseball = left_child_athletes ice_hockey_football = right_child_athletes ``` ### Cluster of ice hockey players and football players In answering the following quiz question, take a look at the topics represented in the top documents (those closest to the centroid), as well as the list of words with highest TF-IDF weights. **Quiz Question**. Bipartition the cluster of ice hockey and football players. Which of the two child clusters should be further subdivided? **Note**. To achieve consistent results, use the arguments `maxiter=100, num_runs=8, seed=1` when calling the `bipartition` function. 1. The left child cluster 2. The right child cluster ``` left_child_ice_hockey_football, right_child_ice_hockey_football = bipartition(ice_hockey_football, maxiter=100, num_runs=8, seed=1) display_single_tf_idf_cluster(left_child_ice_hockey_football, map_index_to_word) display_single_tf_idf_cluster(right_child_ice_hockey_football, map_index_to_word) ``` **Caution**. The granularity criterion is an imperfect heuristic and must be taken with a grain of salt. It takes a lot of manual intervention to obtain a good hierarchy of clusters. * **If a cluster is highly mixed, the top articles and words may not convey the full picture of the cluster.** Thus, we may be misled if we judge the purity of clusters solely by their top documents and words. 
* **Many interesting topics are hidden somewhere inside the clusters but do not appear in the visualization.** We may need to subdivide further to discover new topics. For instance, subdividing the `ice_hockey_football` cluster led to the appearance of golf. **Quiz Question**. Which diagram best describes the hierarchy right after splitting the `ice_hockey_football` cluster? Refer to the quiz form for the diagrams. ### Cluster of non-athletes Now let us subdivide the cluster of non-athletes. ``` # Bipartition the cluster of non-athletes left_child_non_athletes, right_child_non_athletes = bipartition(non_athletes, maxiter=100, num_runs=8, seed=1) display_single_tf_idf_cluster(left_child_non_athletes, map_index_to_word) display_single_tf_idf_cluster(right_child_non_athletes, map_index_to_word) ``` The first cluster consists of scholars, politicians, and government officials whereas the second consists of musicians, artists, and actors. Run the following code cell to make convenient aliases for the clusters. ``` scholars_politicians_etc = left_child_non_athletes musicians_artists_etc = right_child_non_athletes ``` **Quiz Question**. Let us bipartition the clusters `scholars_politicians_etc` and `musicians_artists_etc`. Which diagram best describes the resulting hierarchy of clusters for the non-athletes? Refer to the quiz for the diagrams. **Note**. Use `maxiter=100, num_runs=8, seed=1` for consistency of output. 
``` left_child_scholars_politicians_etc, right_child_scholars_politicians_etc = bipartition(scholars_politicians_etc, maxiter=100, num_runs=8, seed=1) display_single_tf_idf_cluster(left_child_scholars_politicians_etc, map_index_to_word) display_single_tf_idf_cluster(right_child_scholars_politicians_etc, map_index_to_word) left_child_musicians_artists_etc, right_child_musicians_artists_etc = bipartition(musicians_artists_etc, maxiter=100, num_runs=8, seed=1) display_single_tf_idf_cluster(left_child_musicians_artists_etc, map_index_to_word) display_single_tf_idf_cluster(right_child_musicians_artists_etc, map_index_to_word) ```
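The `bipartition` step used throughout this notebook can be sketched without GraphLab Create. Below is a minimal, illustrative version using scikit-learn's `KMeans` (an assumption — the notebook itself works with graphlab SFrames; here a plain NumPy array stands in for the TF-IDF matrix, and `bipartition_sketch` is a hypothetical helper, not the assignment's function):

```python
# Minimal sketch of 2-means bipartitioning, assuming scikit-learn in place of
# GraphLab Create. Each child gets its slice of the matrix and its centroid.
import numpy as np
from sklearn.cluster import KMeans

def bipartition_sketch(matrix, maxiter=100, num_runs=8, seed=1):
    """Split a data matrix into two child clusters with 2-means."""
    kmeans = KMeans(n_clusters=2, max_iter=maxiter, n_init=num_runs,
                    random_state=seed)
    labels = kmeans.fit_predict(matrix)
    left = {'matrix': matrix[labels == 0], 'centroid': kmeans.cluster_centers_[0]}
    right = {'matrix': matrix[labels == 1], 'centroid': kmeans.cluster_centers_[1]}
    return left, right

# Two well-separated blobs: each child should receive exactly one blob.
data = np.vstack([np.zeros((5, 2)), 10.0 * np.ones((5, 2))])
left, right = bipartition_sketch(data)
print(sorted([len(left['matrix']), len(right['matrix'])]))  # [5, 5]
```

Recursing on either returned child reproduces the tree-building procedure above.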
github_jupyter
# Model Selection This notebook will attempt to provide some justification for the particular model chosen for use throughout my final year project. I'll run through my thinking, show how the neural network approach performs in relation to some other likely choices and re-create my hyper-parameter tuning process. ## Requirements One goal of my project is to answer a fairly broad question: can we use machine learning to make inferences from the readings we'll be taking? To answer this, we need to do some kind of machine learning. There will be many sub-tasks, both classification and regression. For each one, we could go through a rigorous model selection approach, but it's nice to have one or two good general purpose models that we can use. We don't care so much about the question 'what is the absolute best performance we could achieve here' - all we want to know is whether ML can be useful in solving a given problem. So we want to find one or two models that can generalize to the different tasks we'll be trying. The chosen models will need to have the following characteristics: - General purpose. Some algorithms might work for one specific problem, but we want a model that can be easily adapted to different situations - Robust. We will be using fairly small training datasets, so the model will need some way of combatting overfitting. There is a danger that an overly complicated model will simply fit the noise in the training data and not generalize well. - Capable of fitting complex functions. Simple linear models might work for some of the problems considered, but a good model would be able to handle the non-linearities inherent in our experimental setup. Light scatters, diffracts, reflects and attenuates. A more complex model would probably be better suited to this kind of problem. 
With this in mind, let's load up some data and begin our experiments ``` # Load up the required libraries from matplotlib import pyplot as plt import numpy as np import pandas as pd from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression # Load the first dataset df1 = pd.read_csv('dataset_2q.csv') df1 = df1.drop('Unnamed: 0', axis=1) df1.head() ``` This data comes from the ring of 8. Each row contains 72 readings. 8 ADC readings from the PTs with no LEDs illuminated, 8 with the first LED lit, 8 with the second etc. These will be the inputs to our model. The quadrant and object columns are the output variables. We want a model to be able to predict these values based on the inputs. For a given model, we'll split the data into a training set and a test set. We train the model with the training data, then see how well it performs on the (never before seen) test data. Here, I'm only showing the score for one run. When doing these tests, I generally repeat many times for different (random) test and train sets generated from the initial dataset. This is called cross-validation, and later in this notebook I'll switch to using that to get more accurate comparative scores for choosing the best model. For now, we'll take the more simplistic approach. ``` # Split the data. X will be the inputs, Y the desired output X = df1[[str(i) for i in range(72)]] # The readings y = df1['object'] # What object is in the ring X_train, X_test, y_train, y_test = train_test_split(X, y) # Initiate a model. Let's try a simple linear model first. 
Since this is a classification task, # the logistic regression model (aka MaxEnt, logit regression, log-linear classifier) is the typical choice model = LogisticRegression() # We'll leave the parameters as default for now # Fit the model to the training data model.fit(X_train, y_train) # Score the model on the training data and the test data print('Score (training data)', model.score(X_train, y_train)) print('Score (test data)', model.score(X_test, y_test)) ``` 72% accuracy. Not bad! Notice that the score is better for the training data (expected) but still decent for the test data. This is because by default the model includes a regularization parameter to prevent overfitting. We'll see some models that don't do as well on the test but score nearly 100% on the training data because they don't address this problem. The next model we'll try is a classic for classification :) Support vector machines. ``` from sklearn.svm import LinearSVC model = LinearSVC() model.fit(X_train, y_train) print('Score (training data)', model.score(X_train, y_train)) print('Score (test data)', model.score(X_test, y_test)) from sklearn.svm import SVC model = SVC() model.fit(X_train, y_train) print('Score (training data)', model.score(X_train, y_train)) print('Score (test data)', model.score(X_test, y_test)) ``` The first one does pretty badly, and the second is clearly suffering from overfitting. I'll increase the regularization parameter until we start to get a more generalizable model. We do this by setting the 'gamma' parameter. Lower gamma means more emphasis on the regularization term. ``` for g in [0.1, 0.01, 0.001, 0.0001, 0.00001]: model = SVC(gamma=g) model.fit(X_train, y_train) print('Score (test data)', model.score(X_test, y_test), 'Gamma:', g) ``` This introduces the idea of hyperparameter tuning. We're tweaking one of the defining attributes of the model and seeing how we can maximise performance. 
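The manual gamma loop above is a one-parameter grid search scored on a single split. scikit-learn's `GridSearchCV` automates the same idea with cross-validation; a minimal sketch on synthetic data (the project's CSV isn't reproduced here, so treat `X_demo`/`y_demo` and the winning gamma as illustrative only):

```python
# Automated grid search over gamma with 5-fold cross-validation.
# Synthetic data (an assumption) stands in for the sensor readings.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X_demo, y_demo = make_classification(n_samples=200, n_features=20, random_state=0)
search = GridSearchCV(SVC(),
                      param_grid={'gamma': [0.1, 0.01, 0.001, 0.0001, 0.00001]},
                      cv=5)
search.fit(X_demo, y_demo)
print('best gamma:', search.best_params_['gamma'],
      'CV accuracy:', round(search.best_score_, 3))
```

The refitted best estimator is then available as `search.best_estimator_`.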
In this case, with gamma=0.0001 we see much better performance than with the default value. This is now our best performing model. We could probably get our score up a little bit more by tuning some other parameters and tweaking the gamma ever so slightly. But let's move on to other types of models. Here, I'll try a decision tree approach, followed by one of my favourite (for nostalgic reasons) models - Random Forest Classification. Decision tree: ``` from sklearn import tree model = tree.DecisionTreeClassifier() model.fit(X_train, y_train) print('Score (training data)', model.score(X_train, y_train)) print('Score (test data)', model.score(X_test, y_test)) # And after some tuning, removed for brevity, we still get pretty much the same result model = tree.DecisionTreeClassifier(max_depth=8, min_samples_leaf=1) model.fit(X_train, y_train) print('Score (training data)', model.score(X_train, y_train)) print('Score (test data)', model.score(X_test, y_test)) ``` Not great. Decision trees on their own aren't really used much - too simple and prone to overfitting. Random Forest ``` # First with default values from sklearn.ensemble import RandomForestClassifier model = RandomForestClassifier() model.fit(X_train, y_train) print('Score (training data)', model.score(X_train, y_train)) print('Score (test data)', model.score(X_test, y_test)) # Trying some better values for the important parameters. I arrive at these via intuition and # a bit of experimentation. I'm pretty good at it - see the leaderboard of the competition # ongoing at zindi.africa :) There is a general method which one can follow to optimize a random forest model model = RandomForestClassifier(n_estimators=200, max_depth=15, min_samples_leaf=2) model.fit(X_train, y_train) print('Score (training data)', model.score(X_train, y_train)) print('Score (test data)', model.score(X_test, y_test)) ``` This is roughly as good as the best SVC model, and we haven't yet done much tuning. 
But scoring on this one test set is a bit misleading, and we might end up picking parameters that only work for this one random split. So at this point I'll do some housekeeping and introduce a better scoring function. ``` from sklearn.model_selection import KFold, cross_val_score # Our new scoring function. Takes a model as input, as well as X and y. Does 5 splits and returns the sum of the scores def cvs(model, X, y, ns=5): k_fold = KFold(n_splits=ns) score = 0 for trains, tests in k_fold.split(X): model.fit(X.values[trains], y.values[trains]) score += model.score(X.values[tests], y.values[tests]) return score model1 = RandomForestClassifier(n_estimators=200, max_depth=15, min_samples_leaf=2) model2 = SVC(gamma=0.0001) print('Rforest score:', cvs(model1, X, y)) print('SVC score:', cvs(model2, X, y)) ``` The scoring is simple - bigger is better. We could adjust the function to give a meaningful value but all I care about for now is comparison. Notice it takes ages, so I'll stick to the old way until we need to settle a close contest. Now for the fun part - neural networks. I have big hopes for this approach, and not just because I've done this before. They're good at this sort of thing. Veng-Pedersen, P. and Modi, N.B., 1992. say they "are recognized mainly in terms of their adaptive learning and self-organization features and their nonlinear processing capability and are considered most suitable to deal with complex multivariate systems that are poorly understood and difficult to model by classical inductive, logically structured modeling techniques." ``` from sklearn.neural_network import MLPClassifier model = MLPClassifier() model.fit(X_train, y_train) print('Score (training data)', model.score(X_train, y_train)) print('Score (test data)', model.score(X_test, y_test)) ``` What!? That's terrible! Almost as bad as random guessing (there are 5 classes here). What could we be doing wrong? Well, one important fact: NNs are messed up by input scaling. 
So the first thing we should do is scale our inputs down to a sensible range. This is a common problem with a library to solve it for us, so here we go. Take two: ``` from sklearn.preprocessing import StandardScaler # This standardizes each feature (zero mean, unit variance) to keep inputs small and manageable scaler = StandardScaler() scaler.fit(X_train) X_train_scaled = scaler.transform(X_train) X_test_scaled = scaler.transform(X_test) model = MLPClassifier() model.fit(X_train_scaled, y_train) print('Score (training data)', model.score(X_train_scaled, y_train)) print('Score (test data)', model.score(X_test_scaled, y_test)) ``` Bam. Top score right away: ``` cvs(model, pd.DataFrame(scaler.transform(X)), y) # And the parameters I used for the project. model = MLPClassifier(hidden_layer_sizes=(20, 20, 20), max_iter=1000) model.fit(X_train_scaled, y_train) print('Score (training data)', model.score(X_train_scaled, y_train)) print('Score (test data)', model.score(X_test_scaled, y_test)) cvs(model, pd.DataFrame(scaler.transform(X)), y) ``` So why these parameters? 
Let's vary some things and show that this is the right choice: ``` # Fewer hidden layers model = MLPClassifier(hidden_layer_sizes=(20, 20), max_iter=1000) model.fit(X_train_scaled, y_train) print('Score (training data)', model.score(X_train_scaled, y_train)) print('Score (test data)', model.score(X_test_scaled, y_test)) cvs(model, pd.DataFrame(scaler.transform(X)), y) # Even fewer hidden layers model = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000) model.fit(X_train_scaled, y_train) print('Score (training data)', model.score(X_train_scaled, y_train)) print('Score (test data)', model.score(X_test_scaled, y_test)) cvs(model, pd.DataFrame(scaler.transform(X)), y) # More hidden layers model = MLPClassifier(hidden_layer_sizes=(20, 20, 20, 20), max_iter=1000) model.fit(X_train_scaled, y_train) print('Score (training data)', model.score(X_train_scaled, y_train)) print('Score (test data)', model.score(X_test_scaled, y_test)) cvs(model, pd.DataFrame(scaler.transform(X)), y) # Even more hidden layers model = MLPClassifier(hidden_layer_sizes=(20, 20, 20, 20, 20), max_iter=1000) model.fit(X_train_scaled, y_train) print('Score (training data)', model.score(X_train_scaled, y_train)) print('Score (test data)', model.score(X_test_scaled, y_test)) cvs(model, pd.DataFrame(scaler.transform(X)), y) ``` Let's just plot those results quickly, to drive my point home: ``` plt.plot([1, 2, 3, 4, 5], [3.46, 3.67, 3.725, 3.655, 3.337]) plt.xlabel('Number of Hidden Layers') plt.ylabel('Score') # Varying hidden layer size scores = [] ns = [1, 3, 5, 10, 20, 50, 100] for n in ns: model = MLPClassifier(hidden_layer_sizes=(n, n, n), max_iter=1000) model.fit(X_train_scaled, y_train) scores.append(cvs(model, pd.DataFrame(scaler.transform(X)), y)) plt.plot(ns, scores) plt.xlabel('Size of Hidden Layers') plt.ylabel('Score') ``` Score stops improving around n=20. Any more, and we're making a model that is needlessly complicated. I haven't found a good rule of thumb for this. 
More complex tasks (e.g. image inference) need more neurons, very simple tasks do better with ~10 in each hidden layer. #### So, we have a model that outperforms the rest. We might be able to tweak the random forest model to match its performance, or tweak the model further to squeeze some extra performance out of it. We could also try engineering extra features etc. But let's call it good enough, and see if we can use a similar model for some of the other problems we want to try and solve: ## Task 2 - Regression (position inference) ``` # Load the data r8 = pd.read_csv('posinf8_500_readings.csv') ```
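The scale-then-fit recipe that worked for classification carries over to regression via `MLPRegressor`, and bundling the scaler and network in a `Pipeline` keeps cross-validation honest (the scaler is re-fit on each training fold). A sketch on synthetic data — the posinf8 CSV isn't reproduced here, and the 72-feature shape is an assumption borrowed from the earlier dataset:

```python
# Scaler + MLP regressor packaged as one estimator. Synthetic regression data
# (an assumption) stands in for the position-inference readings.
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X_demo, y_demo = make_regression(n_samples=300, n_features=72,
                                 noise=0.1, random_state=0)
pipe = make_pipeline(StandardScaler(),
                     MLPRegressor(hidden_layer_sizes=(20, 20, 20),
                                  max_iter=2000, random_state=0))
scores = cross_val_score(pipe, X_demo, y_demo, cv=5)  # default scoring is R^2
print('fold count:', len(scores))
```

Because the pipeline is a single estimator, it drops straight into the `cvs` helper or `cross_val_score` with no manual transform calls.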
github_jupyter
# Constants_Sequences_and_Random_Values ``` from __future__ import print_function import tensorflow as tf import numpy as np from datetime import date date.today() author = "kyubyong. https://github.com/Kyubyong/tensorflow-exercises" tf.__version__ np.__version__ sess = tf.InteractiveSession() ``` NOTE on notation * _x, _y, _z, ...: NumPy 0-d or 1-d arrays * _X, _Y, _Z, ...: NumPy 2-d or higher dimensional arrays * x, y, z, ...: 0-d or 1-d tensors * X, Y, Z, ...: 2-d or higher dimensional tensors ## Constant Value Tensors Q1. Create a tensor of the shape [2, 3] with all elements set to zero. ``` out = tf.zeros([2, 3]) print(out.eval()) assert np.allclose(out.eval(), np.zeros([2, 3])) # tf.zeros == np.zeros ``` Q2. Let X be a tensor of [[1,2,3], [4,5,6]]. <br />Create a tensor of the same shape and dtype as X with all elements set to zero. ``` _X = np.array([[1,2,3], [4,5,6]]) X = tf.convert_to_tensor(_X) out = tf.zeros_like(X) print(out.eval()) assert np.allclose(out.eval(), np.zeros_like(_X)) # tf.zeros_like == np.zeros_like ``` Q3. Create a tensor of shape [2, 3] with all elements set to one. ``` out = tf.ones([2, 3]) print(out.eval()) assert np.allclose(out.eval(), np.ones([2, 3])) # tf.ones == np.ones ``` Q4. Let X be a tensor of [[1,2,3], [4,5,6]]. <br />Create a tensor of the same shape and dtype as X with all elements set to one. ``` _X = np.array([[1,2,3], [4,5,6]]) X = tf.convert_to_tensor(_X) out = tf.ones_like(X) print(out.eval()) assert np.allclose(out.eval(), np.ones_like(_X)) # tf.ones_like == np.ones_like ``` Q5. Create a tensor of the shape [3, 2], with all elements of 5. ``` out1 = tf.fill([3, 2], 5) out2 = tf.ones([3, 2]) * 5 out3 = tf.constant(5, shape=[3, 2]) assert np.allclose(out1.eval(), out2.eval()) assert np.allclose(out1.eval(), out3.eval()) assert np.allclose(out1.eval(), np.full([3, 2], 5)) print(out1.eval()) ``` Q6. 
Create a constant tensor of [[1, 3, 5], [4, 6, 8]], with dtype=float32 ``` out = tf.constant([[1, 3, 5], [4, 6, 8]], dtype=tf.float32) print(out.eval()) assert np.allclose(out.eval(), np.array([[1, 3, 5], [4, 6, 8]], dtype=np.float32)) ``` Q7. Create a constant tensor of the shape [2, 3], with all elements set to 4. ``` out = tf.constant(4, shape=[2, 3]) print(out.eval()) assert np.allclose(out.eval(), np.full([2, 3], 4)) ``` ## Sequences Q8. Create a 1-D tensor of 50 evenly spaced elements between 5 and 10 inclusive. ``` out = tf.linspace(5., 10., 50) print(out.eval()) assert np.allclose(out.eval(), np.linspace(5., 10., 50)) # tf.linspace == np.linspace ``` Q9. Create a tensor which looks like [10, 12, 14, 16, ..., 100]. ``` out = tf.range(10, 101, 2) print(out.eval()) assert np.allclose(out.eval(), np.arange(10, 101, 2)) # tf.range == np.arange # Note that the end is excluded, unlike tf.linspace ``` ## Random Tensors Q10. Create a random tensor of the shape [3, 2], with elements from a normal distribution of mean=0, standard deviation=2. ``` X = tf.random_normal([3, 2], 0, 2.) print(X.eval()) # tf.random_normal is almost equivalent to np.random.normal # But the order of the arguments is different. # _X = np.random.normal(0, 2., [3, 2]) ``` Q11. Create a random tensor of the shape [3, 2], with elements from a normal distribution of mean=0, standard deviation=1 such that any values don't exceed 2 standard deviations from the mean. ``` out = tf.truncated_normal([3, 2]) print(out.eval()) ``` Q12. Create a random tensor of the shape [3, 2], with all elements from a uniform distribution that ranges from 0 to 2 (exclusive). ``` out = tf.random_uniform([3, 2], 0, 2) print(out.eval()) # tf.random_uniform is almost equivalent to np.random.uniform # But the order of the arguments is different. # _X = np.random.uniform(0, 2., [3, 2]) ``` Q13. Let X be a tensor of [[1, 2], [3, 4], [5, 6]]. Shuffle X along its first dimension. 
``` _X = np.array([[1, 2], [3, 4], [5, 6]]) X = tf.constant(_X) out = tf.random_shuffle(X) print(out.eval()) # tf.random_shuffle() is not an in-place function, unlike np.random.shuffle(). # np.random.shuffle(_X) # print(_X) ``` Q14. Let X be a random tensor of the shape [10, 10, 3], with elements from a unit normal distribution. Crop X with the shape of [5, 5, 3]. ``` X = tf.random_normal([10, 10, 3]) out = tf.random_crop(X, [5, 5, 3]) print(out.eval()) ```
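The TF 1.x ops exercised above track NumPy almost one-to-one, as the inline assertions show. A quick pure-NumPy recap of the sequence semantics worth remembering — `linspace` includes its endpoint while `range`/`arange` excludes it:

```python
# Pure-NumPy recap of the tf.range / tf.linspace endpoint semantics.
import numpy as np

seq = np.arange(10, 101, 2)     # tf.range analogue: end excluded, so pass 101 to reach 100
lin = np.linspace(5., 10., 50)  # tf.linspace analogue: end included
print(seq[0], seq[-1], len(lin), lin[-1])  # 10 100 50 10.0
```

The random ops (`random_normal`, `random_uniform`) match `np.random` as well, differing only in argument order, as noted in the comments above.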
github_jupyter
# Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more. ``` %matplotlib inline %load_ext autoreload %autoreload 2 %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt ``` ## Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! ``` data_path = 'Bike-Sharing-Dataset/hour.csv' rides = pd.read_csv(data_path) rides.head() ``` ## Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model. ``` rides[:24*10].plot(x='dteday', y='cnt') ``` ### Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. 
This is simple to do with Pandas thanks to `get_dummies()`. ``` dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] for each in dummy_fields: dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) rides = pd.concat([rides, dummies], axis=1) fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr'] data = rides.drop(fields_to_drop, axis=1) data.head() ``` ### Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions. ``` quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'] # Store scalings in a dictionary so we can convert back later scaled_features = {} for each in quant_features: mean, std = data[each].mean(), data[each].std() scaled_features[each] = [mean, std] data.loc[:, each] = (data[each] - mean)/std ``` ### Splitting the data into training, testing, and validation sets We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders. ``` # Save data for approximately the last 21 days test_data = data[-21*24:] # Now remove the test data from the data set data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields] ``` We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). 
``` # Hold out the last 60 days or so of the remaining data as a validation set train_features, train_targets = features[:-60*24], targets[:-60*24] val_features, val_targets = features[-60*24:], targets[-60*24:] ``` ## Time to build the network Below you'll build your network. We've built out the structure. You'll implement both the forward pass and backwards pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. <img src="assets/neural_network.png" width=300px> The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*. We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*. > **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$. Below, you have these tasks: 1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function. 2. Implement the forward pass in the `train` method. 3. 
Implement the backpropagation algorithm in the `train` method, including calculating the output error. 4. Implement the forward pass in the `run` method. ``` ############# # In the my_answers.py file, fill out the TODO sections as specified ############# from my_answers import NeuralNetwork def MSE(y, Y): return np.mean((y-Y)**2) ``` ## Unit tests Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project. ``` import unittest inputs = np.array([[0.5, -0.2, 0.1]]) targets = np.array([[0.4]]) test_w_i_h = np.array([[0.1, -0.2], [0.4, 0.5], [-0.3, 0.2]]) test_w_h_o = np.array([[0.3], [-0.1]]) class TestMethods(unittest.TestCase): ########## # Unit tests for data loading ########## def test_data_path(self): # Test that file path to dataset has been unaltered self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv') def test_data_loaded(self): # Test that data frame loaded self.assertTrue(isinstance(rides, pd.DataFrame)) ########## # Unit tests for network functionality ########## def test_activation(self): network = NeuralNetwork(3, 2, 1, 0.5) # Test that the activation function is a sigmoid self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5)))) def test_train(self): # Test that weights are updated correctly on training network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() network.train(inputs, targets) self.assertTrue(np.allclose(network.weights_hidden_to_output, np.array([[ 0.37275328], [-0.03172939]]))) self.assertTrue(np.allclose(network.weights_input_to_hidden, np.array([[ 0.10562014, -0.20185996], [0.39775194, 0.50074398], [-0.29887597, 0.19962801]]))) def test_run(self): # Test correctness of run method network = NeuralNetwork(3, 2, 1, 0.5) 
network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() self.assertTrue(np.allclose(network.run(inputs), 0.09998924)) suite = unittest.TestLoader().loadTestsFromModule(TestMethods()) unittest.TextTestRunner().run(suite) ``` ## Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. ### Choose the number of iterations This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing. ### Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. 
Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.

### Choose the number of hidden nodes

In a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data.

Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn, and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.
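Before tuning hyperparameters, it can help to reproduce by hand the value that `test_run` in the unit tests above checks for. A minimal NumPy sketch of the forward pass, assuming (as in this project) a sigmoid hidden activation and a linear output unit:

```python
import numpy as np

def forward(X, w_i_h, w_h_o):
    # hidden layer uses a sigmoid; the output unit is linear (f(x) = x)
    hidden = 1 / (1 + np.exp(-(X @ w_i_h)))
    return hidden @ w_h_o

X = np.array([[0.5, -0.2, 0.1]])
w_i_h = np.array([[0.1, -0.2], [0.4, 0.5], [-0.3, 0.2]])
w_h_o = np.array([[0.3], [-0.1]])
out = forward(X, w_i_h, w_h_o)[0, 0]
print(out)  # ~0.09998924, the value test_run asserts
```

If your `run` method produces a different number with these test weights, the bug is in the forward pass rather than in backpropagation.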
```
import sys

####################
### Set the hyperparameters in your my_answers.py file ###
####################

from my_answers import iterations, learning_rate, hidden_nodes, output_nodes

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train':[], 'validation':[]}
for ii in range(iterations):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']

    network.train(X, y)

    # Printing out the training progress
    train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
    sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
                     + "% ... Training loss: " + str(train_loss)[:5] \
                     + " ... Validation loss: " + str(val_loss)[:5])
    sys.stdout.flush()

    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)

plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
```

## Check out your predictions

Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.

```
fig, ax = plt.subplots(figsize=(8,4))

mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()

dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
```

## OPTIONAL: Thinking about your results (this question will not be evaluated in the rubric)
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?

> **Note:** You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter

#### Your answer below
``` import ast import gc import glob import os import cv2 import joblib import logging import matplotlib.pyplot as plt import numpy as np import pandas as pd import pretrainedmodels import skimage import torch import torchvision from albumentations import (CenterCrop, Compose, HorizontalFlip, Normalize, PadIfNeeded, RandomCrop, RandomScale, Resize, VerticalFlip) from torch import nn from torch.nn import functional as F from torch.utils import data from torchvision import models from tqdm import tqdm from PIL import Image, ImageDraw %matplotlib inline %load_ext autoreload %autoreload 2 class AverageMeter(object): """Computes and stores the average and current value""" def __init__(self): self.reset() def reset(self): self.val = 0 self.avg = 0 self.sum = 0 self.count = 0 def update(self, val, n=1): self.val = val self.sum += val * n self.count += n self.avg = self.sum / self.count def accuracy(output, target, topk=(1,)): """Computes the accuracy over the k top predictions for the specified values of k""" with torch.no_grad(): maxk = max(topk) batch_size = target.size(0) _, pred = output.topk(maxk, 1, True, True) pred = pred.t() correct = pred.eq(target.view(1, -1).expand_as(pred)) res = [] for k in topk: correct_k = correct[:k].view(-1).float().sum(0, keepdim=True) res.append(correct_k.mul_(100.0 / batch_size)) return res def apk(actual, predicted, k=3): """ Source: https://github.com/benhamner/Metrics/blob/master/Python/ml_metrics/average_precision.py """ if len(predicted) > k: predicted = predicted[:k] score = 0.0 num_hits = 0.0 for i, p in enumerate(predicted): if p in actual and p not in predicted[:i]: num_hits += 1.0 score += num_hits / (i + 1.0) if not actual: return 0.0 return score / min(len(actual), k) def mapk(actual, predicted, k=3): """ Source: https://github.com/benhamner/Metrics/blob/master/Python/ml_metrics/average_precision.py """ return np.mean([apk(a, p, k) for a, p in zip(actual, predicted)]) def preds2catids(predictions): return 
pd.DataFrame(np.argsort(-predictions, axis=1)[:, :3], columns=['a', 'b', 'c']) plt.rcParams['figure.figsize'] = (14, 10) device = 'cuda:1' torch.backends.cudnn.benchmark = True debug = True grayscale = False df_num_rows = 25000 valid_size = 0.2 line_width = 6 base_size = 320 constant_size = 96 image_size = (constant_size, constant_size) num_epochs = 120 batch_size = 32 dfs_all = sorted(glob.glob('../../../input/train_chunks_full/*.csv')) if debug: dfs_all = dfs_all[:10] df_num_rows = 100 print('all DFS: {}'.format(len(dfs_all))) dfs_all = np.random.permutation(dfs_all) dfs_train = dfs_all[:-2] dfs_valid = dfs_all[-2:] print('DFs train: {}'.format(len(dfs_train))) print('DFs valid: {}'.format(len(dfs_valid))) df1 = pd.read_csv(dfs_train[0]) dfs_train r_ind = np.random.randint(0, len(df1)) raw_strokes = df1.drawing[r_ind] raw_strokes = ast.literal_eval(raw_strokes) line_width = 6 base_size = 256 center = True image = Image.new('RGB', (base_size, base_size)) draw = ImageDraw.Draw(image) for t, stroke in enumerate(raw_strokes): blue = int(t / len(raw_strokes) * 255.0) red = 255 - blue for i in range(len(stroke[0]) - 1): green = max( int(255.0 - i / (len(stroke[0]) - 1) * 255.0), 128) draw.line( (stroke[0][i], stroke[1][i], stroke[0][i + 1], stroke[1][i + 1]), fill=(red, green, blue), width=line_width) image = np.asarray(image) if center: nzero_vals = np.max(image, axis=0) > 0 nzero_ind = np.where(nzero_vals)[0] begin, end = nzero_ind[0], nzero_ind[-1] tx = (base_size - end + begin - 1) // 2 M = np.float32([[1, 0, tx], [0, 1, 0]]) image = cv2.warpAffine(image, M, image.shape[:2]) nzero_vals = np.max(image, axis=1) > 0 nzero_ind = np.where(nzero_vals)[0] begin, end = nzero_ind[0], nzero_ind[-1] tx = (base_size - end + begin - 1) // 2 M = np.float32([[1, 0, 0], [0, 1, tx]]) image = cv2.warpAffine(image, M, image.shape[:2]) plt.imshow(image) h, w = image_size[0], image_size[1] dataset_parameters = { 'base_size': base_size, 'line_width': line_width, 'df_num_rows': 
df_num_rows, 'divide': True, 'color': True, } valid_dataset_parameters = { 'base_size': base_size, 'line_width': line_width, 'df_num_rows': 30000, 'divide': True, 'color': True, } def train_transform(p=1): return Compose([ # VerticalFlip(p=0.5), HorizontalFlip(p=0.5), # RandomScale(scale_limit=0.3, p=0.5), Resize(h, w), # RandomCrop(h, w), # PadIfNeeded(h, w, 0), # Normalize(p=0, max_pixel_value=1.0) ], p=p) def valid_transform(p=1): return Compose([ Resize(h, w), # CenterCrop(h, w), # Normalize(p=0, max_pixel_value=1.0) ], p=p) from kaggledoodlewr.data.torch_dataset_doodle import DoodleDataset train_dataset = DoodleDataset( dfs_train[0], **dataset_parameters, transform=train_transform()) valid_dataset = DoodleDataset( dfs_valid[0], **valid_dataset_parameters, transform=valid_transform()) train_loader = data.DataLoader( train_dataset, batch_size=batch_size, shuffle=True, num_workers=1, pin_memory=True) valid_loader = data.DataLoader( valid_dataset, batch_size=batch_size, shuffle=False, num_workers=3, pin_memory=True) img = train_dataset.draw_pil(raw_strokes, center=True) plt.imshow(img) from kaggledoodlewr.models import * model_parameters = { 'num_classes': 340, 'pretrained': True, 'num_channels': 3, 'pooling_output_dim': 2, 'model_name': 'se_resnext50_32x4d', } def get_model(params): model = SENet(params) # model = ResNet_(params) model.train() model.to(device) return model ``` ``` asd %%time model = get_model(model_parameters) loss_fn = nn.CrossEntropyLoss() learning_rate = 1e-4 optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) n_iter_print = 10000 n_iter_eval = 10000 losses_ = AverageMeter() top1 = AverageMeter() top5 = AverageMeter() top_map3 = AverageMeter() for e in range(num_epochs): train_loss = [] n_iter = 0 for image, target in tqdm(train_loader): image = image.to(device) y_pred = model(image) target = target.to(device) loss = loss_fn(y_pred, target) optimizer.zero_grad() loss.backward() optimizer.step() train_loss.append(loss.item()) 
n_iter += 1 if n_iter % n_iter_print == 0: print('Step: {}'.format(n_iter)) if n_iter % n_iter_eval == 0: print('Evaluate at step: {}'.format(n_iter)) losses_ = AverageMeter() top1 = AverageMeter() top5 = AverageMeter() top_map3 = AverageMeter() for val_image, val_target in valid_loader: with torch.no_grad(): val_image = val_image.to(device) y_pred = model(val_image) val_target = val_target.to(device) loss = loss_fn(y_pred, val_target) acc1, acc5 = accuracy(y_pred, val_target, topk=(1, 5)) map3 = mapk( np.expand_dims(val_target.cpu().detach().numpy(), axis=-1), preds2catids(y_pred.cpu().detach().numpy()).values) losses_.update(loss.item(), val_image.size(0)) top1.update(acc1[0], val_image.size(0)) top5.update(acc5[0], val_image.size(0)) top_map3.update(map3, val_image.size(0)) losses_.reset() top1.reset() top5.reset() top_map3.reset() print('Val acc@3: {:.3f}, MAP@3: {:.3f}'.format(top5.avg / 100, top_map3.avg)) print("Epoch: %d, Train: %.3f, Val: %.3f" % (e, np.mean(train_loss), losses_.avg)) img_ = image.cpu().detach().numpy() img_.shape plt.imshow(img_[3, 0]) asd ``` ### skimage: - 10.7gb, 2.13it/s ### PIL - 10.7gb, 2.04it/s - 800 iters, 11.0gb - 2500 iters, 11.3gb 4 threads: - 10.5gb, 2.06it/s, 100 iters - 11.2gb, 650 iters ``` tr_imgs = [] tr_targets = [] for image, target in tqdm(train_loader): tr_imgs.append(image.cpu().detach().numpy()) tr_targets.append(target.cpu().detach().numpy()) tr_imgs = np.vstack(tr_imgs) tr_targets = np.concatenate(tr_targets) print(tr_imgs.shape) n = 8 fig, axs = plt.subplots(nrows=n, ncols=n, sharex=True, sharey=True, figsize=(16, 16)) for i in range(n**2): ax = axs[i // n, i % n] ax.imshow((-tr_imgs[i, 0, :, :] + 1)/2, cmap=plt.cm.gray) ax.axis('off') plt.tight_layout() plt.show() ``` ``` tr_targets ```
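As a small worked example of the MAP@3 metric used for evaluation above, re-implementing the `apk`/`mapk` helpers from the top of this notebook so the snippet is self-contained:

```python
import numpy as np

def apk(actual, predicted, k=3):
    # average precision at k, same logic as the Metrics-library version above
    if len(predicted) > k:
        predicted = predicted[:k]
    score, num_hits = 0.0, 0.0
    for i, p in enumerate(predicted):
        if p in actual and p not in predicted[:i]:
            num_hits += 1.0
            score += num_hits / (i + 1.0)
    return score / min(len(actual), k) if actual else 0.0

def mapk(actual, predicted, k=3):
    return np.mean([apk(a, p, k) for a, p in zip(actual, predicted)])

# correct class ranked first -> 1.0; correct class in second slot -> 0.5
print(mapk([[1], [2]], [[1, 5, 7], [9, 2, 3]]))  # 0.75
```

This makes the ranking sensitivity explicit: predicting the right class in the second of three slots earns only half credit.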
``` %matplotlib inline import numpy as np import itertools import tensorflow as tf from six.moves import cPickle as pickle from six.moves import range import random import matplotlib.pyplot as plt from tensorflow.contrib.layers import flatten from PIL import Image, ImageOps from scipy.ndimage.interpolation import shift from IPython.display import Image as Im from sklearn.utils import shuffle import sklearn from sklearn.svm import SVC import pandas # Our "library" from data_augmentation import * pickle_file = '../dataset/arbimon_0.pickle' def execute(aug_shifts, pickle_file): print("=====================") print("Aug Shifts: " + str(aug_shifts + 1)) with open(pickle_file, 'rb') as f: save = pickle.load(f) train_dataset = save['train_dataset'] train_labels = save['train_labels'] test_dataset = save['test_dataset'] test_labels = save['test_labels'] del save print('Original Training Set Shape: ', train_dataset.shape, train_labels.shape) print('Original Test Set Shape: ', test_dataset.shape, test_labels.shape) augmented_train_dataset, augmented_train_labels = diagonal_augmentation(train_dataset, aug_shifts, train_labels) #augmented_test_dataset, augmented_test_labels = diagonal_augmentation(test_dataset, aug_shifts, test_labels) print() print('Augmented Training Set Shape: ', augmented_train_dataset.shape, augmented_train_labels.shape) #print('Augmented Test Set Shape: ', augmented_test_dataset.shape, augmented_test_labels.shape) augmented_train_dataset = svm_reformat(augmented_train_dataset) #augmented_test_dataset = svm_reformat(augmented_test_dataset) test_dataset = svm_reformat(test_dataset) X_train = augmented_train_dataset #X_test = augmented_test_dataset test_dataset = test_dataset y_train = augmented_train_labels #y_test = augmented_test_labels X_train, y_train = shuffle(X_train, y_train) #augmented_rendimiento = [] #augmented_confusion_matrices = [] non_augmented_rendimiento = [] non_augmented_confusion_matrices = [] for i in range(1): print() print("Sample #", 
str(i+1)) #augmented_prediction_labels = [] non_augmented_prediction_labels = [] clf = SVC(probability = True) clf.fit(X_train, y_train) #augmented_test_accuracy = clf.score(X_test, y_test) #augmented_predictions = clf.predict_proba(X_test) #print("Augmented Test Accuracy = {:.3f}".format(augmented_test_accuracy)) non_augmented_test_accuracy = clf.score(test_dataset, test_labels) non_augmented_predictions = clf.predict_proba(test_dataset) print("Non-Augmented Test Accuracy = {:.3f}".format(non_augmented_test_accuracy)) #for prediction in augmented_predictions: # augmented_prediction_labels.append(np.argmax(prediction)) #augmented_cm = sklearn.metrics.confusion_matrix(y_test, augmented_prediction_labels) for prediction in non_augmented_predictions: non_augmented_prediction_labels.append(np.argmax(prediction)) non_augmented_cm = sklearn.metrics.confusion_matrix(test_labels, non_augmented_prediction_labels) #augmented_rendimiento.append(augmented_test_accuracy) #augmented_confusion_matrices.append([augmented_cm]) non_augmented_rendimiento.append(non_augmented_test_accuracy) non_augmented_confusion_matrices.append([non_augmented_cm]) #augmented_confusion_data.loc[len(augmented_confusion_data)] = augmented_confusion_matrices #augmented_performance.loc[len(augmented_performance)] = augmented_rendimiento non_augmented_confusion_data.loc[len(non_augmented_confusion_data)] = non_augmented_confusion_matrices non_augmented_performance.loc[len(non_augmented_performance)] = non_augmented_rendimiento #augmented_confusion_data = pandas.DataFrame(columns = list('123')) #augmented_performance = pandas.DataFrame(columns = list('123')) non_augmented_confusion_data = pandas.DataFrame(columns = list('1')) non_augmented_performance = pandas.DataFrame(columns = list('1')) for i in range(22): execute(aug_shifts = i, pickle_file = pickle_file) #augmented_performance.to_pickle('results/svm_diagonal_augmented_test/svm_performance_diagonal_augmented_test.pkl') 
#augmented_confusion_data.to_pickle('results/svm_diagonal_augmented_test/svm_confusion_matrices_diagonal_augmented_test.pkl') non_augmented_confusion_data.to_pickle('results/svm_diagonal_non_augmented_test/svm_confusion_matrices_diagonal_non_augmented_test.pkl') non_augmented_performance.to_pickle('results/svm_diagonal_non_augmented_test/svm_performance_diagonal_non_augmented_test.pkl') ```
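The prediction loop above converts each `predict_proba` row into a class label by taking the argmax; a minimal sketch of that step with hypothetical probability rows:

```python
import numpy as np

# each row is a per-class probability vector, as returned by clf.predict_proba
probs = np.array([[0.1, 0.7, 0.2],
                  [0.6, 0.3, 0.1]])
labels = [int(np.argmax(p)) for p in probs]
print(labels)  # [1, 0]
```

These labels are what the notebook then feeds to `sklearn.metrics.confusion_matrix` alongside the true labels.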
# Marvin Query Results

Now that you have performed your first query, let's take a look at what Marvin returns as a Marvin Results object.

```
from marvin import config
config.setRelease('MPL-4')
from marvin.tools.query import Query, Results, doQuery

# make a query
myquery = 'nsa.sersic_logmass > 10.3 AND nsa.z < 0.1'
q = Query(search_filter=myquery)

# run a query
r = q.run()
```

Let's look at the Marvin Results object. We can see how many results were returned with r.count and r.totalcount.

```
print(r)
print('Total count', r.totalcount)
print('Page count', r.count)
```

Queries returning more than 1000 results are paginated into chunks of 100. For anything less than 1000, the query will return everything. Totalcount shows the total result count, and count shows the returned count in just that page.

The results from your query are stored in the .results attribute, as a list of NamedTuples. These are like regular tuples except they have names (like dictionary key names).

```
r.results
```

You can access specific values of the results through tuple indexing or via the named attribute, but this is not recommended in general.

```
res = r.results[0]
print('single row', res)
print('mangaid', res[0])
print('mangaid', res.mangaid)

# what are the columns
print('columns', r.columns)
print(res.sersic_logmass)
```

**But be careful**: names using the full `table.parameter` syntax cannot be accessed via the named attribute. This syntax is returned when two parameters with non-unique names are returned, like `ifu.name` and `bintype.name`. Instead we recommend using the Marvin Results **getListOf** and **getDictOf** methods.
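The underlying reason dotted names can't be attribute-accessed is a plain Python constraint, not anything Marvin-specific: a namedtuple field must be a valid identifier, and dots are not allowed. A quick standard-library illustration (Marvin's own renaming scheme may differ):

```python
from collections import namedtuple

# field names containing '.' are invalid Python identifiers;
# rename=True silently replaces them with positional names _0, _1, ...
Row = namedtuple('Row', ['cube.mangaid', 'nsa.z'], rename=True)
print(Row._fields)  # ('_0', '_1')
```

So a field like `cube.mangaid` can never become a usable attribute, which is why Marvin routes you to `getListOf`/`getDictOf` instead.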
```
# if you want to retrieve a list of a single parameter, use getListOf
mangaid = r.getListOf('mangaid')
print(mangaid)
```

To see what columns are available, use r.columns and r.coltoparam.

```
# these are the column names in the results
print('columns', r.columns)
```

If you want to retrieve the results as a list of dictionaries or a dictionary of lists, use getDictOf.

```
# by default, getDictOf returns a list of dictionaries, that you can iterate over
mylist = r.getDictOf()
print(mylist)
print('mangaid', mylist[0]['cube.mangaid'], mylist[1]['cube.mangaid'])
```

By default, getDictOf returns a list of dictionaries. You can change the format returned using the **format_type** keyword: **format_type='dictlist'** returns a dictionary of lists instead.

```
mydict = r.getDictOf(format_type='dictlist')
print(mydict)
print('keys', mydict.keys())
print('mangaid', mydict['cube.mangaid'])
```

# Retrieving More Results

If your returned results have been paginated, you can retrieve more using **r.getNext**, **r.getPrevious**, and **r.getSubset**.

```
# get the next set of results
r.getNext()

# get only the next 10 results
r.getNext(chunk=10)

# get the previous 20 results
r.getPrevious(chunk=20)

# get a subset of results giving the starting index and number limit

# total results
print('total', r.totalcount)

# let's get a subset of 10 rows starting at 300
r.getSubset(300, limit=10)
```

# Sorting results

You can sort your results using the **r.sort** method. You can sort on any of the returned columns, using either the column name or the full parameter name.

```
# let's sort by redshift. Default is in ascending order
r.sort('z')

# or in descending order
r.sort('nsa.z', order='desc')
```

# Converting to Marvin Tool Objects

Once you have a set of results, you may want to work with them using Marvin Tools. You can easily convert to Marvin Tools using the method **r.convertToTool**. This method lets you convert to Marvin Cubes, Spaxels, Maps, RSS, or ModelCube objects.
**Note:** You must have the necessary parameters to initialize a particular Marvin object. ``` # See some results r.results[0:3] # Let's convert our results to Marvin Cube objects r.columns r.convertToTool('cube') # Your new objects are stored as a list in your results called objects r.objects ``` # Save your Results and restore them ``` # We strongly recommend saving to a Marvin pickle file (.mpf), so that you can restore the Results object later r.save('results.mpf') restored = Results.restore('results.mpf') # Saving to CSV, JSON, xlsx, txt, or FITS df = r.toDataFrame() df.to_csv('results.csv') df.to_json('results.json') df.to_excel('results.xlsx') table = r.toTable() table.write('results.txt') r.toFits('results.fits') ```
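As an aside, the `.mpf` extension stands for "Marvin pickle file", which suggests a pickle-based format (an assumption here, not something this notebook confirms). The save/restore round-trip can be mimicked with the standard library on hypothetical data, kept in memory to avoid touching disk:

```python
import io
import pickle

results = {'mangaid': ['1-209232'], 'z': [0.0407]}  # hypothetical rows

buf = io.BytesIO()
pickle.dump(results, buf)    # analogous to r.save('results.mpf')
buf.seek(0)
restored = pickle.load(buf)  # analogous to Results.restore('results.mpf')
print(restored == results)   # True
```

The same caveat as with any pickle applies: only restore files you created yourself or trust.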
# Info

In this notebook we use a predefined test system and show how we can extract thermodynamic information from OpenMM. The test system of choice is a diatomic argon molecule.

## Targets

* learn how to use `openmmtools.testsystems.Diatom`
* learn how to extract different properties during a simulation run
* learn how to calculate the `temperature` and the `kinetic energy` by yourself

## OpenMMTools : Test systems

A full overview of all test systems is given in:

* https://openmmtools.readthedocs.io/en/0.17.0/testsystems.html

```
import simtk.openmm.app as app
import simtk.openmm as mm
from simtk.unit import *
import openmmtools
import matplotlib.pyplot as plt
import numpy as np
from copy import deepcopy
```

## Diatom

Create a free diatomic molecule with a single harmonic bond between the two atoms.

> The natural period of a harmonic oscillator is T = sqrt(m/K), so you will want to use an integration timestep smaller than ~ T/10.

Define parameters for a di-argon system.

```
K = 290.1 * kilocalorie / (angstrom**2 * mole)
r0 = 1.55 * angstrom
m1 = 39.948 * dalton
m2 = 39.948 * dalton
```

Estimate the required time step.

```
mu = (m1*m2)/(m1+m2) # reduced mass
T = sqrt(mu / K)
print("T : {}".format(T.in_units_of(femtosecond)))
print("T/10 : {}".format(T.in_units_of(femtosecond)/10))
```

Create the diatom system.

```
diatom = openmmtools.testsystems.Diatom(K=K, r0=r0, m1=m1, m2=m2)
```

Create a simulation object.

```
integrator = mm.VerletIntegrator(1 * femtosecond)
simulation = app.Simulation(topology=diatom.topology,
                            system=diatom.system,
                            integrator=integrator)
simulation.context.setPositions(diatom.positions)
simulation.context.setVelocitiesToTemperature(300*kelvin)
```

Simulate the system by repeatedly running it for short periods of time. After each period, extract the desired values from the trajectory and store them in our lists.
``` storage_harmonic_bond = { 'positions' : Quantity([], unit=angstrom), 'velocities' : Quantity([], unit=angstrom/femtosecond), 'forces' : Quantity([], unit=kilocalorie/(angstrom*mole)), 'potential_energy' : Quantity([], unit=kilocalorie_per_mole), 'kinetic_energy' : Quantity([], unit=kilocalorie_per_mole), } # define a state state = simulation.context.getState(getPositions=True, getForces=True, getVelocities=True, getEnergy=True) # add the first frame storage_harmonic_bond['positions'].append(deepcopy( state.getPositions() ) ) storage_harmonic_bond['velocities'].append(deepcopy( state.getVelocities() ) ) storage_harmonic_bond['forces'].append(deepcopy( state.getForces() )) storage_harmonic_bond['potential_energy'].append(deepcopy( state.getPotentialEnergy() )) storage_harmonic_bond['kinetic_energy'].append(deepcopy( state.getKineticEnergy() )) # repeatedly simulate the system for a short period # extract the properties from the state for i in range(1000): simulation.step(100) # define a state state = simulation.context.getState(getPositions=True, getForces=True, getVelocities=True, getEnergy=True) # update storage storage_harmonic_bond['positions'].append(deepcopy( state.getPositions() ) ) storage_harmonic_bond['velocities'].append(deepcopy( state.getVelocities() ) ) storage_harmonic_bond['forces'].append(deepcopy( state.getForces() )) storage_harmonic_bond['potential_energy'].append(deepcopy( state.getPotentialEnergy() )) storage_harmonic_bond['kinetic_energy'].append(deepcopy( state.getKineticEnergy() )) ``` ## Analysis Let's analyze what we did. ## Potential energy as function of distance As first we want to plot the potential energy as function of distance between the atoms. Therefore, we first need to calculate the distance between the atoms as done in `get_bond_distance`. 
```
def get_bond_distance(positions):
    # get the bond distance
    dx, dy, dz = positions[1] - positions[0]
    distance = sqrt(dx**2 + dy**2 + dz**2)
    return distance

list_r0 = Quantity([ get_bond_distance(pos) for pos in storage_harmonic_bond['positions'] ])

plt.title("Potential Energy")
plt.scatter(list_r0.value_in_unit(angstrom),
            storage_harmonic_bond['potential_energy'].value_in_unit(kilocalorie_per_mole))
plt.ylabel('Potential Energy [kcal/mole]')
plt.xlabel(r'distance [$\AA$]')
```

## Calculate Temperature

If we want to plot the temperature of the system, we have to calculate it as done in `calculate_temperature(kineticEnergy)`. As we need the degrees of freedom for this calculation, we can automatically calculate them with `get_dof(system)`.

```
def get_dof(system):
    "get the degrees of freedom"
    dof = 0
    for i in range(system.getNumParticles()):
        if system.getParticleMass(i) > 0*dalton:
            dof += 3
    dof -= system.getNumConstraints()
    if any(type(system.getForce(i)) == mm.CMMotionRemover for i in range(system.getNumForces())):
        dof -= 3
    return dof

def calculate_temperature(kineticEnergy):
    "calculates the temperature from the kinetic energy"
    temperature = (2*kineticEnergy/(dof*MOLAR_GAS_CONSTANT_R)).in_units_of(kelvin)
    return temperature

dof = get_dof(simulation.system)
list_temperature = Quantity([ calculate_temperature(ke) for ke in storage_harmonic_bond['kinetic_energy'] ])

plt.title('Temperature as function of time')
plt.plot(list_temperature /kelvin)
plt.xlabel('Time [100 fs]')
plt.ylabel('Temperature [K]')
```

## Calculate kinetic energy

We can also calculate the kinetic energy of the system instead of extracting it. Therefore, we need the velocities and masses of all atoms.
```
def get_masses(system):
    "get atom masses as list"
    masses = [system.getParticleMass(i)/dalton for i in range(system.getNumParticles())]
    return Quantity(masses, unit=dalton)

def calculate_kinetic_energy(velocities, masses):
    "calculate the kinetic energy"
    ke = Quantity(0.0, unit=kilocalorie_per_mole)
    for i, (vx, vy, vz) in enumerate(velocities):
        v_magnitude = sqrt(vx**2 + vy**2 + vz**2)
        ke += 0.5 * v_magnitude**2 * masses[i]
    return ke

masses = get_masses(simulation.system)
list_kinetic_energy = Quantity([ calculate_kinetic_energy(vel, masses) for vel in storage_harmonic_bond['velocities'] ])
```

Let's compare it with the kinetic energy extracted from the system.

```
plt.title("Kinetic energy as function of time")
plt.plot(list_kinetic_energy /kilocalorie_per_mole)
plt.plot(storage_harmonic_bond['kinetic_energy'] /kilocalorie_per_mole)
plt.xlabel('Time [100 fs]')
plt.ylabel('Kinetic energy [Kcal/mole]')
```

They seem to agree well, but there is a slight offset. Let's look into this and plot the difference between the extracted and the calculated energy.

```
difference_energy = np.array(list_kinetic_energy /kilocalorie_per_mole)
difference_energy-= np.array(storage_harmonic_bond['kinetic_energy'] /kilocalorie_per_mole)

plt.title('difference in kinetic energy\n(calculated vs extracted)')
plt.plot(difference_energy)
plt.xlabel('Time [100 fs]')
plt.ylabel('Kinetic energy [Kcal/mole]')
```

The reason behind this could be that we only have the velocities at the full time step, while OpenMM gets the kinetic energy from the half time step. We can test this by redoing the experiment with a velocity-Verlet integrator instead of a leap-frog one. This means we should now get the full-time-step kinetic energy from OpenMM.

### Leap-frog vs Velocity-Verlet

Let's redo the whole procedure, but this time with a velocity-Verlet integrator.
``` diatom = openmmtools.testsystems.Diatom(K=K, r0=r0, m1=m1, m2=m2) # NOW use a VelocityVerletIntegrator integrator = openmmtools.integrators.VelocityVerletIntegrator(1 * femtosecond) sim_vv = app.Simulation(topology=diatom.topology, system=diatom.system, integrator=integrator) sim_vv.context.setPositions(diatom.positions) sim_vv.context.setVelocitiesToTemperature(300*kelvin) storage_harmonic_bond_vv = { 'positions' : Quantity([], unit=angstrom), 'velocities' : Quantity([], unit=angstrom/femtosecond), 'forces' : Quantity([], unit=kilocalorie/(angstrom*mole)), 'potential_energy' : Quantity([], unit=kilocalorie_per_mole), 'kinetic_energy' : Quantity([], unit=kilocalorie_per_mole), } # define a state state = sim_vv.context.getState(getPositions=True, getForces=True, getVelocities=True, getEnergy=True) # add the first frame storage_harmonic_bond_vv['positions'].append(deepcopy( state.getPositions() ) ) storage_harmonic_bond_vv['velocities'].append(deepcopy( state.getVelocities() ) ) storage_harmonic_bond_vv['forces'].append(deepcopy( state.getForces() )) storage_harmonic_bond_vv['potential_energy'].append(deepcopy( state.getPotentialEnergy() )) storage_harmonic_bond_vv['kinetic_energy'].append(deepcopy( state.getKineticEnergy() )) # repeatedly simulate the system for a short period # extract the properties from the state for i in range(1000): sim_vv.step(100) # define a state state = sim_vv.context.getState(getPositions=True, getForces=True, getVelocities=True, getEnergy=True) # update storage storage_harmonic_bond_vv['positions'].append(deepcopy( state.getPositions() ) ) storage_harmonic_bond_vv['velocities'].append(deepcopy( state.getVelocities() ) ) storage_harmonic_bond_vv['forces'].append(deepcopy( state.getForces() )) storage_harmonic_bond_vv['potential_energy'].append(deepcopy( state.getPotentialEnergy() )) storage_harmonic_bond_vv['kinetic_energy'].append(deepcopy( state.getKineticEnergy() )) ``` Let's calculate the kinetic energy again. 
``` masses = get_masses(sim_vv.system) list_kinetic_energy = Quantity([ calculate_kinetic_energy(vel, masses) for vel in storage_harmonic_bond_vv['velocities'] ]) ``` And compute the difference in kinetic energy between the calculated and the one extracted directly from OpenMM when using the velocity-verlet integrator. ``` difference_energy = np.array(list_kinetic_energy /kilocalorie_per_mole) difference_energy-= np.array(storage_harmonic_bond_vv['kinetic_energy'] /kilocalorie_per_mole) plt.title('difference in kinetic energy\n(calculated vs extracted)') plt.plot(difference_energy) plt.xlabel('Time [100 fs]') plt.ylabel('Kinetic energy [Kcal/mole]') ``` Now the difference is in the range of single precision floating point errors.
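For reference, the kinetic energy computed throughout this notebook is KE = 0.5 · Σᵢ mᵢ|vᵢ|². A plain-float version of `calculate_kinetic_energy`, with the unit bookkeeping stripped out, makes the formula easy to check by hand:

```python
def kinetic_energy(velocities, masses):
    # KE = 0.5 * m * |v|^2, summed over particles (no unit handling)
    return sum(0.5 * m * (vx * vx + vy * vy + vz * vz)
               for (vx, vy, vz), m in zip(velocities, masses))

# two particles: 0.5*2.0*1.0 + 0.5*1.0*4.0 = 3.0
print(kinetic_energy([(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)], [2.0, 1.0]))  # 3.0
```

Any remaining discrepancy against OpenMM's value then comes from unit conversion and integrator timing, not from the formula itself.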
# User analysis Check the experience of the Twitter users involved in a topic, especially around volume peaks. Are the users new to the topic? Or did they participate in this topic earlier? ``` import csv import os import pandas as pd import re from IPython.display import clear_output DATADIR = "/home/erikt/projects/puregome/data/text/" ID = "id_str" KNOWN = "known" NEW = "new" TEXT = "text" USER = "user" def squeal(text=None): clear_output(wait=True) if not text is None: print(text) QUERY = "corona|covid|flattenthecurve|blijfthuis|rivm|mondkapje|huisarts|houvol|zorg" def get_user_history_per_date(file_pattern): file_list = sorted(os.listdir(DATADIR)) user_history = {} user_history_per_date = {} for file_name in file_list: if re.search(file_pattern, file_name): squeal(file_name) date = file_name[0:8] if not date in user_history_per_date: for user in user_history: user_history[user] += 1 user_history_per_date[date] = {} df = pd.read_csv(DATADIR+file_name, compression="gzip", index_col=ID) df = df[df[TEXT].str.contains(QUERY, flags=re.IGNORECASE)] for user in df[USER]: if not user in user_history: user_history[user] = 0 if not user_history[user] in user_history_per_date[date]: user_history_per_date[date][user_history[user]] = 0 user_history_per_date[date][user_history[user]] += 1 return(user_history_per_date) def print_user_history_per_date(user_history_per_date): print(" ......users......") print("date tweets new 1 day older") for date in user_history_per_date: date_total = sum(user_history_per_date[date].values()) print(f"{date} {date_total:6} {round(user_history_per_date[date][0]/date_total,3)*100:5.1f}%", end=" ") print(f"{round(sum([user_history_per_date[date][i] for i in user_history_per_date[date] if i == 1])/date_total,3)*100:5.1f}%", end=" ") print(f"{round(sum([user_history_per_date[date][i] for i in user_history_per_date[date] if i > 1])/date_total,3)*100:5.1f}%") user_history_per_date = get_user_history_per_date("^202003[01]") 
print_user_history_per_date(user_history_per_date)

user_history_per_date = get_user_history_per_date("^(202005[23]|2020060)")
print_user_history_per_date(user_history_per_date)
```

## Analysis

Most of the pandemic tweets were written by users that had contributed to the topic earlier. In the first volume peak (20200312), 19.7% of the tweets were written by users new to the topic, while in the week before about 15% of the tweets were written by new users. In the peak of 20200601, 17.6% of the tweets were written by new users, while in the days leading up to that event this number was also around 15%. Note that the measured percentage of new users also depends on how long before the peak the measuring process was started.

```
# pd.DataFrame(files).to_csv("users-202002-202003.csv", index_label="n")
```
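The per-date shares printed above boil down to a small dictionary computation. A toy version with invented counts (key = number of earlier days the user was active on the topic):

```python
# Hypothetical one-day history with 1000 tweets in total (numbers invented)
day_counts = {0: 197, 1: 103, 2: 400, 7: 300}
total = sum(day_counts.values())
pct_new = 100 * day_counts.get(0, 0) / total                              # brand-new users
pct_one_day = 100 * day_counts.get(1, 0) / total                          # active one day earlier
pct_older = 100 * sum(v for k, v in day_counts.items() if k > 1) / total  # longer history
```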
github_jupyter
```
class Solution:
    def robotSim(self, commands, obstacles) -> int:
        coord = [0, 0]  # the robot's starting position
        # direction encoding: 0 = +y, 1 = +x, 2 = -y, 3 = -x
        direction = 0
        for com in commands:
            if com == -1:  # turn right: +1
                direction = (direction + 1) % 4
            elif com == -2:  # turn left: -1
                direction = (direction - 1) % 4
            else:
                if direction == 1:  # moving in the +x direction
                    key = 1
                    for value in range(1, com + 1):
                        x_temp = coord[0] + value
                        if [x_temp, coord[1]] in obstacles:
                            coord[0] = x_temp - 1  # stop just before the obstacle
                            key = -1
                            break
                    if key == 1:
                        coord[0] += com
                elif direction == 0:  # moving in the +y direction
                    k = 1
                    for value in range(1, com + 1):
                        y_temp = coord[1] + value
                        if [coord[0], y_temp] in obstacles:
                            coord[1] = y_temp - 1
                            k = -1
                            break
                    if k == 1:
                        coord[1] += com
                elif direction == 3:  # moving in the -x direction
                    k = 1
                    for value in range(1, com + 1):
                        x_temp = coord[0] - value
                        if [x_temp, coord[1]] in obstacles:
                            coord[0] = x_temp + 1
                            k = -1
                            break
                    if k == 1:
                        coord[0] -= com
                elif direction == 2:  # moving in the -y direction
                    k = 1
                    for value in range(1, com + 1):
                        y_temp = coord[1] - value
                        if [coord[0], y_temp] in obstacles:
                            coord[1] = y_temp + 1
                            k = -1
                            break
                    if k == 1:
                        coord[1] -= com
        return pow(coord[0], 2) + pow(coord[1], 2)

commands_ = [4, -1, 4, -2, 4]
obstacles_ = [[2, 4]]
solution = Solution()
solution.robotSim(commands_, obstacles_)

-2 % 4
```

```
class Solution:
    def robotSim(self, commands, obstacles) -> int:
        # offsets in clockwise order: +y, +x, -y, -x (so right turn = +1, left turn = -1)
        position_offset = [(0, 1), (1, 0), (0, -1), (-1, 0)]
        obstacles = set(map(tuple, obstacles))  # tuples allow O(1) set lookups
        x, y, direction = 0, 0, 0
        for com in commands:
            if com == -2:    # turn left
                direction = (direction - 1) % 4
            elif com == -1:  # turn right
                direction = (direction + 1) % 4
            else:            # advance one step at a time, stopping in front of an obstacle
                dx, dy = position_offset[direction]
                for _ in range(com):
                    if (x + dx, y + dy) in obstacles:
                        break
                    x, y = x + dx, y + dy
        return pow(x, 2) + pow(y, 2)

commands_ = [4, -1, 4, -2, 4]
obstacles_ = [[2, 4]]
solution = Solution()
solution.robotSim(commands_, obstacles_)
```
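As a quick cross-check, a condensed function version of the offset-table idea (a sketch, not part of the original notebook) gives the expected squared distance on the sample input:

```python
def robot_sim(commands, obstacles):
    # Directions clockwise from +y (north): right turn = +1, left turn = -1 (mod 4).
    offsets = [(0, 1), (1, 0), (0, -1), (-1, 0)]
    blocked = set(map(tuple, obstacles))
    x = y = direction = 0
    for com in commands:
        if com == -1:
            direction = (direction + 1) % 4
        elif com == -2:
            direction = (direction - 1) % 4
        else:
            dx, dy = offsets[direction]
            for _ in range(com):
                # Stop in front of an obstacle instead of stepping onto it
                if (x + dx, y + dy) in blocked:
                    break
                x, y = x + dx, y + dy
    return x * x + y * y

result = robot_sim([4, -1, 4, -2, 4], [[2, 4]])  # ends at (1, 8): 1 + 64 = 65
```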
github_jupyter
# Import dependencies

```
import time
import operator as optr
from collections import Counter

startTime = time.time()  # to calculate total execution time
```

# Read given files to find misspelled words

```
with open('./jang_errors.txt', 'r', encoding='utf8', errors='ignore') as f:
    errorsFile = f.readlines()  # wrongly spelled file
with open('./jang_nonerrors.txt', 'r', encoding='utf8', errors='ignore') as f:
    correctedFile = f.readlines()  # correctly spelled file
with open('./wordlist.txt', 'r', encoding='utf8', errors='ignore') as f:
    wordsFile = f.readlines()  # list of valid urdu words, dictionary

wordsList = list()
for word in wordsFile:
    wordsList.append(word[:-1])

misspelledWords = []
correctSpelledWords = []
misspelledPrevious = dict()

no_of_lines = len(correctedFile)  # or len(errorsFile), as the numbers of lines are equal
#print(no_of_lines)
print('Word Index\tWrong word\tCorrect word\tPrevious word')
misspell_count = 0
for idx in range(no_of_lines):
    errLine = errorsFile[idx].replace('<s>', '').replace('</s>', '')
    correctLine = correctedFile[idx].replace('<s>', '').replace('</s>', '')
    errLineWords = errLine.split(' ')
    correctLineWords = correctLine.split(' ')
    errLineWords.remove('')
    correctLineWords.remove('')
    #print(errLine)
    #print(correctLine)
    no_of_words = -1
    if (len(correctLineWords) == len(errLineWords)):
        no_of_words = len(errLineWords)  # or len(correctLineWords), as the numbers of words are equal
    else:
        print('Error line and correct line have different word counts... Abort!')
        break
    for word_idx in range(no_of_words):
        if errLineWords[word_idx] != correctLineWords[word_idx] and errLineWords[word_idx] not in wordsList and errLineWords[word_idx] != '' and correctLineWords[word_idx] != '':
            misspelledWords.append(errLineWords[word_idx])
            correctSpelledWords.append(correctLineWords[word_idx])
            # guard against wrapping to the last word when the misspelling is the first word of a line
            misspelledPrevious[errLineWords[word_idx]] = errLineWords[word_idx - 1] if word_idx > 0 else ''
            print(misspell_count, '\t\t', errLineWords[word_idx], '\t\t', correctLineWords[word_idx], '\t\t',
misspelledPrevious[errLineWords[word_idx]]) misspell_count+=1 print('\nTotal misspelled word count', misspell_count) ``` # Find all candidate words for all misspelled words ``` # Finding candidates for a word def makeCandidates(word): candidates=list() # urdu alphabet set from wikipedia #ا ب پ ت ٹ ث ج چ ح خ د ڈ ذ ر ڑ ز ژ س ش ص ض ط ظ ع غ ف ق ک گ ل م ن (ں) و ہ (ھ) ء ی ے urdu_charset='ابپتٹثجچحخدڈذرڑزژسشصضطظعغفقکگلمنںوہھءیے' # urdu charset for char in urdu_charset: #insertion candidates for i in range(len(word)+1): candidates.append(word[0:i]+char+word[i:]) #substitution candidates for i in range(len(word)): candidates.append(word[0:i]+char+word[i+1:]) #deletion candidates for i in range(len(word)): candidates.append(word[0:i]+word[i+1:]) #transpose candidates if(len(word)>1): for i in range(len(word)-1): candidates.append(word[0:i]+word[i+1]+word[i]+word[i+2:]) return candidates # All candidates for all error words def getAllCandidates(): candidatesFirstEdit=list() candidatesSecondEdit=list() for errWord in misspelledWords: #print(errWord) candidatesFirstEdit+=makeCandidates(errWord) for firstEditCandidate in candidatesFirstEdit: #print(firstEditCandidate) candidatesSecondEdit+=makeCandidates(firstEditCandidate) candidates=set(candidatesFirstEdit).union(set(candidatesSecondEdit)) # removing duplicates candidates=set(wordsList).intersection(candidates) # removing invalid candidates #print(len(candidatesFirstEdit)) #print(len(candidatesSecondEdit)) #print(len(candidates)) #candidatesFirstEdit: #candidatesSecondEdit: #candidates return candidates candidates=getAllCandidates() ``` # Calculate minimum edit distance of error word from candidate word ``` def calculateMinimumEditDistance(str1, str2): #str1: errWord, str2: candidateWord #insertion, deletion, substitution costs are 1 ic, dc, sc = 1, 1, 1 n, m=len(str1), len(str2) MED_DP=[[0 for x in range(m + 1)] for x in range(n + 1)] # initialize empty(zero) matrix for results of subproblems for i in range(1,n+1): 
        MED_DP[i][0] = MED_DP[i-1][0] + dc
    for i in range(1, m+1):
        MED_DP[0][i] = MED_DP[0][i-1] + ic  # first row corresponds to insertions, not deletions
    for i in range(1, n+1):
        for j in range(1, m+1):
            if (str1[i-1] == str2[j-1]):
                MED_DP[i][j] = min([MED_DP[i-1][j]+dc, MED_DP[i-1][j-1]+0, MED_DP[i][j-1]+ic])
            else:
                MED_DP[i][j] = min([MED_DP[i-1][j]+dc, MED_DP[i-1][j-1]+sc, MED_DP[i][j-1]+ic])
            # Damerau transposition: take the minimum with the transposition cost
            # (overwriting the cell unconditionally here would give wrong distances)
            if (i > 1 and j > 1 and (str1[i-1] == str2[j-2]) and (str1[i-2] == str2[j-1])):
                MED_DP[i][j] = min(MED_DP[i][j], MED_DP[i-2][j-2] + sc)
    #print(MED_DP)
    return MED_DP[n][m]
```

# Get candidate words within a given edit distance

```
def getCandidateWords(err_word, med):
    candidateWords = list()
    for word in candidates:
        MED = calculateMinimumEditDistance(err_word, word)
        if MED < med:
            candidateWords.append(word)
    return candidateWords
```

# Get tokens from training set for model training

```
def getTrainingSetTokens():
    with open('./jang.txt', 'r', encoding='utf8', errors='ignore') as f:
        tokens = []
        for line in f.readlines():  # training set
            tokens += line.split()
    return tokens
```

# Make all combinational phrases of adjacent words up to n length

```
def makeNGram(n, tokens):
    tokenlen = len(tokens)
    nGramList = []
    for idx, token in enumerate(tokens):
        singleNGramList = []
        for i in range(n):
            if (idx + n <= tokenlen):
                singleNGramList.append(tokens[idx + i])
        #print(singleNGramList)
        if (len(singleNGramList) == n):
            nGramList.append(tuple(singleNGramList))
    #print(nGramList)
    return nGramList
```

# Calculate count of unigrams and bigrams

```
def calculateCounts():
    tokens = getTrainingSetTokens()
    unigramList = makeNGram(1, tokens)
    bigramList = makeNGram(2, tokens)
    #print(unigramList)
    #print(bigramList)
    unigramCount = Counter(unigramList)  # aggregate all unigrams
    bigramCount = Counter(bigramList)  # aggregate all bigrams
    #print(unigramCount)
    #print(bigramCount)
    return unigramCount, bigramCount

unigramCount, bigramCount = calculateCounts()
```

## Calculate probabilities of all candidates within a given edit distance for an error word

```
def calculateBigramProbability(err_word, med):
    uniLambda = 0.4
    biLambda = 0.6
    candidateWordProbabilityDict = dict()
    candidateWords = getCandidateWords(err_word, med)
    prev_word = misspelledPrevious[err_word]
    for word in candidateWords:
        unigramProbability = unigramCount[(word,)] / len(unigramCount)
        if bigramCount[(prev_word, word)] != 0:
            bigramProbability = bigramCount[(prev_word, word)] / unigramCount[(prev_word,)]
        else:
            bigramProbability = 0
        candidateWordProbability = unigramProbability * uniLambda + bigramProbability * biLambda
        if candidateWordProbability != 0:
            candidateWordProbabilityDict[word] = candidateWordProbability
    return candidateWordProbabilityDict
```

### Find probabilities within a given edit distance and the rank of the correct word among possible candidates

```
def getWordProbabilities(err_word, correct_word, med, top=10):
    predictedWordsProbabilityDict = calculateBigramProbability(err_word, med)
    data = sorted(predictedWordsProbabilityDict.items(), key=optr.itemgetter(1), reverse=True)[0:top]
    foundIndex = -1
    for idx, candidate_word in enumerate(data):
        if correct_word in candidate_word[0]:
            foundIndex = idx
            break
    return data, foundIndex
```

#### Testing cases

```
# For testing purposes
err_word = 'اادہ'
correct_word = 'اعادہ'
med = 3
test = getCandidateWords(err_word, med)
for idx, word in enumerate(test):
    if word == correct_word:
        print(idx, 'yes, in candidates')

data, fi = getWordProbabilities(err_word, correct_word, med, top=20)
if fi == -1:
    print('not found in probability dictionary, unigram, bigram probabilities 0')
else:
    for idx, word in enumerate(data):
        if word[0] == correct_word:
            print(idx, 'yes, in dictionary')
```

# Final Report

```
%%time
found_idx = list()
for idx, word in enumerate(misspelledWords):
    print('Word index:', idx, '\t\t', 'False word:', word, '\t', 'True word:', correctSpelledWords[idx], '\t',
          'Previous word:', misspelledPrevious[word], '\n')
    med = 2
    predictedWordProbabilityDict, foundIndex = getWordProbabilities(word, correctSpelledWords[idx], med)
    if foundIndex == -1:
        med += 1
        predictedWordProbabilityDict, foundIndex = getWordProbabilities(word, correctSpelledWords[idx], med)
    if foundIndex == -1:
        print('True word not found in top 10', med, 'edit distance candidate words\n')
    else:
        found_idx.append(idx)
        print('True word found in top 10 predicted', med, 'edit distance candidates with bigram probability:',
              predictedWordProbabilityDict[foundIndex][1], '\n')
        print('Candidate Index\t\tCandidate word\t\tProbability')
        for idx1, candidate_word in enumerate(predictedWordProbabilityDict):
            print_str = str(idx1) + '\t\t\t' + candidate_word[0] + '\t\t\t' + str(candidate_word[1])
            if foundIndex == idx1:
                print_str = '\x1b[0;30;43m' + '\33[1m' + print_str + ' ----> TRUE WORD' + '\x1b[0m'
            print(print_str)
    print('----------------------------------------------------------------------------------------------------------------\n')

print('Following are the word indices of words whose correct word was found in top 10 candidates:\n', found_idx, '\n')
print('Total misspelled words whose correct word was found:', len(found_idx), '\n')

elapsedTime = time.time() - startTime
print('Total execution time:', elapsedTime)
```
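As a language-independent sanity check, a condensed, self-contained version of the unit-cost edit distance with the transposition rule used above can be tried on classic English examples (a verification sketch only, separate from the notebook's implementation):

```python
def damerau_levenshtein(a, b):
    """Unit-cost edit distance with adjacent-transposition (Damerau) moves."""
    n, m = len(a), len(b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i  # deletions down the first column
    for j in range(m + 1):
        d[0][j] = j  # insertions along the first row
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[n][m]

checks = (damerau_levenshtein("kitten", "sitting"),  # 3 edits
          damerau_levenshtein("ab", "ba"),           # 1 transposition
          damerau_levenshtein("abc", "abc"))         # identical
```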
github_jupyter
# Getting to know Julia (Originally from https://juliabox.com under tutorials/intro-to-julia/short-version/01.Getting_to_know_Julia.ipynb) This notebook is meant to offer a crash course in Julia syntax to show you that Julia is lightweight and easy to use -- like your favorite high-level language! We'll talk about - Strings - Data structures - Loops - Conditionals - Functions ## Strings ``` string1 = "How many cats " string2 = "is too many cats?" string(string1, string2) 😺 = 10 println("I don't know but $😺 are too few!") ``` Note: Julia allows us to write super generic code, and 😺 is an example of this. This allows us to write code like ``` 😺 = 1 😀 = 0 😞 = -1 😺 + 😞 == 😀 ``` ## Data structures ### Tuples We can create a tuple by enclosing an ordered collection of elements in `( )`. Syntax: <br> ```julia (item1, item2, ...)``` ``` myfavoriteanimals = ("penguins", "cats", "sugargliders") myfavoriteanimals[1] ``` ### Dictionaries If we have sets of data related to one another, we may choose to store that data in a dictionary. To do this, we use the `Dict()` function. Syntax: ```julia Dict(key1 => value1, key2 => value2, ...)``` A good example of a dictionary is a contacts list, where we associate names with phone numbers. ``` myphonebook = Dict("Jenny" => "867-5309", "Ghostbusters" => "555-2368") myphonebook["Jenny"] ``` ### Arrays Unlike tuples, arrays are mutable. Unlike dictionaries, arrays contain ordered sequences of elements. <br> We can create an array by enclosing this sequence of elements in `[ ]`. Syntax: <br> ```julia [item1, item2, ...]``` For example, we might create an array to keep track of my friends ``` myfriends = ["Ted", "Robyn", "Barney", "Lily", "Marshall"] fibonacci = [1, 1, 2, 3, 5, 8, 13] mixture = [1, 1, 2, 3, "Ted", "Robyn"] ``` We can also create arrays of other data structures, or multi-dimensional arrays. 
```
numbers = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
rand(4, 3)
```

## Loops

### `for` loops

The syntax for a `for` loop is

```julia
for *var* in *loop iterable*
    *loop body*
end
```

```
for n in 1:10
    println(n)
end
```

### `while` loops

The syntax for a `while` loop is

```julia
while *condition*
    *loop body*
end
```

```
n = 0
while n < 10
    global n += 1
    println(n)
end
```

## Conditionals

#### with `if`

In Julia, the syntax

```julia
if *condition 1*
    *option 1*
elseif *condition 2*
    *option 2*
else
    *option 3*
end
```

allows us to conditionally evaluate one of our options.

```
x, y = 1, 2
if x > y
    x
else
    y
end
```

#### with ternary operators

For this last block, we could instead use the ternary operator with the syntax

```julia
a ? b : c
```

which equates to

```julia
if a
    b
else
    c
end
```

```
(x > y) ? x : y
```

## Functions

Topics:
1. How to declare a function
2. Duck-typing in Julia
3. Mutating vs. non-mutating functions
4. Some higher order functions

### How to declare a function

#### First way: with `function` and `end` keywords

```
function f(x)
    x^2
end
```

#### Second way: with `=`

```
f2(x) = x^2
```

#### Third way: as an anonymous function

```
f3 = x -> x^2
```

#### Calling these functions

```
f(42)
f2(42)
f3(42)
```

### Duck-typing in Julia

*"If it quacks like a duck, it's a duck."* <br><br> Julia functions will just work on whatever inputs make sense. <br><br> For example, `f` will work on a matrix.

```
A = rand(3, 3)
A
f(A)
```

On the other hand, `f` will not work on a vector. Unlike `A^2`, which is well-defined, the meaning of `v^2` for a vector, `v`, is ambiguous.

```
v = rand(3)
f(v)
```

### Mutating vs. non-mutating functions

By convention, functions whose names end in `!` alter their arguments in place, and functions lacking `!` do not. For example, let's look at the difference between `sort` and `sort!`.

```
v = [3, 5, 2]
sort(v)
v
```

`sort(v)` returns a sorted array that contains the same elements as `v`, but `v` is left unchanged.
<br><br> On the other hand, when we run `sort!(v)`, the contents of v are sorted within the array `v`. ``` sort!(v) v ``` ### Some higher order functions #### map `map` is a "higher-order" function in Julia that *takes a function* as one of its input arguments. `map` then applies that function to every element of the data structure you pass it. For example, executing ```julia map(f, [1, 2, 3]) ``` will give you an output array where the function `f` has been applied to all elements of `[1, 2, 3]` ```julia [f(1), f(2), f(3)] ``` ``` map(f, [1, 2, 3]) ``` Here we've squared all the elements of the vector `[1, 2, 3]`, rather than squaring the vector `[1, 2, 3]`. To do this, we could have passed to `map` an anonymous function rather than a named function, such as ``` x -> x^3 ``` via ``` map(x -> x^3, [1, 2, 3]) ``` and now we've cubed all the elements of `[1, 2, 3]`! ### broadcast `broadcast` is another higher-order function like `map`. `broadcast` is a generalization of `map`, so it can do every thing `map` can do and more. The syntax for calling `broadcast` is the same as for calling `map` ``` broadcast(f, [1, 2, 3]) ``` and again, we've applied `f` (squared) to all the elements of `[1, 2, 3]` - this time by "broadcasting" `f`! Some syntactic sugar for calling `broadcast` is to place a `.` between the name of the function you want to `broadcast` and its input arguments. For example, ```julia broadcast(f, [1, 2, 3]) ``` is the same as ```julia f.([1, 2, 3]) ``` ``` f.([1, 2, 3]) ``` Notice again how different this is from calling ```julia f([1, 2, 3]) ``` We can square every element of a vector, but we can't square a vector! To drive home the point, let's look at the difference between ```julia f(A) ``` and ```julia f.(A) ``` for a matrix `A`: ``` A = [i + 3*j for j in 0:2, i in 1:3] f(A) ``` As before we see that for a matrix, `A`, ``` f(A) = A^2 = A * A ``` On the other hand, ``` B = f.(A) ``` contains the squares of all the entries of `A`. 
This dot syntax for broadcasting allows us to write relatively complex compound elementwise expressions in a way that looks natural/closer to mathematical notation. For example, we can write ``` A .+ 2 .* f.(A) ./ A ``` instead of ``` broadcast(x -> x + 2 * f(x) / x, A) ``` and this will still compile down to code that runs as efficiently as `C`!
github_jupyter
# JAK2 Min Analysis

```
%env CUDA_VISIBLE_DEVICES=2
```

## Imports

```
import os
import json
import torch
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from irelease.utils import generate_smiles, canonical_smiles
import nb_utils as nbu
from math import ceil
from sklearn.manifold import TSNE
import rdkit.Chem as Chem
from rdkit.Chem.Draw import DrawingOptions
from rdkit.Chem import Draw

DrawingOptions.atomLabelFontSize = 50
DrawingOptions.dotsPerAngstrom = 100
DrawingOptions.bondLineWidth = 3
```

## Load SMILES files

```
ppo_grl_train = 'jak2_min/JAK2_min_IReLeaSE-PPO_with_irl.json'
ppo_grl_eval = 'jak2_min/JAK2_min_smiles_biased_ppo_grl_eval.json'
ppo_baseline_reward_train = 'jak2_min/JAK2_min_IReLeaSE-PPO__baseline_reward.json'
ppo_baseline_reward_eval = 'jak2_min/JAK2_min_smiles_biased_ppo_baseline_reward_eval.json'
reinforce_train = 'jak2_min/JAK2_min_IReLeaSE-REINFORCE_no_irl.json'
reinforce_eval = 'jak2_min/JAK2_min_smiles_biased_reinforce_eval.json'
reinforce_grl_train = 'jak2_min/JAK2_min_IReLeaSE-REINFORCE_with_irl.json'
reinforce_grl_eval = 'jak2_min/JAK2_min_smiles_biased_reinforce_grl_eval.json'
stack_rnn_tl_train = 'jak2_min/JAK2_min_Stack_RNN_XEnt_Generator_Baseline.json'
stack_rnn_tl_eval = 'jak2_min/JAK2_min_Stack_RNN_XEnt_Generator_Baseline_eval.json'

ppo_grl_smiles_valid, ppo_grl_smiles_invalid = nbu.smiles_from_json_data(ppo_grl_eval)
ppo_grl_conv = nbu.get_convergence_data(ppo_grl_train)

ppo_baseline_reward_smiles_valid, ppo_baseline_reward_smiles_invalid = nbu.smiles_from_json_data(ppo_baseline_reward_eval)
ppo_baseline_reward_conv = nbu.get_convergence_data(ppo_baseline_reward_train)

reinforce_smiles_valid, reinforce_smiles_invalid = nbu.smiles_from_json_data(reinforce_eval)
reinforce_conv = nbu.get_convergence_data(reinforce_train)

reinforce_grl_smiles_valid, reinforce_grl_smiles_invalid = nbu.smiles_from_json_data(reinforce_grl_eval)
reinforce_grl_conv = nbu.get_convergence_data(reinforce_grl_train)

stack_rnn_tl_smiles_valid, stack_rnn_tl_smiles_invalid = nbu.smiles_from_json_data(stack_rnn_tl_eval)
stack_rnn_tl_conv = nbu.get_convergence_data(stack_rnn_tl_train)

len(ppo_grl_smiles_valid), len(ppo_baseline_reward_smiles_valid), len(reinforce_smiles_valid), len(reinforce_grl_smiles_valid), len(stack_rnn_tl_smiles_valid)

# CSV files containing predictions/evaluations
preds_ppo_grl_eval = pd.read_csv('jak2_min/JAK2_min_smiles_biased_ppo_grl_eval.csv')
preds_ppo_baseline_reward_eval = pd.read_csv('jak2_min/JAK2_min_smiles_biased_ppo_baseline_reward_eval.csv')
preds_reinforce_eval = pd.read_csv('jak2_min/JAK2_min_smiles_biased_reinforce_eval.csv')
preds_reinforce_grl_eval = pd.read_csv('jak2_min/JAK2_min_smiles_biased_reinforce_grl_eval.csv')
preds_stack_rnn_tl_eval = pd.read_csv('jak2_min/JAK2_min_Stack_RNN_XEnt_Generator_Baseline_eval.csv')
preds_demo = pd.read_csv('jak2_min/jak2_min_biased.csv')
preds_unbiased = pd.read_csv('jak2_min/jak2_unbiased.csv')

preds_ppo_grl_eval.shape, preds_ppo_baseline_reward_eval.shape, preds_reinforce_eval.shape, preds_reinforce_grl_eval.shape, preds_stack_rnn_tl_eval.shape, preds_demo.shape, preds_unbiased.shape

preds_ppo_grl_eval.shape[0] / 10000. * 100., preds_ppo_baseline_reward_eval.shape[0] / 10000. * 100., preds_reinforce_eval.shape[0] / 10000. * 100., preds_reinforce_grl_eval.shape[0] / 10000. * 100., preds_demo.shape[0] / 10000. * 100., preds_unbiased.shape[0] / 10000. * 100.
``` ## Evaluate SMILES ``` generators = nbu.data_provider('../../data/jak2_min_smiles_biased.smi', '../../data/unbiased_smiles.smi') demo_smiles = generators['demo_data'].random_training_set_smiles(10000) demo_smiles = list(set(demo_smiles)) unbiased_smiles = generators['unbiased_data'].random_training_set_smiles(10000) unbiased_smiles = list(set(unbiased_smiles)) demo_smiles[0], unbiased_smiles[0], len(demo_smiles) preds_ppo_grl_smiles = preds_ppo_grl_eval['prediction'].tolist() preds_ppo_baseline_reward_smiles = preds_ppo_baseline_reward_eval['prediction'].tolist() preds_reinforce_smiles = preds_reinforce_eval['prediction'].tolist() preds_reinforce_grl_smiles = preds_reinforce_grl_eval['prediction'].tolist() preds_stack_rnn_tl_smiles = preds_stack_rnn_tl_eval['prediction'].tolist() preds_demo_smiles = preds_demo['prediction'].tolist() preds_unbiased_smiles = preds_unbiased['prediction'].tolist() ``` ## KDE plots ``` sns.kdeplot(preds_ppo_grl_smiles, label='PPO-GRL', shade=False, color='blue') sns.kdeplot(preds_ppo_baseline_reward_smiles, label='PPO-Eval', shade=False, color='purple') sns.kdeplot(preds_reinforce_smiles, label='REINFORCE', shade=False, color='red') ax = sns.kdeplot(preds_reinforce_grl_smiles, label='REINFORCE-GRL', shade=False, color='orange') sns.kdeplot(preds_stack_rnn_tl_smiles, label='Stack-RNN-TL', shade=False, color='cyan') sns.kdeplot(preds_demo_smiles, label='Demo SMILES', shade=False, color='green') ax = sns.kdeplot(preds_unbiased_smiles, label='Unbiased SMILES', shade=False, color='gray') plt.xlabel('$pIC_{50}$') # ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # ax.set_xticks([]) sns.despine(offset=5, left=True, bottom=False) plt.savefig('jak2_min/jak2_min_kde_plots.eps') plt.show() ``` ## Convergence plot ``` ppo_grl_biased = ppo_grl_conv['biased'] ppo_baseline_reward_biased = ppo_baseline_reward_conv['biased'] reinforce_biased = reinforce_conv['biased'] reinforce_grl_biased = reinforce_grl_conv['biased'] 
stack_rnn_tl_biased = stack_rnn_tl_conv['biased'] demo_vals = reinforce_grl_conv['demo'] unbiased_vals = reinforce_grl_conv['baseline'] offset = 10 plt.plot(nbu.smoothing_values(ppo_grl_biased, 0.6)[offset:150]) plt.plot(nbu.smoothing_values(ppo_baseline_reward_biased, 0.6)[offset:]) plt.plot(nbu.smoothing_values(reinforce_biased, 0.6)[offset:]) plt.plot(nbu.smoothing_values(reinforce_grl_biased, 0.6)[offset:]) plt.plot(demo_vals) plt.plot(unbiased_vals) plt.xlabel('episode') plt.ylabel('pIC$_{50}$') plt.legend(['PPO-GRL','PPO-Eval','REINFORCE','REINFORCE-GRL', 'Demo SMILES', 'Unbiased SMILES'], loc='lower right') plt.savefig('jak2_min/jak2_min_irl_vs_rl_convergence.eps') sns.despine() offset = 10 plt.plot(nbu.smoothing_values(stack_rnn_tl_biased, 0.9)[offset:]) # plt.plot(demo_vals) # plt.plot(unbiased_vals) plt.xlabel('batch') plt.ylabel('pIC$_{50}$') plt.legend(['Stack-RNN-TL', 'Demo SMILES', 'Unbiased SMILES'], loc='lower right') plt.savefig('jak2_min/jak2_min_stack_rnn_tl_convergence.eps') sns.despine() ``` ## t-SNE plot ``` hparams = {'d_model': 1500, 'dropout': 0.0, 'monte_carlo_N': 5, 'use_monte_carlo_sim': True, 'no_mc_fill_val': 0.0, 'gamma': 0.97, 'episodes_to_train': 10, 'gae_lambda': 0.95, 'ppo_eps': 0.2, 'ppo_batch': 1, 'ppo_epochs': 5, 'entropy_beta': 0.01, 'bias_mode': 'max', 'use_true_reward': False, 'baseline_reward': False, 'reward_params': {'num_layers': 2, 'd_model': 512, 'unit_type': 'gru', 'demo_batch_size': 32, 'irl_alg_num_iter': 5, 'dropout': 0.2, 'use_attention': False, 'use_validity_flag': True, 'bidirectional': True, 'optimizer': 'adadelta', 'optimizer__global__weight_decay': 0.0005, 'optimizer__global__lr': 0.001, }, 'agent_params': {'unit_type': 'gru', 'num_layers': 2, 'stack_width': 1500, 'stack_depth': 200, 'optimizer': 'adadelta', 'optimizer__global__weight_decay': 0.005, 'optimizer__global__lr': 0.001}, 'critic_params': {'num_layers': 2, 'd_model': 256, 'dropout': 0.2, 'unit_type': 'gru', 'optimizer': 'adam', 
'optimizer__global__weight_decay': 0.005, 'optimizer__global__lr': 0.001}, 'expert_model_dir': './model_dir/expert_xgb_reg' } init_dict = nbu.initialize(hparams, generators['demo_data'], generators['unbiased_data'], True) # encoder = init_dict['encoder'] # ppo_reward_net_rnn = init_dict['reward_net_rnn'] # ppo_reward_net = init_dict['reward_net'] # ppo_reward_net.load_state_dict(nbu.load_model_weights('../model_dir/JAK2_max_irelease_stack-rnn_gru_ppo_reward_net_2020_07_12__20_54_23_1.003_66.mod')) # with torch.set_grad_enabled(False): # reward_lst, logits_lst = [], [] # tsne_smiles = ppo_grl_smiles_valid + ppo_grl_smiles_invalid # for i in range(0, len(tsne_smiles), 500): # inp, valid_vec = nbu.smiles_to_tensor(['<'+s+'>' for s in tsne_smiles[i:i+500]]) # enc_out = encoder([inp, valid_vec]) # reward, logits = ppo_reward_net_rnn(enc_out, return_logits=True) # reward_lst.append(reward) # logits_lst.append(logits) # reward = -torch.cat(reward_lst) # logits = torch.cat(logits_lst) # logits.shape, reward.shape # tsne_rep = TSNE(n_components=2).fit_transform(logits.detach().cpu().numpy()) # tsne_data = pd.DataFrame({'x':tsne_rep[:,0], 'y':tsne_rep[:,1]}) # tsne_rep.shape # plt.figure(figsize=(10,10)) # points = plt.scatter(tsne_data['x'], tsne_data['y'], c=reward.detach().cpu().numpy().reshape(-1,), s=50, cmap="Spectral") # cb = plt.colorbar(points, ticks=None) # cb.outline.set_visible(False) # ax = sns.scatterplot(x="x", y="y", hue=reward.detach().cpu().numpy().reshape(-1,), data=tsne_data, # legend=False, palette='Spectral', edgecolor='black', linewidth=.01) # v = [] # # valid SMILES # while True: # va_idx = np.random.randint(len(ppo_grl_smiles_valid)) # comp = ppo_grl_smiles_valid[va_idx] # i = len(v)+1 # if len(comp) <= 50: # v.append(comp) # ax.annotate('val-'+str(i), xy=(tsne_data['x'][va_idx], tsne_data['y'][va_idx]), # xytext=(-20+(i*10),50), # arrowprops=dict(facecolor='black', arrowstyle='-'), # horizontalalignment='right', verticalalignment='top') # if len(v) 
#             == 3:
#                 break

# inv = []
# # invalid SMILES
# for i in range(3):
#     inv_idx = np.random.randint(len(ppo_grl_smiles_invalid))
#     inv.append(ppo_grl_smiles_invalid[inv_idx])
#     ax.annotate('inv-'+str(i+1), xy=(tsne_data['x'][len(ppo_grl_smiles_valid) + inv_idx],
#                                      tsne_data['y'][len(ppo_grl_smiles_valid) + inv_idx]),
#                 xytext=(-50+(i*10),40), arrowprops=dict(facecolor='black', arrowstyle='-'),
#                 horizontalalignment='right', verticalalignment='top')

# print('selected valid:\n', v)
# print('selected invalid:\n', inv)

# plt.axis('off')
# plt.savefig('jak2_max/jak2_max_ppo_grl_tsne.pdf')
# plt.show()
```

## Draw random SMILES

```
vis_mols = [Chem.MolFromSmiles(sm, sanitize=True) for sm in set(ppo_grl_smiles_valid) if len(sm)]# <= 50]
sanitized_gen_mols = [vis_mols[i] for i in np.where(np.array(vis_mols) != None)[0]]
len(sanitized_gen_mols)

n_to_draw = 20
ind = np.random.randint(0, len(sanitized_gen_mols), n_to_draw)
mols_to_draw = [sanitized_gen_mols[i] for i in ind]
legends = ['p = ' + str(round(float(preds_ppo_grl_smiles[i]), 3)) for i in ind]
Draw.MolsToGridImage(mols_to_draw, molsPerRow=5, subImgSize=(300,300), legends=legends)

for i, mol in enumerate(mols_to_draw):
    print(f'{Chem.MolToSmiles(mol)}\t\t{legends[i]}')

# Save selected compounds to file
# os.makedirs('./drd2_samples', exist_ok=True)
# for i, mol in enumerate(mols_to_draw):
#     Draw.MolToImageFile(mol, f'./drd2/sample_compound_{i+1}.png')
```

## Molecule metrics

```
def mol_metrics(df):
    results = {}
    num_can = 0
    for idx, df_smiles in enumerate([df, df[df['prediction'] <= np.mean(demo_vals)]]):
        new_smiles, valid_vec = canonical_smiles(df_smiles['SMILES'].tolist())
        smiles = []
        for i, s in enumerate(new_smiles):
            if valid_vec[i] == 1:
                smiles.append(s)
        eval_dict = nbu.evaluate(smiles, demo_smiles)
        eval_dict['Num of unique canonical SMILES'] = len(set(smiles))
        if idx==0:
            num_can = len(smiles)
            eval_dict['percentage of valid'] = df_smiles.shape[0] / 10000. * 100.
        if idx==1:
            eval_dict['percentage in threshold (canonical)'] = len(smiles) / num_can * 100.
        results['no_threshold' if idx==0 else 'with_threshold'] = eval_dict
    return results

mol_metrics(preds_ppo_grl_eval)
mol_metrics(preds_ppo_baseline_reward_eval)
mol_metrics(preds_reinforce_eval)
mol_metrics(preds_reinforce_grl_eval)
mol_metrics(preds_stack_rnn_tl_eval)
mol_metrics(preds_demo)
mol_metrics(preds_unbiased)
```

## RNN hidden neurons examination

```
# def plot_heatmap(smiles_list, logits, neuron=None, save=False):
#     if neuron is None:
#         neuron = np.random.randint(logits.shape[-1])
#     print(f'Plotting for neuron {neuron}')
#     for i, smiles in enumerate(smiles_list):
#         arr = logits[:len(smiles), i, neuron].reshape(1, len(smiles))
#         chars = np.array([c for c in smiles]).reshape(1,-1)
#         fig = plt.figure(figsize=(200,4))
#         sns.heatmap(arr, annot=chars, fmt='', cbar=False,
#                     cmap=sns.color_palette("bwr", 10), annot_kws={'size':100, 'fontweight':'normal'},
#                     xticklabels=False, yticklabels=False, square=True)
#         if save:
#             os.makedirs(f'drd2/neuron_{neuron}', exist_ok=True)
#             plt.savefig(f'drd2/neuron_{neuron}/{i}.png')
#         plt.show()

# plot_heatmap(ppo_samples[:10], ppo_samples_neurons[:,:10,:])
```
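`nbu.smoothing_values`, used in the convergence plots above, is not shown in this notebook. A plausible implementation (an assumption, provided only so the smoothing step is reproducible in spirit) is a simple exponential moving average:

```python
def smoothing_values(values, beta):
    """Exponentially smooth a series: s_t = beta * s_{t-1} + (1 - beta) * x_t.
    Assumed behavior of nbu.smoothing_values; the real helper may differ."""
    smoothed, last = [], values[0]
    for x in values:
        last = beta * last + (1 - beta) * x
        smoothed.append(last)
    return smoothed

sm = smoothing_values([0.0, 1.0, 1.0], 0.5)
```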
github_jupyter
# Analysing Public Member Info on Meetup.com In this notebook we do some simple analysis of information about members registered on [meetup.com](https://meetup.com). We extract the info using the official [meetup API](https://www.meetup.com/meetup_api/) where you can also get [your API key](https://secure.meetup.com/meetup_api/key/) as a registered member. **N.B.** This is work in progress. ## Getting Started ``` %matplotlib inline import re import os import json import requests import pandas as pd server = 'https://api.meetup.com' group_urlname = 'Python-Users-Berlin-PUB' from meetup_api_key import key ``` Get information about a group on Meetup.com: ``` requests.get("https://api.meetup.com/%s?key=%s" % (group_urlname, key)).json() ``` Get information about two members of that group: ``` url = server + "/2/members?offset=1&page=2&order=name&group_urlname=%s&key=%s" % (group_urlname, key) info = requests.get(url).json() # hide key so it doesn't show up in some repository: for f in ('next', 'url'): info['meta'][f] = re.sub('key=\w+', 'key=******', info['meta'][f]) info def get_all_members(group_urlname, verbose=False): "Read members info from a sequence of pages." 
    total = []
    offset = 1
    page = 200
    url = "{server}/2/members?offset={offset}&format=json&group_urlname={group_urlname}&page={page}&key={key}&order=name"
    url = url.format(server=server, offset=offset, page=page,
                     group_urlname=group_urlname, key=key)
    info = requests.get(url).json()
    total += info['results']
    if verbose:
        print(url)
        print(len(total), info['meta']['count'])
    while True:
        next_url = info['meta']['next']
        if not next_url:
            break
        # fetch the next page and append *its* results
        # (re-appending info['results'] without updating info would loop on the first page forever)
        info = requests.get(next_url).json()
        total += info['results']
        if verbose:
            print(next_url)
            print(len(total), info['meta']['count'])
    if verbose:
        print('found %d members' % len(total))
    return total

path = 'pub-members.json'
if os.path.exists(path):
    members = json.load(open(path))
else:
    members = get_all_members('Python-Users-Berlin-PUB')
    json.dump(members, open(path, 'w'))

members[0]
```

## PUB Members' Interests

```
members[0]['topics']
pd.DataFrame(members[0]['topics'])
```

Now build a dataframe with this information for all members:

```
df = pd.concat([pd.DataFrame(m['topics']) for m in members])
len(df)
s = df.groupby('name').size().sort_values(ascending=True)[-20:]
s.plot.barh(title='Most cited topics people are interested in', figsize=(10, 5))
```

## PyData Members' Interests

```
path = 'pydata-members.json'
if os.path.exists(path):
    members = json.load(open(path))
else:
    members = get_all_members('PyData-Berlin')
    json.dump(members, open(path, 'w'))

df = pd.concat([pd.DataFrame(m['topics']) for m in members])
s = df.groupby('name').size().sort_values(ascending=True)[-20:]
s.plot.barh(title='Most cited topics people are interested in', figsize=(10, 5))
```

## Members' Groups?

Information about the groups a member has joined seems to be harder to find... (???)
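Conceptually, the `groupby('name').size()` aggregation above is just a tally of topic names across members. The same count with the standard library, on invented data shaped like the API's `topics` field:

```python
from collections import Counter

# Invented per-member payloads mimicking the 'topics' field of the Meetup API response
members = [{"topics": [{"name": "Python"}, {"name": "Data Science"}]},
           {"topics": [{"name": "Python"}]}]

# Tally how often each topic name occurs across all members
counts = Counter(t["name"] for m in members for t in m["topics"])
```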
##### Copyright 2018 The TensorFlow Authors.

```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```

# Eager execution basics

<table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/tr/r1/tutorials/eager/eager_basics.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/tr/r1/tutorials/eager/eager_basics.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table>

This notebook is an introductory TensorFlow tutorial. It covers the following topics:

* Importing the required packages
* Creating and using tensors
* Using GPU acceleration
* Datasets

## Import TensorFlow

Import the `tensorflow` module and enable eager execution. Eager execution provides an interactive frontend to TensorFlow, the details of which we will discuss later.

```
import tensorflow.compat.v1 as tf

# The rest of this notebook assumes eager execution; enable it right after the import.
tf.enable_eager_execution()
```

## Tensors

A tensor is, in short, a multidimensional array. Like NumPy `ndarray` objects, a `Tensor` object has a data type and a shape. In addition, tensors can live in accelerator memory such as a GPU.
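For readers coming from NumPy, the `ndarray` analogy can be made concrete. A minimal sketch in pure NumPy (used here so it runs even without a TensorFlow install):

```python
import numpy as np

# An ndarray, like a Tensor, carries both a shape and a dtype.
a = np.ones((2, 3), dtype=np.float32)
print(a.shape)  # (2, 3)
print(a.dtype)  # float32
```

The key difference, as described below, is that tensors are immutable and may live in accelerator memory.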
TensorFlow offers a rich library of operations for creating and consuming tensors ([tf.add](https://www.tensorflow.org/api_docs/python/tf/add), [tf.matmul](https://www.tensorflow.org/api_docs/python/tf/matmul), [tf.linalg.inv](https://www.tensorflow.org/api_docs/python/tf/linalg/inv), etc.). These operations automatically convert native Python types. For example:

```
print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([1, 2, 3]))
print(tf.encode_base64("hello world"))

# Operator overloading is also supported
print(tf.square(2) + tf.square(3))
```

Each tensor has a shape and a data type:

```
x = tf.matmul([[1]], [[2, 3]])
print(x.shape)
print(x.dtype)
```

The most obvious differences between NumPy arrays and TensorFlow tensors are:

1. Tensors can be backed by accelerator memory (such as a GPU or TPU).
2. Tensors are immutable.

### NumPy compatibility

Converting between TensorFlow tensors and NumPy `ndarray`s is straightforward:

* TensorFlow operations automatically convert NumPy ndarrays to tensors.
* NumPy operations automatically convert tensors to NumPy ndarrays.

Tensors can be explicitly converted to NumPy ndarrays with the `.numpy()` method. These conversions are typically cheap, since tensors and `ndarray`s share the underlying memory representation whenever possible. However, sharing is not always possible, because tensors may be hosted in GPU memory while NumPy arrays always live in host memory, so the conversion may involve a copy from GPU to host memory.
```
import numpy as np

ndarray = np.ones([3, 3])

print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)

print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))

print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
```

## GPU acceleration

Many TensorFlow operations can be accelerated by using a GPU for the computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or the CPU for an operation (copying tensors between GPU and CPU memory as needed). Tensors produced by an operation are placed in the memory of the device on which the operation executed. For example:

```
x = tf.random_uniform([3, 3])

print("Is there a GPU available: "),
print(tf.test.is_gpu_available())

print("Is the Tensor on GPU #0: "),
print(x.device.endswith('GPU:0'))
```

### Device names

The `Tensor.device` property provides the fully qualified string name of the device hosting the tensor. This string encodes many details, such as an identifier of the network address of the host on which the program is running and the device within that host. This information is required for distributed execution of TensorFlow programs. The string ends with `GPU:<N>` if the tensor is placed on the `N`th GPU of the host.

### Explicit device placement

In TensorFlow, the term "placement" refers to how individual operations are assigned (placed on) devices for execution. As mentioned above, when no explicit guidance is given, TensorFlow automatically decides which device to run an operation on and copies tensors to that device if needed. However, TensorFlow operations can be explicitly placed on specific devices using the `tf.device` context manager.
For example:

```
import time

def time_matmul(x):
    start = time.time()
    for loop in range(10):
        tf.matmul(x, x)

    result = time.time() - start
    print("10 loops: {:0.2f}ms".format(1000 * result))

# Force execution on the CPU
print("On CPU:")
with tf.device("CPU:0"):
    x = tf.random_uniform([1000, 1000])
    assert x.device.endswith("CPU:0")
    time_matmul(x)

# Force execution on GPU #0 if one is available
if tf.test.is_gpu_available():
    with tf.device("GPU:0"):  # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
        x = tf.random_uniform([1000, 1000])
        assert x.device.endswith("GPU:0")
        time_matmul(x)
```

## Datasets

This section demonstrates the use of the [`tf.data.Dataset` API](https://www.tensorflow.org/r1/guide/datasets) to feed data to your model. It covers:

* Creating a `Dataset`.
* Iterating over a `Dataset` with eager execution enabled.

We recommend using the `Dataset` API to build performant, complex input pipelines from simple, reusable pieces that feed your model's training and evaluation loops.

With eager execution enabled, the API for constructing `Dataset` objects is the same as in TensorFlow graph mode, but iterating over the elements of a dataset is a bit simpler. Since you can iterate over a `tf.data.Dataset` object directly with Python, there is no need to create a `tf.data.Iterator` object. As a result, the discussion of iterators in the [TensorFlow Guide](https://www.tensorflow.org/r1/guide/datasets) is not relevant when eager execution is enabled.

### Create a source `Dataset`

Create a _source_ dataset using one of the factory functions such as [`Dataset.from_tensors`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensors) or [`Dataset.from_tensor_slices`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensor_slices), or using objects that read from files, such as [`TextLineDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset) or [`TFRecordDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TFRecordDataset).
See the [TensorFlow Guide](https://www.tensorflow.org/r1/guide/datasets#reading_input_data) for more information.

```
ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])

# Create a CSV file
import tempfile
_, filename = tempfile.mkstemp()

with open(filename, 'w') as f:
  f.write("""Line 1
Line 2
Line 3
""")

ds_file = tf.data.TextLineDataset(filename)
```

### Apply transformations

Use transformation functions such as [`map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), [`batch`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), and [`shuffle`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) to apply transformations to the records of the dataset. See the [API documentation for `tf.data.Dataset`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) for details.

```
ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)

ds_file = ds_file.batch(2)
```

### Iteration

With eager execution enabled, `Dataset` objects support iteration. If you are familiar with the use of `Dataset`s in TensorFlow graphs, note that there is no need to call `Dataset.make_one_shot_iterator()` or `get_next()`.

```
print('Elements of ds_tensors:')
for x in ds_tensors:
  print(x)

print('\nElements in ds_file:')
for x in ds_file:
  print(x)
```
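The effect of the `map` and `batch` transformations above can be sketched in plain Python (`shuffle` is omitted since its output order is random). This illustrates the semantics only, not the `tf.data` implementation:

```python
# Mimic: Dataset.from_tensor_slices([1..6]).map(square).batch(2), without shuffle.
data = [1, 2, 3, 4, 5, 6]
squared = [x * x for x in data]                                   # map(tf.square)
batches = [squared[i:i + 2] for i in range(0, len(squared), 2)]   # batch(2)
print(batches)  # [[1, 4], [9, 16], [25, 36]]
```

Each transformation returns a new dataset, which is why the calls chain in the TensorFlow code above.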
``` import numpy as np import glob spectrogram = glob.glob('spectrogram-train/*npy') len(spectrogram) def filter_text(string): string = string.lower() splitted = string.split('/')[1].split('.')[0].replace('<>','-').split('-') splitted = [w for w in splitted if not w.isdigit() and w not in ['man', 'woman', 'augment']] return ' '.join(splitted) filter_text(spectrogram[-1]) train_X, train_Y = [], [] for spec in spectrogram: train_Y.append(filter_text(spec)) train_X.append(np.load(spec)) spectrogram = glob.glob('spectrogram-test/*npy') len(spectrogram) test_X, test_Y = [], [] for spec in spectrogram: test_Y.append(filter_text(spec)) test_X.append(np.load(spec)) import tensorflow as tf from tqdm import tqdm train_X = tf.keras.preprocessing.sequence.pad_sequences( train_X, dtype = 'float32', padding = 'post' ) test_X = tf.keras.preprocessing.sequence.pad_sequences( test_X, dtype = 'float32', padding = 'post' ) chars = list(set([c for target in train_Y + test_Y for c in target])) num_classes = len(chars) + 2 idx2char = {idx + 1: char for idx, char in enumerate(chars)} idx2char[0] = '<PAD>' char2idx = {char: idx for idx, char in idx2char.items()} train_Y = [[char2idx[c] for c in target] for target in train_Y] test_Y = [[char2idx[c] for c in target] for target in test_Y] def pad_sentence_batch(sentence_batch, pad_int): padded_seqs = [] seq_lens = [] max_sentence_len = max([len(sentence) for sentence in sentence_batch]) for sentence in sentence_batch: padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence))) seq_lens.append(len(sentence)) return padded_seqs, seq_lens def sparse_tuple_from(sequences, dtype=np.int32): indices = [] values = [] for n, seq in enumerate(sequences): indices.extend(zip([n] * len(seq), range(len(seq)))) values.extend(seq) indices = np.asarray(indices, dtype=np.int64) values = np.asarray(values, dtype=dtype) shape = np.asarray([len(sequences), np.asarray(indices).max(0)[1] + 1], dtype=np.int64) return indices, values, shape def 
pad_second_dim(x, desired_size): padding = tf.tile([[0]], tf.stack([tf.shape(x)[0], desired_size - tf.shape(x)[1]], 0)) return tf.concat([x, padding], 1) class Model: def __init__( self, num_layers, size_layers, learning_rate, num_features, dropout = 1.0, ): self.X = tf.placeholder(tf.float32, [None, None, num_features]) self.label = tf.placeholder(tf.int32, [None, None]) self.Y_seq_len = tf.placeholder(tf.int32, [None]) self.Y = tf.sparse_placeholder(tf.int32) seq_lens = tf.count_nonzero( tf.reduce_sum(self.X, -1), 1, dtype = tf.int32 ) + 10 filled = tf.fill(tf.shape(seq_lens), tf.shape(self.X)[1]) seq_lens = tf.where(seq_lens > tf.shape(self.X)[1], filled, seq_lens) def cells(size, reuse = False): return tf.contrib.rnn.DropoutWrapper( tf.nn.rnn_cell.LSTMCell( size, initializer = tf.orthogonal_initializer(), reuse = reuse, ), state_keep_prob = dropout, output_keep_prob = dropout, ) features = self.X for n in range(num_layers): (out_fw, out_bw), ( state_fw, state_bw, ) = tf.nn.bidirectional_dynamic_rnn( cell_fw = cells(size_layers), cell_bw = cells(size_layers), inputs = features, sequence_length = seq_lens, dtype = tf.float32, scope = 'bidirectional_rnn_%d' % (n), ) features = tf.concat((out_fw, out_bw), 2) logits = tf.layers.dense(features, num_classes) time_major = tf.transpose(logits, [1, 0, 2]) self.time_major = time_major decoded, log_prob = tf.nn.ctc_greedy_decoder(time_major, seq_lens) decoded = tf.to_int32(decoded[0]) self.preds = tf.sparse.to_dense(decoded) self.cost = tf.reduce_mean( tf.nn.ctc_loss( self.Y, time_major, seq_lens ) ) self.optimizer = tf.train.AdamOptimizer( learning_rate = learning_rate ).minimize(self.cost) preds = self.preds[:, :tf.reduce_max(self.Y_seq_len)] masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32) preds = pad_second_dim(preds, tf.reduce_max(self.Y_seq_len)) y_t = tf.cast(preds, tf.int32) self.prediction = tf.boolean_mask(y_t, masks) mask_label = tf.boolean_mask(self.label, masks) 
self.mask_label = mask_label correct_pred = tf.equal(self.prediction, mask_label) correct_index = tf.cast(correct_pred, tf.float32) self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) tf.reset_default_graph() sess = tf.InteractiveSession() size_layers = 512 learning_rate = 1e-3 num_layers = 2 batch_size = 128 epoch = 20 model = Model(num_layers, size_layers, learning_rate, train_X.shape[2]) sess.run(tf.global_variables_initializer()) for e in range(epoch): pbar = tqdm( range(0, len(train_X), batch_size), desc = 'minibatch loop') train_cost, train_accuracy, test_cost, test_accuracy = [], [], [], [] for i in pbar: batch_x = train_X[i : min(i + batch_size, len(train_X))] y = train_Y[i : min(i + batch_size, len(train_X))] batch_y = sparse_tuple_from(y) batch_label, batch_len = pad_sentence_batch(y, 0) _, cost, accuracy = sess.run( [model.optimizer, model.cost, model.accuracy], feed_dict = {model.X: batch_x, model.Y: batch_y, model.label: batch_label, model.Y_seq_len: batch_len}, ) train_cost.append(cost) train_accuracy.append(accuracy) pbar.set_postfix(cost = cost, accuracy = accuracy) pbar = tqdm( range(0, len(test_X), batch_size), desc = 'testing minibatch loop') for i in pbar: batch_x = test_X[i : min(i + batch_size, len(test_X))] y = test_Y[i : min(i + batch_size, len(test_X))] batch_y = sparse_tuple_from(y) batch_label, batch_len = pad_sentence_batch(y, 0) cost, accuracy = sess.run( [model.cost, model.accuracy], feed_dict = {model.X: batch_x, model.Y: batch_y, model.label: batch_label, model.Y_seq_len: batch_len}, ) test_cost.append(cost) test_accuracy.append(accuracy) pbar.set_postfix(cost = cost, accuracy = accuracy) print('epoch %d, training avg cost %f, training avg accuracy %f'%(e + 1, np.mean(train_cost), np.mean(train_accuracy))) print('epoch %d, testing avg cost %f, testing avg accuracy %f'%(e + 1, np.mean(test_cost), np.mean(test_accuracy))) import random random_index = random.randint(0, len(test_X) - 1) batch_x = test_X[random_index : 
random_index + 1] print( 'real:', ''.join( [idx2char[no] for no in test_Y[random_index : random_index + 1][0]] ), ) batch_y = sparse_tuple_from(test_Y[random_index : random_index + 1]) pred = sess.run(model.preds, feed_dict = {model.X: batch_x})[0] print('predicted:', ''.join([idx2char[no] for no in pred])) ```
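`tf.nn.ctc_greedy_decoder`, used in the model above, collapses repeated labels and removes blanks from the per-frame argmax path. The collapse rule itself can be sketched in plain Python; the blank index used here is an illustrative assumption (TensorFlow's CTC ops reserve `num_classes - 1` as the blank):

```python
def ctc_collapse(path, blank):
    """Collapse a per-frame label path: merge adjacent repeats, then drop blanks."""
    out = []
    prev = None
    for label in path:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out

# Frames: a a - b b a -  (with '-' denoting the blank, index 0 here)
print(ctc_collapse([1, 1, 0, 2, 2, 1, 0], blank=0))  # [1, 2, 1]
```

Note that a blank between two identical labels keeps them distinct, which is how CTC can emit doubled characters.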
``` from ei_net import * # import the .py file but you can find all the functions at the bottom of this notebook from utilities import show_values import matplotlib.pyplot as plt %matplotlib inline ########################################## ############ PLOTTING SETUP ############## EI_cmap = "Greys" where_to_save_pngs = "../figs/pngs/" where_to_save_pdfs = "../figs/pdfs/" save = True ########################################## ########################################## ``` # The emergence of informative higher scales in complex networks # Chapter 01: Effective Information in Networks ## Networks and Causal Structure Networks provide a powerful syntax for representing a wide range of systems, from the trivially simple to the highly complex. It is common to characterize networks based on structural properties like their degree distribution or whether they show community structure. While our understanding of these structural properties of networks has been crucial for the rapid rise of network science as a discipline, there is a distinct gap in our treatment of both dependencies between nodes and also higher scales in networks. This gap is especially pressing because networks often have an interpretation where links represent dependencies, such as contact networks in epidemiology, neuronal and functional networks in the brain, or interaction networks among cells, genes, or drugs, and these networks can often be analyzed at multiple different scales. Previously, others have used directed acyclic graphs known as "causal diagrams" to represent causal relationships as dependencies in networks. But there has been little research on quantifying or broadly classifying such causation in networks, particularly those that have both weighted connections and feedback, which are hallmarks of complex systems across domains. 
Here we introduce information-theoretic measures designed to capture the information contained in the dependencies of networks and which can be used to identify when these networks possess informative higher scales. ## Effective Information Describing cause and effect implicitly invokes the idea of a network. For example, if a system in a particular state, *A*, always transitions to state *B*, the causal relationship between *A* and *B* can be represented by a node-link diagram wherein the two nodes---*A* and *B*---are connected by a directed arrow, indicating that *B* depends on *A*. In such a network, the out-weight vector, $W^{out}_{i}$, of a node, $v_i$, represents the possible transitions and their probabilities from that node. Specifically, $W^{out}_{i}$ consists of weights $w_{ij}$ between node $v_i$ and its neighbors $v_j$, where $w_{ij}=0.0$ if there is no edge from $v_i$ to $v_j$. This means the edge weights $w_{ij}$ can be interpreted as the probability $p_{ij}$ that a random walker on $v_i$ will transition to $v_j$ in the next time step. We will refer to such a network as having a *causal structure*. In the cases where links between nodes represent dependency in general, such as influence, strength, or potential causal interactions, but not explicitly transitions (or where details about transitions is lacking), for our analysis we create $W^{out}_{i}$ by normalizing each node's out-weight vector to sum to $1.0$. This generalizes our results to multiple types of representations (although what sort of dependencies the links in the network represent should be kept in mind when interpreting the values of the measures we introduce below). A network's causal structure can be characterized by the uncertainty in the relationships among the nodes' out-weights (possible effects) and in-weights (possible causes). The total information in the dependencies between nodes is a function of this uncertainty and can be derived from two fundamental properties. 
The first is the uncertainty of a node's effects, which can be quantified by the Shannon entropy of its out-weights, $H(W^{out}_{i})$. The average of this entropy, $\langle H(W^{out}_{i}) \rangle$, across all nodes is the amount of noise present in the network's causal structure. Only if $\langle H(W^{out}_{i}) \rangle$ is zero is the network *deterministic*.

The second fundamental causal property is how weight is distributed across the whole network, $\langle W^{out}_{i} \rangle$. This vector $\langle W^{out}_{i} \rangle$ consists of elements that are the sum of the in-weights $w_{ji}$ to each node $v_i$ from each of its incoming neighbors, $v_j$ (then normalized by the total weight of the network). Its entropy, $H(\langle W^{out}_{i} \rangle)$, reflects how certainty is distributed across the network. If all nodes link only to the same node, then $H(\langle W^{out}_{i} \rangle)$ is zero, and the network is totally *degenerate*, since all causes lead to the same effect.

From these two properties we can derive the amount of information in a network's causal structure, the *effective information* ($EI$), as:

$$ EI = H(\langle W^{out}_{i} \rangle) - \langle {H}(W^{out}_{i}) \rangle $$

Here, we use this measure to develop a general classification of networks. Networks with high $EI$ contain more certainty in the relationships between their nodes (since the links represent greater dependencies), whereas networks with low $EI$ contain less certainty. In this work, we show how the connectivity and growth rules of a network have a deep relationship to that network's $EI$. This also provides a principled means of quantifying the amount of information among the micro-, meso-, and macroscale dependencies in a network. We introduce a formalism for finding and assessing the most informative scale of a network: the scale that minimizes the uncertainty in the dependencies between nodes.
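The $EI$ formula can be written down directly. This is a minimal sketch of what the `effective_information` function imported from `ei_net` computes for a row-stochastic transition-probability matrix, not that module's actual code:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits, ignoring zero entries."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def ei(tpm):
    """EI = H(<W_out_i>) - <H(W_out_i)> for a row-stochastic TPM."""
    avg_out = tpm.mean(axis=0)                              # <W_out_i>
    avg_h = np.mean([shannon_entropy(row) for row in tpm])  # <H(W_out_i)>
    return shannon_entropy(avg_out) - avg_h

# A 4-state permutation (like the Copy-Copy gate below) is deterministic
# and non-degenerate, so EI is maximal: log2(4) = 2 bits.
copy = np.eye(4)[[0, 2, 1, 3]]
print(ei(copy))   # 2.0

# A star (every state maps to state 0) is fully degenerate: EI = 0.
star = np.zeros((4, 4)); star[:, 0] = 1.0
print(ei(star))   # 0.0
```

These two extremes bracket the values computed for the example TPMs in the cells that follow.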
For some networks, a macroscale description of the network can be more informative in this manner, demonstrating a phenomenon known as *causal emergence*. ## 1.0 Create a Few Example Transition-Probability Matrices ``` Copy_Copy = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]]) And_And = np.array([[1.0, 0.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]]) Or_Or = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0], [0.0, 0.0, 0.0, 1.0], [0.0, 0.0, 0.0, 1.0]]) Or_Copy = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0], [0.0, 0.0, 0.0, 1.0]]) Star = np.array([[1.0, 0.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0]]) ``` ### 1.0.1 Plot these TPMs, showing their $EI$ values ``` fig, (ax0, ax1, ax2, ax3, ax4) = plt.subplots(1, 5, figsize=(22,4)) c0 = ax0.pcolor( np.arange(-0.5, Copy_Copy.shape[0], 1), np.arange(-0.5, Copy_Copy.shape[0], 1), Copy_Copy, edgecolors='#999999', linewidths=3.0, cmap=EI_cmap) c1 = ax1.pcolor( np.arange(-0.5, And_And.shape[0], 1), np.arange(-0.5, And_And.shape[0], 1), And_And, edgecolors='#999999', linewidths=3.0, cmap=EI_cmap) c2 = ax2.pcolor( np.arange(-0.5, Or_Or.shape[0], 1), np.arange(-0.5, Or_Or.shape[0], 1), Or_Or, edgecolors='#999999', linewidths=3.0, cmap=EI_cmap) c3 = ax3.pcolor( np.arange(-0.5, Or_Copy.shape[0], 1), np.arange(-0.5, Or_Copy.shape[0], 1), Or_Copy, edgecolors='#999999', linewidths=3.0, cmap=EI_cmap) c4 = ax4.pcolor( np.arange(-0.5, Star.shape[0], 1), np.arange(-0.5, Star.shape[0], 1), Star, edgecolors='#999999', linewidths=3.0, cmap=EI_cmap) show_values(c0, ax=ax0, fmt="%.1f", fontsize=16) show_values(c1, ax=ax1, fmt="%.1f", fontsize=16) show_values(c2, ax=ax2, fmt="%.1f", fontsize=16) show_values(c3, ax=ax3, fmt="%.1f", fontsize=16) show_values(c4, ax=ax4, fmt="%.1f", fontsize=16) ax0.invert_yaxis() ax1.invert_yaxis() ax2.invert_yaxis() ax3.invert_yaxis() ax4.invert_yaxis() xlabs = 
ylabs = ['0|0','0|1', '1|0', '1|1'] ax0.set_xticks(np.arange(0, Copy_Copy.shape[0], 1)) ax0.set_yticks(np.arange(0, Copy_Copy.shape[1], 1)) ax0.set_xticklabels(xlabs, fontsize=14) ax0.set_yticklabels(ylabs, fontsize=14) ax0.set_xticks(np.arange(-0.5, Copy_Copy.shape[0]-0.5, 1), minor=True) ax0.set_yticks(np.arange(-0.5, Copy_Copy.shape[1]-0.5, 1), minor=True) ax1.set_xticks(np.arange(0, And_And.shape[0], 1)) ax1.set_yticks(np.arange(0, And_And.shape[1], 1)) ax1.set_xticklabels(xlabs, fontsize=14) ax1.set_yticklabels(ylabs, fontsize=14) ax1.set_xticks(np.arange(-0.5, And_And.shape[0]-0.5, 1), minor=True) ax1.set_yticks(np.arange(-0.5, And_And.shape[1]-0.5, 1), minor=True) ax2.set_xticks(np.arange(0, Or_Or.shape[0], 1)) ax2.set_yticks(np.arange(0, Or_Or.shape[1], 1)) ax2.set_xticklabels(xlabs, fontsize=14) ax2.set_yticklabels(ylabs, fontsize=14) ax2.set_xticks(np.arange(-0.5, Or_Or.shape[0]-0.5, 1), minor=True) ax2.set_yticks(np.arange(-0.5, Or_Or.shape[1]-0.5, 1), minor=True) ax3.set_xticks(np.arange(0, Or_Copy.shape[0], 1)) ax3.set_yticks(np.arange(0, Or_Copy.shape[1], 1)) ax3.set_xticklabels(xlabs, fontsize=14) ax3.set_yticklabels(ylabs, fontsize=14) ax3.set_xticks(np.arange(-0.5, Or_Copy.shape[0]-0.5, 1), minor=True) ax3.set_yticks(np.arange(-0.5, Or_Copy.shape[1]-0.5, 1), minor=True) ax4.set_xticks(np.arange(0, Star.shape[0], 1)) ax4.set_yticks(np.arange(0, Star.shape[1], 1)) ax4.set_xticklabels(xlabs, fontsize=14) ax4.set_yticklabels(ylabs, fontsize=14) ax4.set_xticks(np.arange(-0.5, Star.shape[0]-0.5, 1), minor=True) ax4.set_yticks(np.arange(-0.5, Star.shape[1]-0.5, 1), minor=True) ax0.xaxis.tick_top() ax1.xaxis.tick_top() ax2.xaxis.tick_top() ax3.xaxis.tick_top() ax4.xaxis.tick_top() ax0.set_title('Copy-Copy logic gate\n $EI = %.3f$ \n'% effective_information(Copy_Copy), fontsize=20, pad=10) ax1.set_title('And-And logic gate\n $EI = %.3f$ \n'% effective_information(And_And), fontsize=20, pad=10) ax2.set_title('Or-Or logic gate\n $EI = %.3f$ \n'% 
effective_information(Or_Or), fontsize=20, pad=10) ax3.set_title('Or-Copy logic gate\n $EI = %.3f$ \n'% effective_information(Or_Copy), fontsize=20, pad=10) ax4.set_title('Star-like logic gate\n $EI = %.3f$ \n'% effective_information(Star), fontsize=20, pad=10) if save: plt.savefig(where_to_save_pngs+"Example1_LogicGates.png", bbox_inches='tight', dpi=425) plt.savefig(where_to_save_pdfs+"Example1_LogicGates.pdf", bbox_inches='tight') plt.show() ``` ______________________ ## 1.1 Add noise to the transition probability matrices ``` noise = np.random.uniform(0.0,0.1,size=Copy_Copy.shape) Copy_Copy_noise = Copy_Copy + noise Copy_Copy_noise = Copy_Copy_noise / Copy_Copy_noise.sum(axis=1) noise = np.random.uniform(0.0,0.1,size=And_And.shape) And_And_noise = And_And + noise And_And_noise = And_And_noise / And_And_noise.sum(axis=1) noise = np.random.uniform(0.0,0.1,size=Or_Or.shape) Or_Or_noise = Or_Or + noise Or_Or_noise = Or_Or_noise / Or_Or_noise.sum(axis=1) noise = np.random.uniform(0.0,0.1,size=Or_Copy.shape) Or_Copy_noise = Or_Copy + noise Or_Copy_noise = Or_Copy_noise / Or_Copy_noise.sum(axis=1) noise = np.random.uniform(0.0,0.1,size=Star.shape) Star_noise = Star + noise Star_noise = Star_noise / Star_noise.sum(axis=1) ``` ### 1.1.1 Plot these TPMs, showing their $EI$ values ``` fig, (ax0, ax1, ax2, ax3, ax4) = plt.subplots(1, 5, figsize=(22,4)) c0 = ax0.pcolor( np.arange(-0.5, Copy_Copy_noise.shape[0], 1), np.arange(-0.5, Copy_Copy_noise.shape[0], 1), Copy_Copy_noise, edgecolors='#999999', linewidths=3.0, cmap=EI_cmap) c1 = ax1.pcolor( np.arange(-0.5, And_And_noise.shape[0], 1), np.arange(-0.5, And_And_noise.shape[0], 1), And_And_noise, edgecolors='#999999', linewidths=3.0, cmap=EI_cmap) c2 = ax2.pcolor( np.arange(-0.5, Or_Or_noise.shape[0], 1), np.arange(-0.5, Or_Or_noise.shape[0], 1), Or_Or_noise, edgecolors='#999999', linewidths=3.0, cmap=EI_cmap) c3 = ax3.pcolor( np.arange(-0.5, Or_Copy_noise.shape[0], 1), np.arange(-0.5, Or_Copy_noise.shape[0], 1), 
Or_Copy_noise, edgecolors='#999999', linewidths=3.0, cmap=EI_cmap) c4 = ax4.pcolor( np.arange(-0.5, Star_noise.shape[0], 1), np.arange(-0.5, Star_noise.shape[0], 1), Star_noise, edgecolors='#999999', linewidths=3.0, cmap=EI_cmap) show_values(c0, ax=ax0, fmt="%.2f", fontsize=16) show_values(c1, ax=ax1, fmt="%.2f", fontsize=16) show_values(c2, ax=ax2, fmt="%.2f", fontsize=16) show_values(c3, ax=ax3, fmt="%.2f", fontsize=16) show_values(c4, ax=ax4, fmt="%.2f", fontsize=16) ax0.invert_yaxis() ax1.invert_yaxis() ax2.invert_yaxis() ax3.invert_yaxis() ax4.invert_yaxis() xlabs = ylabs = ['0|0','0|1', '1|0', '1|1'] ax0.set_xticks(np.arange(0, Copy_Copy_noise.shape[0], 1)) ax0.set_yticks(np.arange(0, Copy_Copy_noise.shape[1], 1)) ax0.set_xticklabels(xlabs, fontsize=14) ax0.set_yticklabels(ylabs, fontsize=14) ax0.set_xticks(np.arange(-0.5, Copy_Copy_noise.shape[0]-0.5, 1), minor=True) ax0.set_yticks(np.arange(-0.5, Copy_Copy_noise.shape[1]-0.5, 1), minor=True) ax1.set_xticks(np.arange(0, And_And_noise.shape[0], 1)) ax1.set_yticks(np.arange(0, And_And_noise.shape[1], 1)) ax1.set_xticklabels(xlabs, fontsize=14) ax1.set_yticklabels(ylabs, fontsize=14) ax1.set_xticks(np.arange(-0.5, And_And_noise.shape[0]-0.5, 1), minor=True) ax1.set_yticks(np.arange(-0.5, And_And_noise.shape[1]-0.5, 1), minor=True) ax2.set_xticks(np.arange(0, Or_Or_noise.shape[0], 1)) ax2.set_yticks(np.arange(0, Or_Or_noise.shape[1], 1)) ax2.set_xticklabels(xlabs, fontsize=14) ax2.set_yticklabels(ylabs, fontsize=14) ax2.set_xticks(np.arange(-0.5, Or_Or_noise.shape[0]-0.5, 1), minor=True) ax2.set_yticks(np.arange(-0.5, Or_Or_noise.shape[1]-0.5, 1), minor=True) ax3.set_xticks(np.arange(0, Or_Copy_noise.shape[0], 1)) ax3.set_yticks(np.arange(0, Or_Copy_noise.shape[1], 1)) ax3.set_xticklabels(xlabs, fontsize=14) ax3.set_yticklabels(ylabs, fontsize=14) ax3.set_xticks(np.arange(-0.5, Or_Copy_noise.shape[0]-0.5, 1), minor=True) ax3.set_yticks(np.arange(-0.5, Or_Copy_noise.shape[1]-0.5, 1), minor=True) 
ax4.set_xticks(np.arange(0, Star_noise.shape[0], 1)) ax4.set_yticks(np.arange(0, Star_noise.shape[1], 1)) ax4.set_xticklabels(xlabs, fontsize=14) ax4.set_yticklabels(ylabs, fontsize=14) ax4.set_xticks(np.arange(-0.5, Star_noise.shape[0]-0.5, 1), minor=True) ax4.set_yticks(np.arange(-0.5, Star_noise.shape[1]-0.5, 1), minor=True) ax0.xaxis.tick_top() ax1.xaxis.tick_top() ax2.xaxis.tick_top() ax3.xaxis.tick_top() ax4.xaxis.tick_top() ax0.set_title('Copy-Copy logic gate\n $EI = %.3f$ \n'% effective_information(Copy_Copy_noise), fontsize=20, pad=10) ax1.set_title('And-And logic gate\n $EI = %.3f$ \n'% effective_information(And_And_noise), fontsize=20, pad=10) ax2.set_title('Or-Or logic gate\n $EI = %.3f$ \n'% effective_information(Or_Or_noise), fontsize=20, pad=10) ax3.set_title('Or-Copy logic gate\n $EI = %.3f$ \n'% effective_information(Or_Copy_noise), fontsize=20, pad=10) ax4.set_title('Star-like logic gate\n $EI = %.3f$ \n'% effective_information(Star_noise), fontsize=20, pad=10) if save: plt.savefig(where_to_save_pngs+"Example2_LogicGates.png", bbox_inches='tight', dpi=425) plt.savefig(where_to_save_pdfs+"Example2_LogicGates.pdf", bbox_inches='tight') plt.show() ``` _______________________ ## 1.2 Random Matrices ``` rand0 = np.random.rand(4,4) rand0 = np.array([rand0[i]/sum(rand0[i]) for i in range(rand0.shape[0])]) rand1 = np.random.rand(4,4) rand1 = np.array([rand1[i]/sum(rand1[i]) for i in range(rand1.shape[0])]) rand2 = np.random.rand(4,4) rand2 = np.array([rand2[i]/sum(rand2[i]) for i in range(rand2.shape[0])]) rand3 = np.random.rand(4,4) rand3 = np.array([rand3[i]/sum(rand3[i]) for i in range(rand3.shape[0])]) rand4 = np.random.rand(4,4) rand4 = np.array([rand4[i]/sum(rand4[i]) for i in range(rand4.shape[0])]) ``` ### 1.2.1 Plot these TPMs, showing their $EI$ values ``` fig, (ax0, ax1, ax2, ax3, ax4) = plt.subplots(1, 5, figsize=(22,4)) c0 = ax0.pcolor( np.arange(-0.5, rand0.shape[0], 1), np.arange(-0.5, rand0.shape[0], 1), rand0, edgecolors='#999999', 
linewidths=3.0, cmap=EI_cmap) c1 = ax1.pcolor( np.arange(-0.5, rand1.shape[0], 1), np.arange(-0.5, rand1.shape[0], 1), rand1, edgecolors='#999999', linewidths=3.0, cmap=EI_cmap) c2 = ax2.pcolor( np.arange(-0.5, rand2.shape[0], 1), np.arange(-0.5, rand2.shape[0], 1), rand2, edgecolors='#999999', linewidths=3.0, cmap=EI_cmap) c3 = ax3.pcolor( np.arange(-0.5, rand3.shape[0], 1), np.arange(-0.5, rand3.shape[0], 1), rand3, edgecolors='#999999', linewidths=3.0, cmap=EI_cmap) c4 = ax4.pcolor( np.arange(-0.5, rand4.shape[0], 1), np.arange(-0.5, rand4.shape[0], 1), rand4, edgecolors='#999999', linewidths=3.0, cmap=EI_cmap) show_values(c0, ax=ax0, fmt="%.2f", fontsize=16) show_values(c1, ax=ax1, fmt="%.2f", fontsize=16) show_values(c2, ax=ax2, fmt="%.2f", fontsize=16) show_values(c3, ax=ax3, fmt="%.2f", fontsize=16) show_values(c4, ax=ax4, fmt="%.2f", fontsize=16) ax0.invert_yaxis() ax1.invert_yaxis() ax2.invert_yaxis() ax3.invert_yaxis() ax4.invert_yaxis() xlabs = ylabs = ['0|0','0|1', '1|0', '1|1'] ax0.set_xticks(np.arange(0, rand0.shape[0], 1)) ax0.set_yticks(np.arange(0, rand0.shape[1], 1)) ax0.set_xticklabels(xlabs, fontsize=14) ax0.set_yticklabels(ylabs, fontsize=14) ax0.set_xticks(np.arange(-0.5, rand0.shape[0]-0.5, 1), minor=True) ax0.set_yticks(np.arange(-0.5, rand0.shape[1]-0.5, 1), minor=True) ax1.set_xticks(np.arange(0, rand1.shape[0], 1)) ax1.set_yticks(np.arange(0, rand1.shape[1], 1)) ax1.set_xticklabels(xlabs, fontsize=14) ax1.set_yticklabels(ylabs, fontsize=14) ax1.set_xticks(np.arange(-0.5, rand1.shape[0]-0.5, 1), minor=True) ax1.set_yticks(np.arange(-0.5, rand1.shape[1]-0.5, 1), minor=True) ax2.set_xticks(np.arange(0, rand2.shape[0], 1)) ax2.set_yticks(np.arange(0, rand2.shape[1], 1)) ax2.set_xticklabels(xlabs, fontsize=14) ax2.set_yticklabels(ylabs, fontsize=14) ax2.set_xticks(np.arange(-0.5, rand2.shape[0]-0.5, 1), minor=True) ax2.set_yticks(np.arange(-0.5, rand2.shape[1]-0.5, 1), minor=True) ax3.set_xticks(np.arange(0, rand3.shape[0], 1)) 
ax3.set_yticks(np.arange(0, rand3.shape[1], 1)) ax3.set_xticklabels(xlabs, fontsize=14) ax3.set_yticklabels(ylabs, fontsize=14) ax3.set_xticks(np.arange(-0.5, rand3.shape[0]-0.5, 1), minor=True) ax3.set_yticks(np.arange(-0.5, rand3.shape[1]-0.5, 1), minor=True) ax4.set_xticks(np.arange(0, rand4.shape[0], 1)) ax4.set_yticks(np.arange(0, rand4.shape[1], 1)) ax4.set_xticklabels(xlabs, fontsize=14) ax4.set_yticklabels(ylabs, fontsize=14) ax4.set_xticks(np.arange(-0.5, rand4.shape[0]-0.5, 1), minor=True) ax4.set_yticks(np.arange(-0.5, rand4.shape[1]-0.5, 1), minor=True) ax0.xaxis.tick_top() ax1.xaxis.tick_top() ax2.xaxis.tick_top() ax3.xaxis.tick_top() ax4.xaxis.tick_top() ax0.set_title('Random TPM 0\n $EI = %.3f$ \n'% effective_information(rand0), fontsize=20, pad=10) ax1.set_title('Random TPM 1\n $EI = %.3f$ \n'% effective_information(rand1), fontsize=20, pad=10) ax2.set_title('Random TPM 2\n $EI = %.3f$ \n'% effective_information(rand2), fontsize=20, pad=10) ax3.set_title('Random TPM 3\n $EI = %.3f$ \n'% effective_information(rand3), fontsize=20, pad=10) ax4.set_title('Random TPM 4\n $EI = %.3f$ \n'% effective_information(rand4), fontsize=20, pad=10) if save: plt.savefig(where_to_save_pngs+"Example3_RandomTPMs.png", bbox_inches='tight', dpi=425) plt.savefig(where_to_save_pdfs+"Example3_RandomTPMs.pdf", bbox_inches='tight') plt.show() ``` ## 1.3 Example calculation figure (Supplemental Information, Figure 1) ``` ############ PLOTTING SETUP ############## from matplotlib import gridspec plt.rc('axes', linewidth=3) font = {'family': 'serif', 'color': 'k', 'weight': 'normal', 'size': 28} plt.rc('text', usetex=True) plt.rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']}) ########################################## TPM = np.array([[0.0, 0.0, 0.0, 0.5, 0.5], [1/3, 0.0, 1/3, 1/3, 0.0], [0.0, 0.5, 0.0, 0.5, 0.0], [0.0, 0.0, 0.0, 0.0, 1.0], [0.5, 0.0, 0.0, 0.5, 0.0]]) fig = plt.figure(figsize=(7.3, 16)) gs = gridspec.GridSpec(4, 1, height_ratios=[7, 10, 1.2, 1.2]) 
xlabs = ylabs = ['$A$', '$B$', '$C$', '$D$', '$E$'] ax0 = plt.subplot(gs[1]) ax0.set_xticks(np.arange(0, TPM.shape[0], 1)) ax0.set_yticks(np.arange(0, TPM.shape[0], 1)) ax0.set_xticklabels(xlabs, fontsize=32) ax0.set_yticklabels(ylabs, fontsize=32) ax0.set_xticks(np.arange(-0.5, TPM.shape[0]-0.5, 1), minor=True) ax0.set_yticks(np.arange(-0.5, TPM.shape[0]-0.5, 1), minor=True) ax0.tick_params(axis='y', which='major', pad=7) c0 = plt.pcolor( np.arange(-.5, TPM.shape[0], 1), np.arange(-.5, TPM.shape[1], 1), TPM, edgecolors='k', linewidths=3.0, cmap='Blues', vmin=-.05, vmax=1.2) show_values(c0, ax=ax0, fmt="%.2f", fontsize=26) ax0.invert_yaxis() ax0.xaxis.set_label_position("top") ax0.xaxis.tick_top() ax0.set_xlabel(r'$t + 1$', size=28, labelpad=8.0) ax0.set_ylabel(r'$t$', size=28, rotation=0, labelpad=27.0) ax0.xaxis.label.set_position((0.5,5.0)) ax0.text(4.65, 0.20, '$=W_{A}^{out}$', ha='left', rotation=0, wrap=True, size=32) ax0.text(4.65, 1.20, '$=W_{B}^{out}$', ha='left', rotation=0, wrap=True, size=32) ax0.text(4.65, 2.20, '$=W_{C}^{out}$', ha='left', rotation=0, wrap=True, size=32) ax0.text(4.65, 3.20, '$=W_{D}^{out}$', ha='left', rotation=0, wrap=True, size=32) ax0.text(4.65, 4.20, '$=W_{E}^{out}$', ha='left', rotation=0, wrap=True, size=32) ms = 78 ax1 = plt.subplot(gs[2]) Win_j = TPM.sum(axis=0).reshape(1,TPM.shape[0]) Win = W_in(TPM).reshape(1,TPM.shape[0]) c1 = plt.pcolor( np.arange(-.5, Win_j.shape[1], 1), np.arange(0.0, 1.5, 1), Win_j, edgecolors='k', linewidths=3.0, cmap='Oranges', vmin=0, vmax=3.0) show_values(c1, ax=ax1, fmt="%.2f", fontsize=26) ax1.set_xlabel("") ax1.set_ylabel("") ax1.set_xticks([]) ax1.set_yticks([]) ax1.set_xticklabels(['']) ax1.set_yticklabels(['']) ax2 = plt.subplot(gs[3]) c2 = plt.pcolor( np.arange(-.5, Win.shape[1], 1), np.arange(0.0, 1.5, 1), Win, edgecolors='k', linewidths=3.0, cmap='Oranges', vmin=0, vmax=0.75) show_values(c2, ax=ax2, fmt="%.2f", fontsize=26) ax2.set_xlabel("") ax2.set_ylabel("") ax2.set_xticks([]) 
ax2.set_yticks([]) ax2.set_xticklabels(['']) ax2.set_yticklabels(['']) string10 = r'$= \displaystyle\sum_{i=1}^N w_{ij}$' string20 = r'$= \langle W_{i}^{out} \rangle$' ax1.text(4.65, -.15, string10, ha='left', rotation=0, wrap=True, size=28) ax2.text(4.65, 0.15, string20, ha='left', rotation=0, wrap=True, size=32) plt.subplots_adjust(wspace=0, hspace=0.05) if save: plt.savefig(where_to_save_pngs+"Example4_ExampleTPM.png", bbox_inches='tight', dpi=425) plt.savefig(where_to_save_pdfs+"Example4_ExampleTPM.pdf", bbox_inches='tight') plt.show() ``` The adjacency matrix of a network with 1.158 bits of effective information. The rows correspond to $W^{out}_{i}$, a vector of probabilities that a random walker on node $v_i$ at time $t$ transitions to $v_j$ in the following time step, $t+1$. $\langle W^{out}_{i}\rangle$ represents the (normalized) input weight distribution of the network, that is, the probabilities that a random walker will arrive at a given node $v_j$ at $t+1$, after a uniform introduction of random walkers into the network at $t$. 
``` Win = W_in(TPM) vals = Win Win_cols = plt.cm.Oranges(Win+0.15) WoutA_cols = plt.cm.Blues(max(TPM[0])) WoutB_cols = plt.cm.Blues(max(TPM[1])) WoutC_cols = plt.cm.Blues(max(TPM[2])) WoutD_cols = plt.cm.Blues(max(TPM[3])*0.66) WoutE_cols = plt.cm.Blues(max(TPM[4])) tpm0 = TPM Gtpm0 = check_network(tpm0) fig, ((ax00, ax01), (ax02, ax03), (ax04, ax05)) = plt.subplots( 3, 2, figsize=(16*1.3,9*1.3)) plt.subplots_adjust(left=None, bottom=0.1, right=None, top=None, wspace=0.17, hspace=0.3) ax00.bar( np.linspace(0.0, 5.5, 5), TPM[0]+0.01, color=WoutA_cols, linewidth=2.0, edgecolor='k', width=0.6, label=r'$W_{A}^{out}$') x_wij = np.linspace(0.0, 5.5, 5) y_wij = tpm0[0] for i in range(len(x_wij)): ax00.text(x_wij[i], y_wij[i]+0.040, "%.2f"%y_wij[i], ha='center', fontsize=21, color='#262626') ax00.text(x_wij[i], y_wij[i]+0.040, "%.2f"%y_wij[i], ha='center', fontsize=21, color='royalblue', alpha=0.6) ax00.bar( np.linspace(-.6, 4.9, 5), vals, color=Win_cols, linewidth=2.0, edgecolor='k', width=0.6, label=r'$\langle W_{i}^{out} \rangle$') x_wij = np.linspace(-.6, 4.9, 5) y_wij = vals for i in range(len(x_wij)): ax00.text(x_wij[i], y_wij[i]+0.035, "%.2f"%y_wij[i], ha='center', fontsize=21, color='#262626') ax00.text(x_wij[i], y_wij[i]+0.035, "%.2f"%y_wij[i], ha='center', fontsize=21, color='#e56c13', alpha=0.9) ax00.set_ylim(0.0,1.19) ax00.set_xlim(-0.95,5.85) ax00.set_xticks(np.linspace(-.3, 5.2, 5)) ax00.set_yticks(np.linspace(0, 1, 6)) ax00.set_yticklabels(np.round(np.linspace(0, 1, 6), 2), size=16) ax00.set_xticklabels(xlabs, size=22) ax00.set_ylabel(r'$w_{ij}$', fontsize=30, rotation='horizontal', labelpad=25) ax00.set_xlabel('out-neighbor', fontsize=20, labelpad=0) ax00.set_axisbelow(True) ax00.grid(which='major', linestyle='-', color='#999999', linewidth=2.5, alpha=0.3) ax00.legend(loc=2, fontsize=22, framealpha=0.7) strax1 = r'$D_{KL}[W^{out}_{A}||\langle W_{i}^{out} \rangle] = %.3f $'%\ effect_information_i(Gtpm0, node_i=0) ax00.text(3.12, 0.802, strax1, ha='center', 
fontsize=28, color='k') ax02.bar( np.linspace(0.0, 5.5, 5), tpm0[1]+0.01, color=WoutB_cols, linewidth=2.0, edgecolor='k', width=0.6, label=r'$W_{B}^{out}$') x_wij = np.linspace(0.0, 5.5, 5) y_wij = tpm0[1] for i in range(len(x_wij)): ax02.text(x_wij[i], y_wij[i]+0.040, "%.2f"%y_wij[i], ha='center', fontsize=21, color='#262626') ax02.text(x_wij[i], y_wij[i]+0.040, "%.2f"%y_wij[i], ha='center', fontsize=21, color='royalblue', alpha=0.6) ax02.set_xticklabels(["A", "B", "C", "D", "E"]) ax02.bar( np.linspace(-.6, 4.9, 5), vals, color=Win_cols, linewidth=2.0, edgecolor='k', width=0.6, label=r'$\langle W_{i}^{out} \rangle$') x_wij = np.linspace(-.6, 4.9, 5) y_wij = vals for i in range(len(x_wij)): ax02.text(x_wij[i], y_wij[i]+0.035, "%.2f"%y_wij[i], ha='center', fontsize=21, color='#262626') ax02.text(x_wij[i], y_wij[i]+0.035, "%.2f"%y_wij[i], ha='center', fontsize=21, color='#e56c13', alpha=0.9) ax02.set_ylim(0.0,1.19) ax02.set_xlim(-0.95,5.85) ax02.set_xticks(np.linspace(-.3, 5.2, 5)) ax02.set_yticks(np.linspace(0, 1, 6)) ax02.set_yticklabels(np.round(np.linspace(0, 1, 6), 2), size=16) ax02.set_xticklabels(xlabs, size=22) ax02.set_ylabel(r'$w_{ij}$', fontsize=30, rotation='horizontal', labelpad=25) ax02.set_xlabel('out-neighbor', fontsize=20, labelpad=0) ax02.set_axisbelow(True) ax02.grid(which='major', linestyle='-', color='#999999', linewidth=2.5, alpha=0.3) ax02.legend(loc=2, fontsize=22, framealpha=0.7) strax1 = r'$D_{KL}[W^{out}_{B}||\langle W_{i}^{out} \rangle] = %.3f $'%\ effect_information_i(Gtpm0, node_i=1) ax02.text(3.12, 0.802, strax1, ha='center', fontsize=28, color='k') ax04.bar( np.linspace(0.0, 5.5, 5), tpm0[2]+0.01, color=WoutC_cols, linewidth=2.0, edgecolor='k', width=0.6, label=r'$W_{C}^{out}$') x_wij = np.linspace(0.0, 5.5, 5) y_wij = tpm0[2] for i in range(len(x_wij)): ax04.text(x_wij[i], y_wij[i]+0.040, "%.2f"%y_wij[i], ha='center', fontsize=21, color='#262626') ax04.text(x_wij[i], y_wij[i]+0.040, "%.2f"%y_wij[i], ha='center', fontsize=21, 
color='royalblue', alpha=0.6) ax04.set_xticklabels(["A", "B", "C", "D", "E"]) ax04.bar( np.linspace(-.6, 4.9, 5), vals, color=Win_cols, linewidth=2.0, edgecolor='k', width=0.6, label=r'$\langle W_{i}^{out} \rangle$') x_wij = np.linspace(-.6, 4.9, 5) y_wij = vals for i in range(len(x_wij)): ax04.text(x_wij[i], y_wij[i]+0.035, "%.2f"%y_wij[i], ha='center', fontsize=21, color='#262626') ax04.text(x_wij[i], y_wij[i]+0.035, "%.2f"%y_wij[i], ha='center', fontsize=21, color='#e56c13', alpha=0.9) ax04.set_ylim(0.0,1.19) ax04.set_xlim(-0.95,5.85) ax04.set_xticks(np.linspace(-.3, 5.2, 5)) ax04.set_yticks(np.linspace(0, 1, 6)) ax04.set_yticklabels(np.round(np.linspace(0, 1, 6), 2), size=16) ax04.set_xticklabels(xlabs, size=22) ax04.set_ylabel(r'$w_{ij}$', fontsize=30, rotation='horizontal', labelpad=25) ax04.set_xlabel('out-neighbor', fontsize=20, labelpad=0) ax04.set_axisbelow(True) ax04.grid(which='major', linestyle='-', color='#999999', linewidth=2.5, alpha=0.3) ax04.legend(loc=2, fontsize=22, framealpha=0.7) strax1 = r'$D_{KL}[W^{out}_{C}||\langle W_{i}^{out} \rangle] = %.3f $'%\ effect_information_i(Gtpm0, node_i=2) ax04.text(3.12, 0.802, strax1, ha='center', fontsize=28, color='k') ax01.bar( np.linspace(0.0, 5.5, 5), tpm0[3]+0.01, color=WoutD_cols, linewidth=2.0, edgecolor='k', width=0.6, label=r'$W_{D}^{out}$') x_wij = np.linspace(0.0, 5.5, 5) y_wij = tpm0[3] for i in range(len(x_wij)): ax01.text(x_wij[i], y_wij[i]+0.040, "%.2f"%y_wij[i], ha='center', fontsize=21, color='#262626') ax01.text(x_wij[i], y_wij[i]+0.040, "%.2f"%y_wij[i], ha='center', fontsize=21, color='royalblue', alpha=0.6) ax01.set_xticklabels(["A", "B", "C", "D", "E"]) ax01.bar( np.linspace(-.6, 4.9, 5), vals, color=Win_cols, linewidth=2.0, edgecolor='k', width=0.6, label=r'$\langle W_{i}^{out} \rangle$') x_wij = np.linspace(-.6, 4.9, 5) y_wij = vals for i in range(len(x_wij)): ax01.text(x_wij[i], y_wij[i]+0.035, "%.2f"%y_wij[i], ha='center', fontsize=21, color='#262626') ax01.text(x_wij[i], 
y_wij[i]+0.035, "%.2f"%y_wij[i], ha='center', fontsize=21, color='#e56c13', alpha=0.9) ax01.set_ylim(0.0,1.19) ax01.set_xlim(-0.95,5.85) ax01.set_xticks(np.linspace(-.3, 5.2, 5)) ax01.set_yticks(np.linspace(0, 1, 6)) ax01.set_yticklabels(np.round(np.linspace(0, 1, 6), 2), size=16) ax01.set_xticklabels(xlabs, size=22) ax01.set_ylabel(r'$w_{ij}$', fontsize=30, rotation='horizontal', labelpad=25) ax01.set_xlabel('out-neighbor', fontsize=20, labelpad=0) ax01.set_axisbelow(True) ax01.grid(which='major', linestyle='-', color='#999999', linewidth=2.5, alpha=0.3) ax01.legend(loc=2, fontsize=22, framealpha=0.7) strax1 = r'$D_{KL}[W^{out}_{D}||\langle W_{i}^{out} \rangle] = %.3f $'%\ effect_information_i(Gtpm0, node_i=3) ax01.text(3.12, 0.802, strax1, ha='center', fontsize=28, color='k') ax03.bar( np.linspace(0.0, 5.5, 5), tpm0[4]+0.01, color=WoutE_cols, linewidth=2.0, edgecolor='k', width=0.6, label=r'$W_{E}^{out}$') x_wij = np.linspace(0.0, 5.5, 5) y_wij = tpm0[4] for i in range(len(x_wij)): ax03.text(x_wij[i], y_wij[i]+0.040, "%.2f"%y_wij[i], ha='center', fontsize=21, color='#262626') ax03.text(x_wij[i], y_wij[i]+0.040, "%.2f"%y_wij[i], ha='center', fontsize=21, color='royalblue', alpha=0.6) ax03.set_xticklabels(["A", "B", "C", "D", "E"]) ax03.bar( np.linspace(-.6, 4.9, 5), vals, color=Win_cols, linewidth=2.0, edgecolor='k', width=0.6, label=r'$\langle W_{i}^{out} \rangle$') x_wij = np.linspace(-.6, 4.9, 5) y_wij = vals for i in range(len(x_wij)): ax03.text(x_wij[i], y_wij[i]+0.035, "%.2f"%y_wij[i], ha='center', fontsize=21, color='#262626') ax03.text(x_wij[i], y_wij[i]+0.035, "%.2f"%y_wij[i], ha='center', fontsize=21, color='#e56c13', alpha=0.9) ax03.set_ylim(0.0,1.19) ax03.set_xlim(-0.95,5.85) ax03.set_xticks(np.linspace(-.3, 5.2, 5)) ax03.set_yticks(np.linspace(0, 1, 6)) ax03.set_yticklabels(np.round(np.linspace(0, 1, 6), 2), size=16) ax03.set_xticklabels(xlabs, size=22) ax03.set_ylabel(r'$w_{ij}$', fontsize=30, rotation='horizontal', labelpad=25) 
ax03.set_xlabel('out-neighbor', fontsize=20, labelpad=0) ax03.set_axisbelow(True) ax03.grid(which='major', linestyle='-', color='#999999', linewidth=2.5, alpha=0.3) ax03.legend(loc=2,fontsize=22, framealpha=0.7) strax1 = r'$D_{KL}[W^{out}_{E}||\langle W_{i}^{out} \rangle] = %.3f $'%\ effect_information_i(Gtpm0, node_i=4) ax03.text(3.12, 0.802, strax1, ha='center', fontsize=28, color='k') string1 = r'$EI = \displaystyle\frac{1}{N}$'+\ r'$\displaystyle\sum_{i=1}^N D_{KL}[W_{i}^{out}||$'+\ r'$\langle W_{i}^{out} \rangle]$' string2 = r'$EI = \displaystyle\frac{1}{5}$'+\ r'$\hspace{0.5cm} [0.592 + 1.061 + 1.385 + 1.737 + 1.016]$' string3 = r'$EI = 1.158 \hspace{0.5cm}$' r'$\rm bits$' ax05.text(-.02, 0.590, string1, ha='left', rotation=0, wrap=False, size=28) ax05.text(-.02, 0.250, string2, ha='left', rotation=0, wrap=False, size=28) ax05.text(-.02, -0.01, string3, ha='left', rotation=0, wrap=False, size=28) ax05.axis('off') if save: plt.savefig(where_to_save_pngs+\ "Example4_ExampleCalculation.png", bbox_inches='tight', dpi=425) plt.savefig(where_to_save_pdfs+\ "Example4_ExampleCalculation.pdf", bbox_inches='tight') plt.show() ``` Each node's contribution to the $EI$ ($EI_i$) is the KL divergence of its $W^{out}_{i}$ vector from the network's $\langle W^{out}_{i}\rangle$, known as the *effect information*. $$ EI = \dfrac{1}{N} \displaystyle\sum_{i=1}^N \text{D}_{_{KL}}[W^{out}_{i} || \langle W^{out}_{i} \rangle] $$ where $EI$ is the average of the *effect information*, $EI_i$, of each node. 
This is equivalent to our derivation of $EI$ from first principles above since $$ \begin{align} EI &= \dfrac{1}{N} \displaystyle\sum_{i=1}^N \text{D}_{_{KL}}[W^{out}_{i} || {\langle W^{out}_{i}\rangle}]\\ &= \dfrac{1}{N} \displaystyle\sum_{i=1}^{N} \displaystyle\sum_{j=1}^{N} w_{ij}\log_2\bigg(\dfrac{w_{ij}}{W_{j}}\bigg)\\ &= \dfrac{1}{N} \displaystyle\sum_{i=1}^{N}\bigg( \displaystyle\sum_{j=1}^{N} w_{ij}\log_2(w_{ij}) - \sum_{j=1}^{N} w_{ij}\log_2(W_{j})\bigg)\\ &= \dfrac{1}{N} \displaystyle\sum_{i=1}^{N} \displaystyle\sum_{j=1}^{N} w_{ij}\log_2\big(w_{ij}\big) - \dfrac{1}{N} \displaystyle\sum_{i=1}^{N} \displaystyle\sum_{j=1}^{N} w_{ij}\log_2\big(W_{j}\big) \end{align} $$ - Note that for a given node, $v_i$, the term in the first summation above, $\sum_{j=1}^{N} w_{ij}\log_2\big(w_{ij}\big)$, is equivalent to the negative entropy of the out-weights from $v_i$, $-H(W_i^{out})$. Also note that $W_j$, the *j*th element in the $\langle W^{out}_{i}\rangle$ vector, is the normalized sum of the incoming weights to $v_j$ from its neighbors, $v_i$, such that $W_j=\frac{1}{N} \sum_{i=1}^N w_{ij}$. We substitute these two terms into the equation above such that: $$ EI = \dfrac{1}{N} \sum_{i=1}^{N}-H(W_i^{out}) - \sum_{j=1}^{N} W_j\log_2\big(W_{j}\big) $$ This is equivalent to the formulation of $EI$ above, since $H(\langle W^{out}_{i}\rangle) = -\sum_{j=1}^{N} W_j\log_2(W_{j})$: $$ EI = H(\langle W^{out}_{i}\rangle) -\langle H(W_i^{out}) \rangle $$ In this figure, we adopt the relative entropy formulation of $EI$ for ease of derivation. 
___________________ ## 1.4 Network motifs and effective information (N = 3) ``` G01 = nx.DiGraph() G02 = nx.DiGraph() G03 = nx.DiGraph() G04 = nx.DiGraph() G05 = nx.DiGraph() G06 = nx.DiGraph() G07 = nx.DiGraph() G08 = nx.DiGraph() G09 = nx.DiGraph() G10 = nx.DiGraph() G11 = nx.DiGraph() G12 = nx.DiGraph() G13 = nx.DiGraph() G01.add_nodes_from([0,1,2]) G02.add_nodes_from([0,1,2]) G03.add_nodes_from([0,1,2]) G04.add_nodes_from([0,1,2]) G05.add_nodes_from([0,1,2]) G06.add_nodes_from([0,1,2]) G07.add_nodes_from([0,1,2]) G08.add_nodes_from([0,1,2]) G09.add_nodes_from([0,1,2]) G10.add_nodes_from([0,1,2]) G11.add_nodes_from([0,1,2]) G12.add_nodes_from([0,1,2]) G13.add_nodes_from([0,1,2]) G01.add_edges_from([(0,1),(0,2)]) G02.add_edges_from([(0,1),(2,0)]) G03.add_edges_from([(0,1),(0,2),(2,0)]) G04.add_edges_from([(1,0),(2,0)]) G05.add_edges_from([(1,0),(1,2),(0,2)]) # e. coli G06.add_edges_from([(1,0),(1,2),(0,2),(0,1)]) G07.add_edges_from([(1,0),(0,2),(2,0)]) G08.add_edges_from([(1,0),(0,1),(0,2),(2,0)]) G09.add_edges_from([(1,0),(0,2),(2,1)]) G10.add_edges_from([(1,0),(2,0),(1,2),(0,1)]) G11.add_edges_from([(1,0),(2,0),(2,1),(0,1)]) G12.add_edges_from([(0,1),(1,0),(1,2),(2,1),(0,2)]) G13.add_edges_from([(0,1),(1,0),(1,2),(2,1),(0,2),(2,0)]) motif_dict = {"Motif 01": {"G":G01, "edges":str(list(G01.edges())), "EI":effective_information(G01)}, "Motif 02": {"G":G02, "edges":str(list(G02.edges())), "EI":effective_information(G02)}, "Motif 03": {"G":G03, "edges":str(list(G03.edges())), "EI":effective_information(G03)}, "Motif 04": {"G":G04, "edges":str(list(G04.edges())), "EI":effective_information(G04)}, "Motif 05": {"G":G05, "edges":str(list(G05.edges())), "EI":effective_information(G05)}, "Motif 06": {"G":G06, "edges":str(list(G06.edges())), "EI":effective_information(G06)}, "Motif 07": {"G":G07, "edges":str(list(G07.edges())), "EI":effective_information(G07)}, "Motif 08": {"G":G08, "edges":str(list(G08.edges())), "EI":effective_information(G08)}, "Motif 09": {"G":G09, 
"edges":str(list(G09.edges())), "EI":effective_information(G09)}, "Motif 10": {"G":G10, "edges":str(list(G10.edges())), "EI":effective_information(G10)}, "Motif 11": {"G":G11, "edges":str(list(G11.edges())), "EI":effective_information(G11)}, "Motif 12": {"G":G12, "edges":str(list(G12.edges())), "EI":effective_information(G12)}, "Motif 13": {"G":G13, "edges":str(list(G13.edges())), "EI":effective_information(G13)}} ei_heights = np.array([list(motif_dict.values())[i]['EI'] for i in range(len(list(motif_dict.values())))]) + 0.005 ei_bars = np.array(range(len(list(motif_dict.values())))) colors = ["#486164","#9094c9","#ab4e53","#fa8d11","#74d76c", "#bc7dc6","#db453b","#cad24b","#8f52d2","#00aaff", "#c2843a","#4f5435","#d05185"] bar_labels = list(motif_dict.keys()) import numpy as np import networkx as nx from matplotlib import gridspec colors = ["#45af9c","#5b91cb","#9f8448","#bf6d8c","#9876c0","#bb6eac","#5ea05c", "#cf5c57","#c24864","#c69932","#d3468f","#ce5c2f","#6b6cd9","#78b43d","#ba58c2"] i = 10 ns = 250 ew = 3.5 nc = 'w' ec = '#333333' oc = '#e4c600' nc_o = '#333333' fig, ax = plt.subplots(1,1,figsize=(17,8)) plt.subplots_adjust(wspace=0.10, hspace=0.1) plt.rc('axes', axisbelow=True) plt.rc('axes', linewidth=1.5) gs = gridspec.GridSpec(2, len(ei_bars), height_ratios=[7,1.7]) ax0 = plt.subplot(gs[0, :]) cols_i = 'grey' ax0.bar(ei_bars, ei_heights, color=colors, width=0.75, edgecolor='#333333', linewidth=3, alpha=1) ax0.set_xlim(min(ei_bars)-0.5,max(ei_bars)+0.5) ax0.set_xticks(ei_bars) ax0.set_xticklabels([""]*13) ax0.set_yticklabels(np.round(np.linspace(0,1.6,num=9),2), size=18) ax0.set_ylim(0, max(ei_heights)+0.04) ax0.set_xlim(-0.5,12.5) ax0.set_ylabel("$EI$", size=28) ax0.grid(linestyle='-', color='#999999', linewidth=2.5, alpha=0.35) for q, Q in enumerate(list(motif_dict.values())): g = Q['G'] ax0 = plt.subplot(gs[-1, q]) pos = nx.circular_layout(g) nx.draw_networkx_nodes(g, pos, node_size=ns, node_color=colors[q], linewidths=3, edgecolors="#333333", ax=ax0) 
nx.draw_networkx_edges(g, pos, width=ew*0.9, edge_color="#3F3F3F",#colors[q], arrowsize=19, alpha=1, ax=ax0) ax0.set_axis_off() posy = np.array(list(zip(*list(pos.values())))[1]) posx = np.array(list(zip(*list(pos.values())))[0]) ax0.set_ylim(min(posy)*1.35, max(posy)*1.5) ax0.set_xlim(min(posx)*1.69, max(posx)*1.6) title = list(motif_dict.keys())[q] ax0.set_title(title, fontsize=16, pad=-0.35) if save: plt.savefig(where_to_save_pngs+"EffectiveInformation_NetworkMotifs.png", bbox_inches='tight', dpi=425) plt.savefig(where_to_save_pdfs+"EffectiveInformation_NetworkMotifs.pdf", bbox_inches='tight', dpi=425) plt.show() ``` ## End of Chapter 01. In [Chapter 02](https://nbviewer.jupyter.org/github/jkbren/einet/blob/master/code/Chapter%2002%20-%20Network%20Size%20and%20Effective%20Information.ipynb), we will look at the $EI$ of common networks. _______________ ### References: - __[Hoel, E. P. (2017). When the Map Is Better Than the Territory. Entropy, 19(5), 188. doi: 10.3390/e19050188](http://www.mdpi.com/1099-4300/19/5/188)__ - __[Hoel, E. P., Albantakis, L., & Tononi, G. (2013). Quantifying causal emergence shows that macro can beat micro. Proceedings of the National Academy of Sciences, 110(49), 19790–5. doi: 10.1073/pnas.1314922110](http://www.pnas.org/content/110/49/19790)__ - __[Tononi, G. (2001). Information measures for conscious experience. Archives Italiennes de Biologie, 139(4), 367–371. doi: 10.4449/aib.v139i4.51](https://www.ncbi.nlm.nih.gov/pubmed/11603079)__ ______________________
<h2>Construct keyfiles from project directory containing a Base FVS Rx template.</h2> ``` import os import glob from jinja2 import Template import pandas as pd import random import psycopg2 ``` Create a jinja2 template from a Base_Rx.key file. ``` # read in the base_rx keyfile template using jinja2 templating with open(os.path.join('Rx_Template','Base_Rx.key'), 'r') as base_keyfile: template = Template(base_keyfile.read()) print('Found Base_Rx.key and created jinja2 template.') ``` A dictionary for holding the items to insert into an FVS keyfile template using jinja2 templating. ``` inserts = {} ``` Specify the FVS input and output databases for insertion in the jinja2 template ``` inserts['FVSIn'] = 'PNWFIADB_FVSIn' inserts['FVSOut'] = 'PNWFIADB_FVSOut' ``` Read the contents of each rx*.kcp file in the Rxs directory and store them as values in an `rxs_dict` dictionary. ``` rxs_dict = {} # a dictionary storing the silvicultural keywords for each rx rx_kcps = glob.glob(os.path.join('Rx_Template', 'Rxs', '*.kcp')) if len(rx_kcps) > 0: print('Found the following kcp files in the Rxs subdirectory:') for kcp in rx_kcps: fname = os.path.split(kcp)[-1] print(fname, end='...') # read the kcp file key = fname.split('/')[-1].split('.')[0] # key for item in inserts dictionary with open(kcp, 'r') as item: value = item.read() # add the contents of the kcp file to the inserts dictionary rxs_dict[key] = value print(' added to template.') else: raise FileNotFoundError('No kcp files found in the Rx_Template directory.') ``` A function to use for creating keyfiles. ``` def create_keyfile(standID, variant, rx, offset): ''' Creates a single FVS keyfile based on the jinja2 template. 
''' inserts['ID'] = standID inserts['rx'] = rxs_dict[rx] # FVS slows down outputting to large databases, so we'll divide output among 10 databases #inserts['db_num'] = random.randint(1,10) # add a random number, 1-10 for a output database suffix fname = 'fvs'+variant+'_stand'+str(standID)+'_'+rx+'_off'+str(offset)+'.key' path = os.path.abspath('keyfiles_to_run') if not os.path.exists(path): os.makedirs(path, exist_ok=True) with open(os.path.join('keyfiles_to_run',fname),'w') as keyfile: keyfile.write(template.render(**inserts)) def create_keyfiles(stands, variants, rxs, offsets=[0], verbose=False): ''' Creates FVS keyfiles for all stands using Base_Rx.key as a template. Arguments: stands: List of standIDs that keyfiles will be created for. Required. variants: List of 2-letter codes of FVS variant for each stand. Required. rxs: a list of rx names to build keyfiles for. Required. offsets: optional, a list of offsets, used in FVS to delay implementation of a management regime. e.g., [0, 5, 10]. Defaults to a list with no offsets (i.e., [0]). ''' stands_processed = 0 keyfiles_written = 0 num_stands = len(stands) num_keys = num_stands * len(rxs) * len(offsets) print('Creating {:,} keyfiles for {:,} stands.'.format(num_keys, num_stands)) if not verbose: print('Stands processed', end=": ") for i in range(len(stands)): if verbose: print('Creating keyfiles for stand', stands[i], end='... ') stand_keyfiles = 0 for rx in rxs: for offset in offsets: # run the create_keyfile function create_keyfile(standID=stands.iloc[i], variant=variants.iloc[i], rx=rx, offset=offset) keyfiles_written += 1 stand_keyfiles += 1 stands_processed += 1 if verbose: print(stand_keyfiles, 'keyfiles written.') else: if stands_processed % 100 == 0: print('{:,}'.format(stands_processed), end='... ') print('Done. Created', keyfiles_written, 'keyfiles for', stands_processed, 'stands.') ``` Identify stands to run. 
``` pg_engine='postgresql://postgres@localhost:5432/PNWFIADB_FVSIn' # only grab the stands that have DBH increment recorded # and which have all needed covariates as non nulls SQL = ''' SELECT fvs_standinit.stand_id, variant FROM fvs_standinit, fvs_treeinit WHERE fvs_standinit.stand_id = fvs_treeinit.stand_id AND aspect IS NOT NULL AND slope IS NOT NULL and dg IS NOT NULL AND crratio IS NOT NULL AND dbh IS NOT NULL AND species = 202 AND inv_year <= 2015 GROUP BY fvs_standinit.stand_id, variant ''' # read in the stands from the FVSIn database stands = pd.read_sql(sql=SQL, con=pg_engine) #stands = stands.loc[stands.CountPlot == 'N'] stands.head() ``` Create the keyfiles! ``` %%time rxs_to_run = ['GrowOnly'] #stands = stands.sample(n=200) create_keyfiles(stands=stands.stand_id, variants=stands.variant, rxs=rxs_to_run, verbose=False) ```
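As a compact illustration of the templating pattern used above, here is a toy render with a hypothetical, heavily abbreviated keyfile skeleton. It is NOT the real `Base_Rx.key` (which contains many more FVS keywords); the stand id is made up, but the field names `ID`, `rx`, and `FVSIn` match the `inserts` dictionary used in this notebook.

```python
from jinja2 import Template

# Hypothetical keyfile skeleton -- just enough structure to show how
# template.render(**inserts) substitutes the {{ placeholders }}.
base_key = """StdIdent
{{ ID }}
DATABASE
DSNIn
{{ FVSIn }}.db
End
{{ rx }}
Process
Stop"""

template = Template(base_key)
inserts = {"ID": 100437,  # hypothetical stand id
           "FVSIn": "PNWFIADB_FVSIn",
           "rx": "* GrowOnly: no management keywords"}

keyfile_text = template.render(**inserts)
print(keyfile_text)
```

Every `{{ … }}` placeholder is replaced by the matching key in `inserts`, which is exactly what `create_keyfile` does before writing each `.key` file to `keyfiles_to_run`.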
# About this notebook TBD... # Data Loading ``` import os import pandas as pd import seaborn as sns from matplotlib import pyplot as plt os.listdir("../input/cassava-leaf-disease-classification") # train = pd.read_csv("../input/cassava-leaf-disease-classification/train.csv") train = pd.read_csv("../input/cassava-leaf-disease-merged/merged.csv") # train = pd.read_csv("../input/cassava-leaf-disease-merged/oversample-0124.csv") # label 0124 x3 test = pd.read_csv("../input/cassava-leaf-disease-classification/sample_submission.csv") label_map = pd.read_json("../input/cassava-leaf-disease-classification/label_num_to_disease_map.json", orient="index") display(train.head()) display(test.head()) display(label_map) sns.distplot(train["label"], kde=False) ``` # Directory settings ``` # ==================================================== # Directory settings # ==================================================== import os OUTPUT_DIR = "./" if not os.path.exists(OUTPUT_DIR): os.makedirs(OUTPUT_DIR) # TRAIN_PATH = "../input/cassava-leaf-disease-classification/train_images" TRAIN_PATH = "../input/cassava-leaf-disease-merged/train" TEST_PATH = "../input/cassava-leaf-disease-classification/test_images" ``` # CFG ``` # ==================================================== # CFG # ==================================================== class CFG: debug = False apex = False print_freq = 100 num_workers = 4 model_name = "vit_base_patch16_384" # resnext50_32x4d, seresnext50_32x4d, tf_efficientnet_b3_ns, vit_base_patch16_384, deit_base_patch16_384 batch_size = 8 gradient_accumulation_steps = 4 size = 384 if "it_base_" in model_name else 512 n_fold = 5 trn_fold = [0, 1, 2, 3, 4] criterion = "BiTemperedLoss" # ['CrossEntropyLoss', 'BiTemperedLoss'] btl_t1 = 0.3 # Bi-Tempered Logistic Loss btl_t2 = 1.0 label_smoothing = 0.2 scheduler = "CosineAnnealingWarmRestarts" # ['ReduceLROnPlateau', 'CosineAnnealingLR', 'CosineAnnealingWarmRestarts', 'CosineAnnealingWarmupRestarts'] 
scheduler_batch_update = True epochs = 10 # factor = 0.2 # ReduceLROnPlateau # patience = 4 # ReduceLROnPlateau # eps = 1e-6 # ReduceLROnPlateau # T_max = 10 # CosineAnnealingLR T_0 = ( len(train) // n_fold * (n_fold - 1) // batch_size // gradient_accumulation_steps * epochs + 5 ) # CosineAnnealingWarmRestarts # first_cycle_steps = ( # len(train) // n_fold * (n_fold - 1) // batch_size // gradient_accumulation_steps * epochs + 5 # ) # CosineAnnealingWarmupRestarts for batch update # warmup_steps = first_cycle_steps // 10 # CosineAnnealingWarmupRestarts # gamma = 0.8 # CosineAnnealingWarmupRestarts lr = 1e-4 min_lr = 2e-6 weight_decay = 1e-6 max_grad_norm = 1000 seed = 6345 target_size = 5 target_col = "label" train = True inference = False if CFG.debug: CFG.epochs = 1 train = train.sample(n=1000, random_state=CFG.seed).reset_index(drop=True) ``` # Library ``` # ==================================================== # Library # ==================================================== import sys sys.path.append("../input/pytorch-image-models/pytorch-image-models-master") sys.path.append("../input/pytorchcosineannealingwithwarmup") sys.path.append("../input/bitemperedlogloss/") sys.path.append("../input/image-fmix/FMix-master") import math import os import random import shutil import time import warnings from collections import Counter, defaultdict from contextlib import contextmanager from functools import partial from pathlib import Path import bi_tempered_loss_pytorch as btl import cv2 import numpy as np import pandas as pd import scipy as sp import timm import torch import torch.nn as nn import torch.nn.functional as F import torchvision.models as models from albumentations import ( CenterCrop, CoarseDropout, Compose, Cutout, HorizontalFlip, HueSaturationValue, IAAAdditiveGaussianNoise, ImageOnlyTransform, Normalize, OneOf, RandomBrightness, RandomBrightnessContrast, RandomContrast, RandomCrop, RandomResizedCrop, Resize, Rotate, ShiftScaleRotate, Transpose, VerticalFlip, 
) from albumentations.pytorch import ToTensorV2 from cosine_annearing_with_warmup import CosineAnnealingWarmupRestarts from fmix import sample_mask from PIL import Image from sklearn import preprocessing from sklearn.metrics import accuracy_score from sklearn.model_selection import StratifiedKFold from torch.nn.parameter import Parameter from torch.optim import SGD, Adam from torch.optim.lr_scheduler import CosineAnnealingLR, CosineAnnealingWarmRestarts, ReduceLROnPlateau from torch.utils.data import DataLoader, Dataset from tqdm.auto import tqdm warnings.filterwarnings("ignore") if CFG.apex: from apex import amp device = torch.device("cuda" if torch.cuda.is_available() else "cpu") ``` # Utils ``` # ==================================================== # Utils # ==================================================== def get_score(y_true, y_pred): return accuracy_score(y_true, y_pred) @contextmanager def timer(name): t0 = time.time() LOGGER.info(f"[{name}] start") yield LOGGER.info(f"[{name}] done in {time.time() - t0:.0f} s.") def init_logger(log_file=OUTPUT_DIR + "train.log"): from logging import INFO, FileHandler, Formatter, StreamHandler, getLogger logger = getLogger(__name__) logger.setLevel(INFO) handler1 = StreamHandler() handler1.setFormatter(Formatter("%(message)s")) handler2 = FileHandler(filename=log_file) handler2.setFormatter(Formatter("%(message)s")) logger.addHandler(handler1) logger.addHandler(handler2) return logger LOGGER = init_logger() def seed_torch(seed=42): random.seed(seed) os.environ["PYTHONHASHSEED"] = str(seed) np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed(seed) torch.backends.cudnn.deterministic = True seed_torch(seed=CFG.seed) ``` # CV split ``` folds = train.copy() Fold = StratifiedKFold(n_splits=CFG.n_fold, shuffle=True, random_state=CFG.seed) for n, (train_index, val_index) in enumerate(Fold.split(folds, folds[CFG.target_col])): folds.loc[val_index, "fold"] = int(n) folds["fold"] = folds["fold"].astype(int) 
print(folds.groupby(["fold", CFG.target_col]).size()) ``` # Dataset ``` # ==================================================== # Dataset # ==================================================== class TrainDataset(Dataset): def __init__(self, df, transform=None): self.df = df self.file_names = df["image_id"].values self.labels = df["label"].values self.transform = transform def __len__(self): return len(self.df) def __getitem__(self, idx): file_name = self.file_names[idx] file_path = f"{TRAIN_PATH}/{file_name}" image = cv2.imread(file_path) image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) if self.transform: augmented = self.transform(image=image) image = augmented["image"] label = torch.tensor(self.labels[idx]).long() return image, label class TestDataset(Dataset): def __init__(self, df, transform=None): self.df = df self.file_names = df["image_id"].values self.transform = transform def __len__(self): return len(self.df) def __getitem__(self, idx): file_name = self.file_names[idx] file_path = f"{TEST_PATH}/{file_name}" image = cv2.imread(file_path) image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) if self.transform: augmented = self.transform(image=image) image = augmented["image"] return image train_dataset = TrainDataset(train, transform=None) for i in range(1): image, label = train_dataset[i] plt.imshow(image) plt.title(f"label: {label}") plt.show() ``` # Transforms ``` # ==================================================== # Transforms # ==================================================== def get_transforms(*, data): if data == "train": return Compose( [ # Resize(CFG.size, CFG.size), RandomResizedCrop(CFG.size, CFG.size), Transpose(p=0.5), HorizontalFlip(p=0.5), VerticalFlip(p=0.5), ShiftScaleRotate(p=0.5), HueSaturationValue(hue_shift_limit=0.2, sat_shift_limit=0.2, val_shift_limit=0.2, p=0.5), RandomBrightnessContrast(brightness_limit=(-0.1, 0.1), contrast_limit=(-0.1, 0.1), p=0.5), CoarseDropout(p=0.5), Cutout(p=0.5), Normalize( mean=[0.485, 0.456, 0.406], 
std=[0.229, 0.224, 0.225], ), ToTensorV2(), ] ) elif data == "valid": return Compose( [ Resize(CFG.size, CFG.size), CenterCrop(CFG.size, CFG.size), Normalize( mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], ), ToTensorV2(), ] ) train_dataset = TrainDataset(train, transform=get_transforms(data="train")) for i in range(1): image, label = train_dataset[i] plt.imshow(image[0]) plt.title(f"label: {label}") plt.show() ``` # CutMix / FMix ``` def rand_bbox(size, lam): W = size[2] H = size[3] cut_rat = np.sqrt(1.0 - lam) cut_w = np.int(W * cut_rat) cut_h = np.int(H * cut_rat) # uniform cx = np.random.randint(W) cy = np.random.randint(H) bbx1 = np.clip(cx - cut_w // 2, 0, W) bby1 = np.clip(cy - cut_h // 2, 0, H) bbx2 = np.clip(cx + cut_w // 2, 0, W) bby2 = np.clip(cy + cut_h // 2, 0, H) return bbx1, bby1, bbx2, bby2 def cutmix(data, target, alpha): indices = torch.randperm(data.size(0)) shuffled_data = data[indices] shuffled_target = target[indices] lam = np.clip(np.random.beta(alpha, alpha), 0.3, 0.4) bbx1, bby1, bbx2, bby2 = rand_bbox(data.size(), lam) new_data = data.clone() new_data[:, :, bby1:bby2, bbx1:bbx2] = data[indices, :, bby1:bby2, bbx1:bbx2] # adjust lambda to exactly match pixel ratio lam = 1 - ((bbx2 - bbx1) * (bby2 - bby1) / (data.size()[-1] * data.size()[-2])) targets = (target, shuffled_target, lam) return new_data, targets def fmix(data, targets, alpha, decay_power, shape, max_soft=0.0, reformulate=False): lam, mask = sample_mask(alpha, decay_power, shape, max_soft, reformulate) if CFG.apex: # mask = torch.tensor(mask, device=device).float() mask = mask.astype(np.float32) indices = torch.randperm(data.size(0)) shuffled_data = data[indices] shuffled_targets = targets[indices] x1 = torch.from_numpy(mask).to(device) * data x2 = torch.from_numpy(1 - mask).to(device) * shuffled_data targets = (targets, shuffled_targets, lam) return (x1 + x2), targets ``` # MixUp ``` # https://github.com/yuhao318/mwh/blob/e9e2da8fc6/utils.py def mixup(x, y, alpha=1.0, 
use_cuda=True): """Compute the mixup data. Return mixed inputs, pairs of targets, and lambda""" if alpha > 0.0: lam = np.random.beta(alpha, alpha) lam = max(lam, 1 - lam) # lam = min(lam, 1-lam) else: lam = 1.0 batch_size = x.size()[0] if use_cuda: index = torch.randperm(batch_size).cuda() else: index = torch.randperm(batch_size) ## SYM # mixed_x = lam * x + (1 - lam) * x[index,:] # mixed_y = (1 - lam) * x + lam * x[index,:] # mixed_image = torch.cat([mixed_x,mixed_y], 0) # y_a, y_b = y, y[index] # mixed_label = torch.cat([y_a,y_b], 0) ## Reduce batch size # new_batch_size = batch_size // 2 # x_i = x[ : new_batch_size] # x_j = x[new_batch_size : ] # y_a = y[ : new_batch_size] # y_b = y[new_batch_size : ] # mixed_x = lam * x_i + (1 - lam) * x_j ## NO SYM mixed_x = lam * x + (1 - lam) * x[index, :] y_a, y_b = y, y[index] ## Only Alpha # mixed_x = 0.5 * x + (1 - 0.5) * x[index,:] # mixed_image = mixed_x # y_a, y_b = y, y[index] # ind_label = torch.randint_like(y, 0,2) # mixed_label = ind_label * y_a + (1-ind_label) * y_b ## Reduce batch size and SYM # new_batch_size = batch_size // 2 # x_i = x[ : new_batch_size] # x_j = x[new_batch_size : ] # y_a = y[ : new_batch_size] # y_b = y[new_batch_size : ] # mixed_x = lam * x_i + (1 - lam) * x_j # mixed_y = (1 - lam) * x_i + lam * x_j # mixed_x = torch.cat([mixed_x,mixed_y], 0) # y_b = torch.cat([y_b,y_a], 0) # y_a = y # return mixed_image, mixed_label, lam return mixed_x, (y_a, y_b, lam) ``` # MODEL ``` # ==================================================== # MODEL # ==================================================== class CassvaImgClassifier(nn.Module): def __init__(self, model_name="resnext50_32x4d", pretrained=False): super().__init__() self.model_name = model_name if model_name.startswith("deit_"): self.model = torch.hub.load("facebookresearch/deit:main", model_name, pretrained=True) if model_name == "deit_base_patch16_384": n_features = self.model.head.in_features self.model.head = nn.Linear(n_features, 
CFG.target_size)
        else:
            self.model = timm.create_model(model_name, pretrained=pretrained)
            if "resnext50_32x4d" in model_name:
                n_features = self.model.fc.in_features
                self.model.fc = nn.Linear(n_features, CFG.target_size)
            elif model_name.startswith("tf_efficientnet"):
                n_features = self.model.classifier.in_features
                self.model.classifier = nn.Linear(n_features, CFG.target_size)
            elif model_name.startswith("vit_"):
                n_features = self.model.head.in_features
                self.model.head = nn.Linear(n_features, CFG.target_size)

    def forward(self, x):
        x = self.model(x)
        return x


def freeze_batch_normalization(model):
    if CFG.model_name.startswith("tf_efficientnet_"):
        for name1, child1 in model.named_children():
            for name2, child2 in child1.named_children():
                # print(f"===== {name2} =====")
                if name2.startswith("bn"):
                    for param in child2.parameters():
                        param.requires_grad = False
                        # print(param.requires_grad)
                for child3 in child2.children():
                    if isinstance(child3, nn.modules.container.Sequential):
                        for child4 in child3.children():
                            for child5 in child4.children():
                                if isinstance(child5, nn.BatchNorm2d):
                                    # print(child5)
                                    for param in child5.parameters():
                                        param.requires_grad = False
                                        # print(param.requires_grad)
    if CFG.model_name.startswith("vit_"):
        try:
            for m in model.modules():
                if isinstance(m, nn.BatchNorm2d) or isinstance(m, nn.LayerNorm):
                    m.eval()
        except ValueError:
            print("error with batchnorm2d or layernorm")
    return


model = CassvaImgClassifier(model_name=CFG.model_name, pretrained=False)
freeze_batch_normalization(model)
print(model)

train_dataset = TrainDataset(train, transform=get_transforms(data="train"))
train_loader = DataLoader(train_dataset, batch_size=4, shuffle=True, num_workers=4, pin_memory=True, drop_last=True)
for image, label in train_loader:
    output = model(image)
    print(output)
    break
```

# Loss functions

```
class BiTemperedLogisticLoss(nn.Module):
    def __init__(self, t1, t2, smoothing=0.0):
        super(BiTemperedLogisticLoss, self).__init__()
        self.t1 = t1
        self.t2 = t2
        self.smoothing =
smoothing def forward(self, logit_label, truth_label): loss_label = btl.bi_tempered_logistic_loss( logit_label, truth_label, t1=self.t1, t2=self.t2, label_smoothing=self.smoothing, reduction="none" ) loss_label = loss_label.mean() return loss_label ``` # Helper functions ``` # ==================================================== # Helper functions # ==================================================== class AverageMeter(object): """Computes and stores the average and current value""" def __init__(self): self.reset() def reset(self): self.val = 0 self.avg = 0 self.sum = 0 self.count = 0 def update(self, val, n=1): self.val = val self.sum += val * n self.count += n self.avg = self.sum / self.count def asMinutes(s): m = math.floor(s / 60) s -= m * 60 return "%dm %ds" % (m, s) def timeSince(since, percent): now = time.time() s = now - since es = s / (percent) rs = es - s return "%s (remain %s)" % (asMinutes(s), asMinutes(rs)) def train_fn(train_loader, model, criterion, optimizer, epoch, scheduler, device, scheduler_batch_update=True): batch_time = AverageMeter() data_time = AverageMeter() losses = AverageMeter() scores = AverageMeter() # switch to train mode model.train() start = end = time.time() global_step = 0 for step, (images, labels) in enumerate(train_loader): # measure data loading time data_time.update(time.time() - end) images = images.to(device) labels = labels.to(device) batch_size = labels.size(0) # CutMix, FMix if epoch <= 1 or epoch >= CFG.epochs - 1: mix_decision = 0.75 # Disable CutMix, FMix for final epoch else: mix_decision = np.random.rand() if epoch >= CFG.epochs - 4: mix_decision *= 2 # Reduce probability if mix_decision < 0.25: images, labels = cutmix(images, labels, 1.0) elif mix_decision >= 0.25 and mix_decision < 0.5: images, labels = fmix(images, labels, alpha=1.0, decay_power=5.0, shape=(CFG.size, CFG.size)) elif mix_decision >= 0.5 and mix_decision < 0.75: images, labels = mixup(images, labels, alpha=0.5) y_preds = model(images.float()) if 
mix_decision < 0.75: loss = criterion(y_preds, labels[0]) * labels[2] + criterion(y_preds, labels[1]) * (1.0 - labels[2]) else: loss = criterion(y_preds, labels) # record loss losses.update(loss.item(), batch_size) if CFG.gradient_accumulation_steps > 1: loss = loss / CFG.gradient_accumulation_steps if CFG.apex: with amp.scale_loss(loss, optimizer) as scaled_loss: scaled_loss.backward() else: loss.backward() grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), CFG.max_grad_norm) if (step + 1) % CFG.gradient_accumulation_steps == 0: optimizer.step() optimizer.zero_grad() if CFG.scheduler_batch_update: scheduler.step() global_step += 1 # measure elapsed time batch_time.update(time.time() - end) end = time.time() if step % CFG.print_freq == 0 or step == (len(train_loader) - 1): print( "Epoch: [{0}][{1}/{2}] " # "Data {data_time.val:.3f} ({data_time.avg:.3f}) " # "Batch {batch_time.val:.3f} ({batch_time.avg:.3f}) " "Elapsed {remain:s} " "Loss: {loss.val:.4f}({loss.avg:.4f}) " "Grad: {grad_norm:.4f} " "LR: {lr:.6f} ".format( epoch + 1, step, len(train_loader), # batch_time=batch_time, # data_time=data_time, loss=losses, remain=timeSince(start, float(step + 1) / len(train_loader)), grad_norm=grad_norm, lr=scheduler.get_lr()[0], ) ) return losses.avg def valid_fn(valid_loader, model, criterion, device): batch_time = AverageMeter() data_time = AverageMeter() losses = AverageMeter() scores = AverageMeter() # switch to evaluation mode model.eval() preds = [] start = end = time.time() for step, (images, labels) in enumerate(valid_loader): # measure data loading time data_time.update(time.time() - end) images = images.to(device) labels = labels.to(device) batch_size = labels.size(0) # compute loss with torch.no_grad(): y_preds = model(images) loss = criterion(y_preds, labels) losses.update(loss.item(), batch_size) # record accuracy preds.append(y_preds.softmax(1).to("cpu").numpy()) if CFG.gradient_accumulation_steps > 1: loss = loss / CFG.gradient_accumulation_steps 
# measure elapsed time batch_time.update(time.time() - end) end = time.time() if step % CFG.print_freq == 0 or step == (len(valid_loader) - 1): print( "EVAL: [{0}/{1}] " # "Data {data_time.val:.3f} ({data_time.avg:.3f}) " # "Batch {batch_time.val:.3f} ({batch_time.avg:.3f}) " "Elapsed {remain:s} " "Loss: {loss.val:.4f}({loss.avg:.4f}) ".format( step, len(valid_loader), # batch_time=batch_time, # data_time=data_time, loss=losses, remain=timeSince(start, float(step + 1) / len(valid_loader)), ) ) predictions = np.concatenate(preds) return losses.avg, predictions def inference(model, states, test_loader, device): model.to(device) tk0 = tqdm(enumerate(test_loader), total=len(test_loader)) probs = [] for i, (images) in tk0: images = images.to(device) avg_preds = [] for state in states: model.load_state_dict(state["model"]) model.eval() with torch.no_grad(): y_preds = model(images) avg_preds.append(y_preds.softmax(1).to("cpu").numpy()) avg_preds = np.mean(avg_preds, axis=0) probs.append(avg_preds) probs = np.concatenate(probs) return probs ``` # Train loop ``` # ==================================================== # Train loop # ==================================================== def train_loop(folds, fold): LOGGER.info(f"========== fold: {fold} training ==========") # ==================================================== # loader # ==================================================== trn_idx = folds[folds["fold"] != fold].index val_idx = folds[folds["fold"] == fold].index train_folds = folds.loc[trn_idx].reset_index(drop=True) valid_folds = folds.loc[val_idx].reset_index(drop=True) train_dataset = TrainDataset(train_folds, transform=get_transforms(data="train")) train_dataset_no_aug = TrainDataset(train_folds, transform=get_transforms(data="valid")) valid_dataset = TrainDataset(valid_folds, transform=get_transforms(data="valid")) train_loader = DataLoader( train_dataset, batch_size=CFG.batch_size, shuffle=True, num_workers=CFG.num_workers, pin_memory=True, 
drop_last=True, ) train_loader_no_aug = DataLoader( train_dataset_no_aug, batch_size=CFG.batch_size, shuffle=True, num_workers=CFG.num_workers, pin_memory=True, drop_last=True, ) valid_loader = DataLoader( valid_dataset, batch_size=CFG.batch_size, shuffle=False, num_workers=CFG.num_workers, pin_memory=True, drop_last=False, ) # ==================================================== # scheduler # ==================================================== def get_scheduler(optimizer): if CFG.scheduler == "ReduceLROnPlateau": scheduler = ReduceLROnPlateau( optimizer, mode="min", factor=CFG.factor, patience=CFG.patience, verbose=True, eps=CFG.eps ) elif CFG.scheduler == "CosineAnnealingLR": scheduler = CosineAnnealingLR(optimizer, T_max=CFG.T_max, eta_min=CFG.min_lr, last_epoch=-1) elif CFG.scheduler == "CosineAnnealingWarmRestarts": scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=CFG.T_0, T_mult=1, eta_min=CFG.min_lr, last_epoch=-1) elif CFG.scheduler == "CosineAnnealingWarmupRestarts": scheduler = CosineAnnealingWarmupRestarts( optimizer, first_cycle_steps=CFG.first_cycle_steps, cycle_mult=1.0, max_lr=CFG.lr, min_lr=CFG.min_lr, warmup_steps=CFG.warmup_steps, gamma=CFG.gamma, ) return scheduler # ==================================================== # model & optimizer # ==================================================== model = CassvaImgClassifier(CFG.model_name, pretrained=True) freeze_batch_normalization(model) model.to(device) # Use multi GPU if device == torch.device("cuda") and not CFG.apex: model = torch.nn.DataParallel(model) # make parallel # torch.backends.cudnn.benchmark=True optimizer = Adam(model.parameters(), lr=CFG.lr, weight_decay=CFG.weight_decay, amsgrad=False) scheduler = get_scheduler(optimizer) # ==================================================== # apex # ==================================================== if CFG.apex: model, optimizer = amp.initialize(model, optimizer, opt_level="O1", verbosity=0) # 
==================================================== # Criterion # ==================================================== def get_criterion(): if CFG.criterion == "CrossEntropyLoss": criterion = nn.CrossEntropyLoss() elif CFG.criterion == "BiTemperedLoss": criterion = BiTemperedLogisticLoss(t1=CFG.btl_t1, t2=CFG.btl_t2, smoothing=CFG.label_smoothing) return criterion criterion = get_criterion() # ==================================================== # loop # ==================================================== best_score = 0.0 best_loss = np.inf for epoch in range(CFG.epochs): start_time = time.time() # train if epoch <= 1 or epoch >= CFG.epochs - 1: avg_loss = train_fn( train_loader_no_aug, model, criterion, optimizer, epoch, scheduler, device, CFG.scheduler_batch_update ) else: avg_loss = train_fn( train_loader, model, criterion, optimizer, epoch, scheduler, device, CFG.scheduler_batch_update ) # eval avg_val_loss, preds = valid_fn(valid_loader, model, criterion, device) valid_labels = valid_folds[CFG.target_col].values if not CFG.scheduler_batch_update: if isinstance(scheduler, ReduceLROnPlateau): scheduler.step(avg_val_loss) elif isinstance(scheduler, CosineAnnealingLR): scheduler.step() elif isinstance(scheduler, CosineAnnealingWarmRestarts): scheduler.step() # scoring score = get_score(valid_labels, preds.argmax(1)) elapsed = time.time() - start_time LOGGER.info( f"Epoch {epoch+1} - avg_train_loss: {avg_loss:.4f} avg_val_loss: {avg_val_loss:.4f} time: {elapsed:.0f}s" ) LOGGER.info(f"Epoch {epoch+1} - Accuracy: {score}") if score > best_score: best_score = score LOGGER.info(f"Epoch {epoch+1} - Save Best Score: {best_score:.4f} Model") torch.save( {"model": model.state_dict(), "preds": preds}, OUTPUT_DIR + f"{CFG.model_name}_fold{fold}_best.pth" ) if epoch == CFG.epochs - 1: LOGGER.info(f"Epoch {epoch+1} - Save final model") torch.save( {"model": model.state_dict(), "preds": preds}, OUTPUT_DIR + f"{CFG.model_name}_fold{fold}_final.pth" ) check_point = 
torch.load(OUTPUT_DIR + f"{CFG.model_name}_fold{fold}_best.pth") valid_folds[[str(c) for c in range(5)]] = check_point["preds"] valid_folds["preds"] = check_point["preds"].argmax(1) return valid_folds # ==================================================== # main # ==================================================== def main(): """ Prepare: 1.train 2.test 3.submission 4.folds """ def get_result(result_df): preds = result_df["preds"].values labels = result_df[CFG.target_col].values score = get_score(labels, preds) LOGGER.info(f"Score: {score:<.5f}") if CFG.train: # train oof_df = pd.DataFrame() for fold in range(CFG.n_fold): if fold in CFG.trn_fold: _oof_df = train_loop(folds, fold) oof_df = pd.concat([oof_df, _oof_df]) LOGGER.info(f"========== fold: {fold} result ==========") get_result(_oof_df) # CV result LOGGER.info(f"========== CV ==========") get_result(oof_df) # save result oof_df.to_csv(OUTPUT_DIR + "oof_df.csv", index=False) if CFG.inference: # inference model = CassvaImgClassifier(CFG.model_name, pretrained=False) states = [torch.load(OUTPUT_DIR + f"{CFG.model_name}_fold{fold}_best.pth") for fold in CFG.trn_fold] test_dataset = TestDataset(test, transform=get_transforms(data="valid")) test_loader = DataLoader( test_dataset, batch_size=CFG.batch_size, shuffle=False, num_workers=CFG.num_workers, pin_memory=True ) predictions = inference(model, states, test_loader, device) # submission test["label"] = predictions.argmax(1) test[["image_id", "label"]].to_csv(OUTPUT_DIR + "submission.csv", index=False) if __name__ == "__main__": main() ```
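As a standalone sanity check on the mixing logic above, the MixUp combination can be verified outside the training loop: every mixed pixel should be a convex combination of the two source pixels. This is a minimal NumPy sketch, independent of the torch pipeline (names here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((4, 3, 8, 8))        # toy batch: (N, C, H, W)
lam = 0.7                           # mixing coefficient, as drawn from Beta(alpha, alpha)
index = rng.permutation(len(x))     # shuffled pairing, like torch.randperm

mixed = lam * x + (1 - lam) * x[index]

# Each mixed pixel lies between the two source pixels it was blended from
lo = np.minimum(x, x[index])
hi = np.maximum(x, x[index])
assert np.all(mixed >= lo - 1e-12) and np.all(mixed <= hi + 1e-12)
```

The same bounds check can be applied after CutMix or FMix, since both also produce per-pixel combinations of exactly two source images.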
## Rounding differences in Python, R and Spark

Python, R and Spark have different ways of rounding numbers which end in $.5$; Python and R round to the **nearest even integer** (sometimes called *bankers rounding*), whereas Spark will round **away from zero** (up in the conventional mathematical way for positive numbers, and round down for negative numbers), in the same way as in Excel. This can be confusing when using PySpark and sparklyr if you are used to the behaviour in Python and R.

### Comparison of rounding methods

Create a DataFrame with numbers all ending in `.5`, both positive and negative, using [`spark.range()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.SparkSession.range.html)/[`sdf_seq()`](https://spark.rstudio.com/packages/sparklyr/latest/reference/sdf_seq.html) and then dividing the `id` column by `2`:

```
from pyspark.sql import SparkSession, functions as F
import pandas as pd
import numpy as np

spark = (SparkSession.builder.master("local[2]")
         .appName("rounding")
         .getOrCreate())

sdf = spark.range(-7, 8, 2).select((F.col("id") / 2).alias("half_id"))
sdf.show()
```

```r
library(sparklyr)
library(dplyr)

sc <- sparklyr::spark_connect(
    master = "local[2]",
    app_name = "rounding",
    config = sparklyr::spark_config())

sdf <- sparklyr::sdf_seq(sc, -7, 8, 2) %>%
    sparklyr::mutate(half_id = id / 2) %>%
    sparklyr::select(half_id)

sdf %>%
    sparklyr::collect() %>%
    print()
```

Round using Spark with [`F.round()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.round.html)/[`round()`](https://spark.apache.org/docs/latest/api/sql/index.html#round); this will round away from zero (up for positive numbers and down for negative):

```
sdf = sdf.withColumn("spark_round", F.round("half_id"))
sdf.toPandas()
```

```r
sdf <- sdf %>%
    sparklyr::mutate(spark_round = round(half_id))

sdf %>%
    sparklyr::collect() %>%
    print()
```

Now try using Python/R; this will use the bankers method of rounding:

```
pdf =
sdf.toPandas() pdf["python_round"] = round(pdf["half_id"], 0) pdf ``` ```r tdf <- sdf %>% sparklyr::collect() %>% sparklyr::mutate(r_round = round(half_id)) %>% print() ``` The two methods have returned different results, despite both using functions named `round()`. Just like in Python, pandas and numpy also use bankers rounding: ``` pdf["pd_round"] = pdf["half_id"].round() pdf["np_round"] = np.round(pdf["half_id"]) pdf ``` You can use the Python and R style of bankers rounding in Spark with [`F.bround()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.bround.html)/[`bround()`](https://spark.apache.org/docs/latest/api/sql/index.html#bround): ``` sdf = sdf.withColumn("spark_bround", F.bround("half_id")) sdf.toPandas() ``` ```r sdf <- sdf %>% sparklyr::mutate(spark_bround = bround(half_id)) sdf %>% sparklyr::collect() %>% print() ``` ### Other information on rounding #### UDFs and `spark_apply()` [User Defined Functions](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.udf.html) (UDFs) in Python, and R code ran on the Spark cluster with [`spark_apply()`](https://spark.rstudio.com/packages/sparklyr/latest/reference/spark_apply.html) will use bankers rounding, in common with Python and R. #### Python 2 The rounding method changed to bankers rounding in Python 3. In Python 2, it used the round away from zero method, the same as Spark. It is strongly recommended to use Python 3 for any new code development. Spark 3 has dropped support for Python 2. #### Other common software Both Excel and SPSS Statistics use the Spark method of rounding away from zero. If you are new to coding and are learning Python or R predominately to use Spark, be careful when using regular Python or R functions. 
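The difference between the two conventions can also be demonstrated in plain Python, without a Spark session. The sketch below uses only the standard library; `ROUND_HALF_UP` in the `decimal` module sends ties away from zero, which matches the behaviour of Spark's `round()`:

```python
from decimal import Decimal, ROUND_HALF_UP

values = [-2.5, -1.5, -0.5, 0.5, 1.5, 2.5]

# Built-in round() uses bankers rounding (ties go to the nearest even integer)
bankers = [round(v) for v in values]

# decimal's ROUND_HALF_UP rounds ties away from zero, like Spark's round()
away_from_zero = [
    int(Decimal(str(v)).quantize(Decimal("1"), rounding=ROUND_HALF_UP))
    for v in values
]

print(bankers)          # [-2, -2, 0, 0, 2, 2]
print(away_from_zero)   # [-3, -2, -1, 1, 2, 3]
```

Note the `Decimal(str(v))` conversion: constructing a `Decimal` from the string avoids the binary floating-point representation error that would otherwise make some halves round unexpectedly.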
#### Testing Given that there are different ways of rounding depending on the language used, it is a good idea to thoroughly [unit test](../testing-debugging/unit-testing) your functions to ensure that they behave as expected. ### Further Resources Spark at the ONS Articles: - [Unit Testing in Spark](../testing-debugging/unit-testing) PySpark Documentation: - [`spark.range()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.SparkSession.range.html) - [`F.round()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.round.html) - [`F.bround()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.bround.html) - [User Defined Functions](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.udf.html) sparklyr Documentation: - [`sdf_seq()`](https://spark.rstudio.com/packages/sparklyr/latest/reference/sdf_seq.html) - [`spark_apply()`](https://spark.rstudio.com/packages/sparklyr/latest/reference/spark_apply.html) Spark SQL Documentation: - [`round`](https://spark.apache.org/docs/latest/api/sql/index.html#round) - [`bround`](https://spark.apache.org/docs/latest/api/sql/index.html#bround)
``` # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # Input data files are available in the read-only "../input/" directory # For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory import os for dirname, _, filenames in os.walk('/kaggle/input'): for filename in filenames: print(os.path.join(dirname, filename)) # You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All" # You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session import gc import traceback import datatable as dt import datetime import numpy as np import pandas as pd import seaborn as sns import tensorflow as tf from tensorflow import keras import matplotlib.pyplot as plt import pandas as pd, numpy as np from tensorflow.keras import layers import tensorflow_probability as tfp import tensorflow.keras.backend as K from sklearn.model_selection import KFold from sklearn.metrics import mean_squared_error from sklearn.preprocessing import RobustScaler, MinMaxScaler from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator from sklearn.linear_model import LinearRegression import matplotlib.pyplot as plt # from ta import add_all_ta_features # from ta.utils import dropna # Set graph style and font sns.set() # Change the axes' title and label size to 18 & 16 by default and default figure size, and make title bold # Axes formatter limit will only display scientific notation if it's > 10^7 (or 10 million JPY) or < 10^-5 plt.rcParams.update({'axes.titleweight': 'bold','figure.figsize': (16,10),'axes.titlesize': 
18,'axes.labelsize': 16, 'legend.fontsize': 12, 'xtick.labelsize': 12, 'ytick.labelsize': 12, 'font.family': 'serif', 'axes.formatter.limits':'-5, 7'}) ``` # Loading data ``` # For Kaggle use only directory_path = "/kaggle/input/392-crypto-currency-pairs-at-minute-resolution/" BTC = pd.read_csv(directory_path + 'btcusd.csv') print(BTC.head()) ETH = pd.read_csv(directory_path+'ethusd.csv') print(ETH.head()) LTC = pd.read_csv(directory_path + 'ltcusd.csv') print(LTC.head()) # Convert to human timestamp BTC['time'] = pd.to_datetime(BTC['time'], unit='ms') ETH['time'] = pd.to_datetime(ETH['time'], unit='ms') LTC['time'] = pd.to_datetime(LTC['time'], unit='ms') BTC.describe(include='all') ETH.describe(include='all') LTC.describe(include='all') # Make copies of these df before doing further BTC_copy = BTC.copy() ETH_copy = ETH.copy() LTC_copy = LTC.copy() # Set time as index for plotting BTC.set_index('time', inplace=True) ETH.set_index('time', inplace=True) LTC.set_index('time', inplace=True) BTC ``` # EDA ``` plt.plot(BTC.index, BTC.close) plt.xlabel('Date') plt.ylabel('Price in USD') plt.title('Price of BTC over years') plt.show() plt.plot(ETH.index, ETH.close) plt.xlabel('Date') plt.ylabel('Price in USD') plt.title('Price of ETH over years') plt.show() plt.plot(LTC.index, LTC.close) plt.xlabel('Date') plt.ylabel('Price in USD') plt.title('Price of LTC over years') plt.show() print(BTC.isnull().sum()) print(ETH.isnull().sum()) print(LTC.isnull().sum()) ``` No null values at all, no need to drop any data ``` # Use only data from the last 2 years for modelling BTC_2yr = BTC['2020-01-01':] ETH_2yr = ETH['2020-01-01':] LTC_2yr = LTC['2020-01-01':] BTC_2yr ``` # Feature engineering ``` def upper_shadow(df): return df['high'] - np.maximum(df['close'], df['open']) def lower_shadow(df): return np.minimum(df['close'], df['open']) - df['low'] def get_features(df, row = False): df_feat = df df_feat['spread'] = df_feat['high'] - df_feat['low'] df_feat['upper_shadow'] = 
upper_shadow(df_feat)
    df_feat['lower_shadow'] = lower_shadow(df_feat)
    df_feat['close-open'] = df_feat['close'] - df_feat['open']
    df_feat['SMA_7'] = df_feat.iloc[:,1].rolling(window=7).mean()
    df_feat['SMA_14'] = df_feat.iloc[:,1].rolling(window=14).mean()
    df_feat['SMA_21'] = df_feat.iloc[:,1].rolling(window=21).mean()
    # Create the STD_DEV feature for the past 7 days
    df_feat['STD_DEV_7'] = df_feat.iloc[:,1].rolling(window=7).std()
    # Drop the NA rows created by the SMA indicators
    df_feat.dropna(inplace = True)
    return df_feat

BTC_2yr = get_features(BTC_2yr)
BTC_2yr

plt.plot(BTC_2yr.index, BTC_2yr['close'])
plt.show()

BTC_y = BTC_2yr['close']
BTC_X = BTC_2yr.drop('close', axis=1)
BTC_X
```

# Modelling

## Baseline model: linear regression

```
# 70% for training, 30% for testing
index_70pct = int(len(BTC_X)*0.7)
BTC_X_train = BTC_X[:index_70pct]
BTC_X_test = BTC_X[index_70pct:]
BTC_y_train = BTC_y[:index_70pct]
BTC_y_test = BTC_y[index_70pct:]
print(BTC_X_train)
print(BTC_y_test)

linreg = LinearRegression()
linreg.fit(BTC_X_train, BTC_y_train)
BTC_y_pred = linreg.predict(BTC_X_test)
BTC_y_pred

len(BTC_y_test)

mse = mean_squared_error(BTC_y_test, BTC_y_pred)
mse

plt.plot(BTC_y_train.index, BTC_y_train, color = 'y', label ='Training prices')
plt.plot(BTC_y_test.index, BTC_y_pred, color = 'b', label ='Predicted prices')
plt.legend()
plt.show()
```
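The 70/30 split above is deliberately chronological: a randomly shuffled split would leak future prices into the training set. The idea generalizes to a small helper, sketched here in plain Python (the function name is illustrative, not from the notebook):

```python
def chrono_split(values, train_frac=0.7):
    """Split an ordered sequence chronologically; no shuffling, so no look-ahead."""
    cut = int(len(values) * train_frac)
    return values[:cut], values[cut:]

prices = list(range(100))               # stand-in for an ordered price series
train_part, test_part = chrono_split(prices)

# Everything in the test window comes strictly after the training window
assert len(train_part) == 70 and len(test_part) == 30
assert max(train_part) < min(test_part)
```

For more robust evaluation, the same idea extends to walk-forward validation (e.g. scikit-learn's `TimeSeriesSplit`), where each successive test fold follows its training fold in time.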
# Final exam **Note:** Use these guidelines if and only if you are taking the **final exam**. If you are working on a **final project of your own design**, see the (separate) [final project guidelines](https://github.com/wilkens-teaching/info3350-s22/blob/main/final_exam/project.ipynb). ## Guidelines This exam is for **undergraduates enrolled in INFO 3350**. If you are a graduate student enrolled in INFO 6350, you must complete a final project of your own design. ### The task Your task is to: identify an interesting problem that's addressable with the help of computational methods applied to the supplied corpus, formulate a hypothesis about that problem, devise an experiment or experiments to test your hypothesis, present the results of your investigations, and discuss your findings. This workflow essentially replicates the process of writing an academic paper. You can think of your exam as a paper in miniature. You are free to present each component as you see fit. You should use free-form text (that is, your own writing in a markdown cell), citations of others' work, numerical results, tables of data, and static and/or interactive visualizations as appropriate. Total length is flexible and depends on the specific balance you strike between the ambition of your question and the sophistication of your methods. But be aware that numbers never, ever speak for themselves. Quantitative results presented without substantial discussion will not earn high marks. Your project should reflect, at minimum, ten or more hours of work, though you will be graded on the quality of your output, not the amount of time it took you to produce it. #### Pick an important and interesting problem! No amount of technical sophistication will overcome a fundamentally uninteresting problem at the core of your work. You have seen many pieces of successful computational humanities research over the course of the semester. 
You might use these as a guide to the kinds of problems that interest scholars in a range of humanities disciplines. You may also want to spend some time in the library, reading recent books and articles in the professional literature. **Problem selection and motivation are integral parts of the project.** Do not neglect them. ### The corpus We have supplied you (via the course GitHub site) with a corpus of 1,540 volumes of American fiction published between 1789 and 1875, as well as a range of potentially relevant metadata. This corpus is large: it contains well over 100 million words. Some summary and descriptive statistics are included below, along with a short annotation of the metadata fields. **Be aware that some (but certainly not all) text analysis tasks will be slow (or impossible) when run over a corpus as large as this one.** For comparison purposes, the album review dataset we used for homework 8 contained about 10% as many words (but a lot more total documents). You might consider whether or not your question requires the use of the full corpus. Books in the corpus are those that were included in volumes 1 and 2 of Lyle Wright's three-volume bibliography of American fiction before 1900 and that were digitized by the University of Virginia (1789-1850) and Indiana University (1851-1875). This corpus includes about 40% of the American fiction from the period (1789-1875) that has been preserved in American academic libraries. You might think a little about what kinds of books are most likely to have found their way first into print and then into academic libraries, and what kinds of books (and authors) might not have. Metadata were collected manually by a team of undergraduate students at the University of Notre Dame. **Note that the nineteenth century was awful.** These books reflect that fact in all kinds of ways, even though (or maybe because) they were generally considered unproblematic at the time. 
If you read the books or dig very far into the most informative features, you will quickly discover objectionable content. It would be valuable to devise (and you will be rewarded for devising) methods to avoid displaying unmasked versions of racial slurs, for example, in any visualization that might otherwise include them.

### Format

You should submit your exam as a report in the form of a Jupyter notebook that includes all code, figures, and write-up. Your report should have four basic sections (provided in cells below for ease of reference and reuse):

1. **Introduction and hypothesis.** What problem are you working on? Why is it interesting and important? What have other people said about it? What do you expect to find?
2. **Corpus, data, and methods.** What data have you used? What are the limitations of that data? What major methods will you use to analyze it? Why are those methods the appropriate ones?
3. **Results.** What did you find? How did you find it? How should we read your figures? Be sure to include confidence intervals or other measures of statistical significance or uncertainty where appropriate.
4. **Discussion and conclusions.** What does it all mean? Do your results support your hypothesis? Why or why not? What are the limitations of your study and how might those limitations be addressed in future work?

Within each of those sections, you may use as many code and markdown cells as you like. You may, of course, address additional questions or issues not listed above. You may also gather additional data or metadata relevant to your analysis, but you are not required to do so. All code used in the project should be present in the notebook (except for widely-available libraries that you import), but **be sure that we can read and understand your report in full without rerunning the code**. Unexecuted code will receive no credit.
Be sure, too, to explain what you're doing along the way, both by describing your data and methods and by writing clean, well-commented code.

### Grading

This exam is the take-home final for the course. It is worth 20% of your overall grade. You will be graded on the quality and ambition of each aspect of the project. No single component is more important than the others.

### Practical details

* The exam is due at **11:59pm EST on Thursday, May 19, 2022** via upload of a single, fully executed Jupyter notebook file to CMS.
* **You must work alone.** You may not collaborate with others.
* You may post questions on Ed, but should do so privately (visible to course staff only).
* Interactive visualizations do not always work when embedded in shared notebooks. If you plan to use interactives, you may need to host them elsewhere and link to them.

---

## 1. Introduction and hypothesis

## 2. Data and methods

```
# Imports
import os
import pandas as pd

# File locations
# Note that metadata are supplied as a TSV file
# Text files are in a directory, one file per (long, novel-like) document
metadata_file = os.path.join('..', 'data', 'us_fiction', 'corpus_data.tsv')
text_dir = os.path.join('..', 'data', 'us_fiction', 'us_texts')

# Load the metadata
metadata = pd.read_csv(
    metadata_file,
    sep='\t',
    low_memory=False
).set_index('source_id')
```

### Corpus details

The cells below are supplied to help you understand the corpus. **You should remove them from your completed exam** and include only the information you deem relevant to your report. That said, you are free to keep the metadata-loading code above and you may copy any and all of the other code below for your own purposes.

```
# Glance at the metadata
metadata.head()

# Summary stats for numeric columns
metadata.describe()
```

### Field definitions and distributional stats

Most of the metadata fields are self-explanatory, but here are some details. Note that not every field in the metadata is described below.
* `source_id`: This is the name of the file corresponding to the volume. You can use it to match metadata records to full-text documents. Note that the corpus includes a nontrivial number of multivolume works. These volumes have `source_id`s like `eaf086v1` or `Wright2-1720v2`.
* `gender`: Author gender. `M`, `F`, or `NaN` (= unknown).
* `gender_guess`: Was the author gender assignment determined by biographical research (`0`) or by guessing on the basis of the author's name (`1`)?
* `ethnicity`: Author ethnicity. One of `White`, `Black`, `Native`, or `NaN` (= unknown). Always assigned via biographical research. Not very useful, as the values are almost exclusively `White` or `NaN`. This fact tells you something about the US literary field in the nineteenth century.
* `occupation` and `occupation_free`: The author's primary employment identification. Recall that the US in the nineteenth century didn't always have a large market for novels, so many of the authors in the corpus made their living by other means. The difference between these fields is that `occupation` uses a fixed vocabulary, while `occupation_free` does not (so includes more detailed or fine-grained information).
* `state_*`: The state in which the author was `born`, `died`, and with which they are conventionally associated (`main`).
* `born` and `died`: Year of the author's birth and death, respectively, where known.

```
# Occurrence counts for selected metadata fields
for col in ['gender', 'gender_guess', 'ethnicity', 'occupation', 'state_main']:
    display(metadata[col].value_counts())
    print()

# Distribution of publication dates
metadata.pub_date.plot.hist(bins=metadata.pub_date.max()-metadata.pub_date.min()+1);

# Distribution of volume lengths
# Note removal of long volumes
metadata.loc[metadata.words.between(0,250000)].words.plot.hist(bins=100);
```

The corpus includes some very long and very short volumes. Think about what you want to do with outliers.
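One way to handle length outliers is to trim or flag volumes by word count before analysis. The sketch below uses a toy stand-in DataFrame (the real metadata frame has the same `words` column described above); the cutoffs and quantiles are purely illustrative, not a recommended policy:

```python
import pandas as pd

# Toy stand-in for the corpus metadata; only the `words` column
# (volume length in words) matters for this sketch.
metadata = pd.DataFrame(
    {'words': [1200, 85000, 92000, 110000, 640000]},
    index=['v1', 'v2', 'v3', 'v4', 'v5'],
)

# Policy 1: keep volumes whose length falls between fixed bounds,
# dropping pamphlet-length and multivolume-length outliers.
lo, hi = 10_000, 250_000
trimmed = metadata.loc[metadata.words.between(lo, hi)]

# Policy 2: flag outliers by quantile rather than fixed cutoffs.
q_lo, q_hi = metadata.words.quantile([0.05, 0.95])
flagged = metadata.loc[~metadata.words.between(q_lo, q_hi)]

print(trimmed.index.tolist())  # -> ['v2', 'v3', 'v4']
print(flagged.index.tolist())  # -> ['v1', 'v5']
```

Whether you drop, winsorize, or simply annotate outliers depends on your question; just document the choice in your methods section.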
You'll also want to think about whether or not to break each volume into chunks; this is a good idea for some purposes, but not for others.

## 3. Results

## 4. Discussion and conclusions
# Image Classification

In this project, you'll classify images from the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html). The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.

## Get the Data

Run the following cell to download the [CIFAR-10 dataset for python](https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz).

```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile

cifar10_dataset_folder_path = 'cifar-10-batches-py'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile('cifar-10-python.tar.gz'):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            'cifar-10-python.tar.gz',
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open('cifar-10-python.tar.gz') as tar:
        tar.extractall()

tests.test_folder_path(cifar10_dataset_folder_path)
```

## Explore the Data

The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named `data_batch_1`, `data_batch_2`, etc.
Each batch contains the labels and images that are one of the following:

* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck

Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the `batch_id` and `sample_id`. The `batch_id` is the id for a batch (1-5). The `sample_id` is the id for an image and label pair in the batch.

Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.

```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np
import tensorflow as tf

# Explore the dataset
batch_id = 1
sample_id = 89
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
```

## Implement Preprocess Functions

### Normalize

In the cell below, implement the `normalize` function to take in image data, `x`, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as `x`.

```
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data. The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    return x / 255


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
```

### One-hot encode

Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the `one_hot_encode` function. The input, `x`, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to `one_hot_encode`. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.

```
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    one_hot = np.zeros((len(x), 10), np.int_)
    for i, label in enumerate(x):
        one_hot[i][label] = 1
    return one_hot


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
```

### Randomize Data

As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.

## Preprocess all the data and save it

Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.

```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
```

# Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.

```
import tensorflow as tf

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
```

## Build the network

For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.

If you're finding it hard to dedicate enough time to this course each week, we've provided a small shortcut to this part of the project.
In the next couple of problems, you'll have the option to use [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) to build each layer, except the "Convolutional & Max Pooling" layer. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.

If you would like to get the most out of this course, try to solve all the problems without TF Layers. Let's begin!

### Input

The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions:

* Implement `neural_net_image_input`
 * Return a [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder)
 * Set the shape using `image_shape` with batch size set to `None`.
 * Name the TensorFlow placeholder "x" using the TensorFlow `name` parameter in the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder).
* Implement `neural_net_label_input`
 * Return a [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder)
 * Set the shape using `n_classes` with batch size set to `None`.
 * Name the TensorFlow placeholder "y" using the TensorFlow `name` parameter in the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder).
* Implement `neural_net_keep_prob_input`
 * Return a [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) for dropout keep probability.
 * Name the TensorFlow placeholder "keep_prob" using the TensorFlow `name` parameter in the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder).

These names will be used at the end of the project to load your saved model.

Note: `None` for shapes in TensorFlow allows for a dynamic size.

```
def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    return tf.placeholder(tf.float32, [None, *image_shape], "x")


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    return tf.placeholder(tf.float32, [None, n_classes], "y")


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    return tf.placeholder(tf.float32, name="keep_prob")


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
```

### Convolution and Max Pooling Layer

Convolution layers have a lot of success with images. For this code cell, you should implement the function `conv2d_maxpool` to apply convolution then max pooling:

* Create the weight and bias using `conv_ksize`, `conv_num_outputs` and the shape of `x_tensor`.
* Apply a convolution to `x_tensor` using weight and `conv_strides`.
 * We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using `pool_ksize` and `pool_strides`.
 * We recommend you use same padding, but you're welcome to use any padding.

Note: You **can't** use [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) for this layer. You're free to use any TensorFlow package for all the other layers.
```
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for convolution
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # Used to rule out errors in my network:
    #conv_layer = tf.layers.conv2d(x_tensor, conv_num_outputs, conv_ksize, conv_strides, "SAME")
    #conv_layer = tf.layers.max_pooling2d(conv_layer, pool_ksize, pool_strides, "SAME")
    #return conv_layer

    color_channels = x_tensor.get_shape().as_list()[-1]

    # Before setting stddev=0.1, my training validation accuracy never got over 0.1.
    # After reading some TensorFlow source code, I found the keyword "stddev", then searched for it in Slack.
    # Gotcha! Accuracy got to 0.2!
    weights = tf.Variable(tf.truncated_normal([*conv_ksize, color_channels, conv_num_outputs], stddev=0.1))
    # ~~But it stopped at 0.2, and that's because I added extra `[]` in the `tf.zeros` param!~~
    # tf.zeros([3]) is the same as tf.zeros(3); it must have been some error in the ConvNet architecture, not the `[]`!
    biases = tf.Variable(tf.zeros([conv_num_outputs]))

    conv_layer = tf.nn.conv2d(x_tensor, weights, strides=[1, *conv_strides, 1], padding="SAME")
    conv_layer = tf.nn.bias_add(conv_layer, biases)
    conv_layer = tf.nn.relu(conv_layer)
    conv_layer = tf.nn.max_pool(conv_layer,
                                ksize=[1, *pool_ksize, 1],
                                strides=[1, *pool_strides, 1],
                                padding='SAME')
    return conv_layer


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
```

### Flatten Layer

Implement the `flatten` function to change the dimension of `x_tensor` from a 4-D tensor to a 2-D tensor.
The output should be the shape (*Batch Size*, *Flattened Image Size*). You can use [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) for this layer.

```
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # Implemented manually; the commented-out contrib layer below is an equivalent alternative
    size = 1
    for x in x_tensor.get_shape().as_list():
        if x is not None:
            size = size * x
    return tf.reshape(x_tensor, [-1, size])
    # return tf.contrib.layers.flatten(x_tensor)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
```

### Fully-Connected Layer

Implement the `fully_conn` function to apply a fully connected layer to `x_tensor` with the shape (*Batch Size*, *num_outputs*). You can use [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) for this layer.

```
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    input_size = x_tensor.get_shape().as_list()[1]
    # stddev=0.1 saves me here as well
    weights = tf.Variable(tf.truncated_normal([input_size, num_outputs], stddev=0.1))
    biases = tf.Variable(tf.zeros([num_outputs]))
    fully_connected = tf.add(tf.matmul(x_tensor, weights), biases)
    fully_connected = tf.nn.relu(fully_connected)
    return fully_connected
    # return tf.contrib.layers.fully_connected(x_tensor, num_outputs)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
```

### Output Layer

Implement the `output` function to apply a fully connected layer to `x_tensor` with the shape (*Batch Size*, *num_outputs*). You can use [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) for this layer.

Note: Activation, softmax, or cross entropy shouldn't be applied to this.

```
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    input_size = x_tensor.get_shape().as_list()[1]
    weights = tf.Variable(tf.truncated_normal([input_size, num_outputs], stddev=0.1))
    biases = tf.Variable(tf.zeros([num_outputs]))
    out = tf.add(tf.matmul(x_tensor, weights), biases)
    return out
    # return tf.contrib.layers.fully_connected(x_tensor, num_outputs, None)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
```

### Create Convolutional Model

Implement the function `conv_net` to create a convolutional neural network model. The function takes in a batch of images, `x`, and outputs logits.
Use the layers you created above to create this model:

* Apply 1, 2, or 3 Convolution and Max Pool layers
* Apply a Flatten Layer
* Apply 1, 2, or 3 Fully Connected Layers
* Apply an Output Layer
* Return the output
* Apply [TensorFlow's Dropout](https://www.tensorflow.org/api_docs/python/tf/nn/dropout) to one or more layers in the model using `keep_prob`.

```
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different number of outputs, kernel size and stride
    # Function Definition from Above:
    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    # After exploring around, I found an architecture good enough to get 0.4 at the third epoch
    conv1 = conv2d_maxpool(x, 12, (3, 3), (1, 1), (2, 2), (2, 2))
    conv2 = conv2d_maxpool(conv1, 24, (3, 3), (1, 1), (2, 2), (2, 2))
    conv3 = conv2d_maxpool(conv2, 48, (3, 3), (1, 1), (2, 2), (2, 2))

    # TODO: Apply a Flatten Layer
    # Function Definition from Above:
    #   flatten(x_tensor)
    flat = flatten(conv3)

    # TODO: Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different number of outputs
    # Function Definition from Above:
    #   fully_conn(x_tensor, num_outputs)
    fc1 = fully_conn(flat, 576)
    fc1 = tf.nn.dropout(fc1, keep_prob)
    fc2 = fully_conn(fc1, 384)
    fc2 = tf.nn.dropout(fc2, keep_prob)
    fc3 = fully_conn(fc2, 192)
    fc3 = tf.nn.dropout(fc3, keep_prob)

    # TODO: Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    #   output(x_tensor, num_outputs)
    out = output(fc3, 10)

    # TODO: return output
    return out


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc.
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)
```

## Train the Neural Network

### Single Optimization

Implement the function `train_neural_network` to do a single optimization. The optimization should use `optimizer` to optimize in `session` with a `feed_dict` of the following:

* `x` for image input
* `y` for labels
* `keep_prob` for keep probability for dropout

This function will be called for each batch, so `tf.global_variables_initializer()` has already been called.

Note: Nothing needs to be returned. This function is only optimizing the neural network.

```
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    session.run(optimizer, {x: feature_batch, y: label_batch, keep_prob: keep_probability})


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
```

### Show Stats

Implement the function `print_stats` to print loss and validation accuracy. Use the global variables `valid_features` and `valid_labels` to calculate validation accuracy. Use a keep probability of `1.0` to calculate the loss and validation accuracy.
```
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    train_loss = session.run(cost, {x: feature_batch, y: label_batch, keep_prob: 1.})
    valid_loss = session.run(cost, {x: valid_features, y: valid_labels, keep_prob: 1.})
    valid_acc = session.run(accuracy, {x: valid_features, y: valid_labels, keep_prob: 1.})
    print('Train Loss: {:>10.6f}, Validation Loss: {:>10.6f}, Validation Accuracy: {:.6f}'
          .format(train_loss, valid_loss, valid_acc))
```

### Hyperparameters

Tune the following parameters:

* Set `epochs` to the number of iterations until the network stops learning or starts overfitting
* Set `batch_size` to the highest number that your machine has memory for. Most people set them to common sizes of memory:
 * 64
 * 128
 * 256
 * ...
* Set `keep_probability` to the probability of keeping a node using dropout

```
# TODO: Tune Parameters
# Watching the **validation loss**, it starts increasing around epoch 35.
# That's why test accuracy decreased from 66.12% to 64.87% when I tried 500 epochs.
# AWS is awesome! On my MacBook Pro (15-inch, 2016), one epoch takes 6s;
# AWS needs less than 1s!
epochs = 20
batch_size = 256
keep_probability = 0.5
```

### Train on a Single CIFAR-10 Batch

Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import time

print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())

    # Training cycle
    for epoch in range(epochs):
        start = time.time()
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)
        end = time.time()
        print("time: ", end - start)
```

### Fully Train the Model

Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.

```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())

    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)

    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)
```

# Checkpoint

The model has been saved to disk.

## Test Model

Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import tensorflow as tf
import pickle
import helper
import random

# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64

save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3

def test_model():
    """
    Test the saved model against the test dataset
    """
    test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')

        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0

        for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)


test_model()
```

## Why 50-70% Accuracy?
You might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores [well above 70%](http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#43494641522d3130). That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.

## Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_image_classification.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
# FRED API Tutorial

This tutorial aims to be a quick guide to get you started using the FRED API integrated into Messari's python library.

```
from messari.fred import FRED
API_KEY='your_api_key'
fred = FRED(api_key=API_KEY)
```

## API Structure

The FRED Python client contains a number of functions that wrap some of FRED's API endpoints. These include:

* Categories
* Releases
* Series
* Sources
* Tags

Below are a few examples to showcase the functionality and types of data each function generates.

## Categories

Functions to return information about categories tracked by the FRED

```
categories = [125, 124]
```

### get_category

Get a category

```
categories_df = fred.get_category(categories)
categories_df.head()
```

### get_category_children

Get the child categories for a specified parent category

```
children_df = fred.get_category_children([1, 10])
children_df.head()
```

### get_category_related

Get the related categories for a category

```
categories_related_df = fred.get_category_related('32073')
categories_related_df.head()
```

### get_category_series

Get the series in a category

```
categories_series_df = fred.get_category_series(categories)
categories_series_df.head()
```

### get_category_tags

Get the tags for a category

```
categories_tags_df = fred.get_category_tags(categories)
categories_tags_df.head()
```

### get_category_related_tags

Get the related tags for a category

```
tag_names = ['services', 'quarterly']
categories_related_tags_df = fred.get_category_related_tags(categories, tag_names)
categories_related_tags_df.head()
```

## Releases

Functions to return information about releases tracked by the FRED

```
# Setup here
releases = [9, 262]
```

### get_releases

Get all releases of economic data

```
releases_df = fred.get_releases()
releases_df.head()
```

### get_releases_dates

Get release dates for all releases of economic data

```
releases_dates_df = fred.get_releases_dates()
releases_dates_df.head()
```

### get_release

Get a release of economic data

```
release_df = fred.get_release(releases) release_df.head() ``` ### get_release_dates Get release dates for a release of economic data ``` release_dates_df = fred.get_release_dates(releases) release_dates_df.head() ``` ### get_release_series Get the series on a release of economic data ``` release_series_df = fred.get_release_series(releases) release_series_df.head() ``` ### get_release_sources Get the sources for a release of economic data ``` release_sources_df = fred.get_release_sources(releases) release_sources_df.head() ``` ### get_release_tags Get the tags for a release ``` release_tags_df = fred.get_release_tags(releases) release_tags_df.head() ``` ### get_release_related_tags Get the related tags for a release ``` tag_names = ['sa', 'foreign'] release_related_tags_df = fred.get_release_related_tags('86', tag_names) release_related_tags_df.head() ``` ## Series Functions to return information about series tracked by the FRED ``` # Setup here series = ['GNPCA', 'DGS10'] ``` ### get_series Get an economic data series ``` series_df = fred.get_series(series) series_df.head() ``` ### get_series_categories Get the categories for an economic data series ``` series_categories_df = fred.get_series_categories(series) series_categories_df.head() ``` ### get_series_observations Get the observations or data values for an economic data series ``` treasuries = ['DGS1MO','DGS3MO','DGS1','DGS2', 'DGS5', 'DGS7', 'DGS10', 'DGS20', 'DGS30'] series_observations_df = fred.get_series_observations(treasuries) series_observations_df import numpy as np series_observations_df['2020-01-01':].astype(np.float64).plot(figsize=(20,10)) ``` ### get_series_release Get the release for an economic data series ``` series_release_df = fred.get_series_release(series) series_release_df ``` ### get_series_tags Get the tags for an economic data series ``` series_tags_df = fred.get_series_tags(series) series_tags_df ``` ### get_series_updates Get economic data series sorted by when observations were
updated on the FRED server ``` series_updates_df = fred.get_series_updates(start_date='2013-01-01', end_date='2020-06-01') series_updates_df ``` ### get_series_vintagedates Get the dates in history when a series' data values were revised or new data values were released ``` series_vintagedates_df = fred.get_series_vintagedates(series) series_vintagedates_df ``` ## Sources Functions to return information about sources tracked by the FRED ``` # Setup here sources = [1, 3] ``` ### get_sources Get all sources of economic data ``` sources_df = fred.get_sources() sources_df.head() ``` ### get_source Get a source of economic data ``` source_df = fred.get_source(sources) source_df.head() ``` ### get_source_releases Get the releases for a source ``` source_releases_df = fred.get_source_releases(sources) source_releases_df.head() ``` ## Tags Functions to return information about tags tracked by the FRED ``` # Setup here tags = ['slovenia', 'food', 'oecd'] ``` ### get_tags Get all tags, search for tags, or get tags by name ``` tags_df = fred.get_tags() tags_df.head() ``` ### get_related_tags Get the related tags for one or more tags ``` related_tags_df = fred.get_related_tags(tags) related_tags_df.head() ``` ### get_tags_series Get the series matching tags ``` tags_series_df = fred.get_tags_series(tags) tags_series_df.head() ```
---
``` from typing import Dict from tempfile import gettempdir import matplotlib.pyplot as plt import numpy as np import torch from torch import nn, optim from torch.utils.data import DataLoader from torchvision.models.resnet import resnet50 from tqdm import tqdm from l5kit.configs import load_config_data from l5kit.data import LocalDataManager, ChunkedDataset from l5kit.dataset import AgentDataset, EgoDataset from l5kit.rasterization import build_rasterizer from l5kit.evaluation import write_pred_csv, compute_metrics_csv, read_gt_csv, create_chopped_dataset from l5kit.evaluation.chop_dataset import MIN_FUTURE_STEPS from l5kit.evaluation.metrics import neg_multi_log_likelihood, time_displace from l5kit.geometry import transform_points from l5kit.visualization import PREDICTED_POINTS_COLOR, TARGET_POINTS_COLOR, draw_trajectory from prettytable import PrettyTable from pathlib import Path import os ``` ## Prepare Data path and load cfg By setting the `L5KIT_DATA_FOLDER` variable, we can point the script to the folder where the data lies. Then, we load our config file with relative paths and other configurations (rasteriser, training params...). ``` # set env variable for data os.environ["L5KIT_DATA_FOLDER"] = "PATH_TO_DATA" dm = LocalDataManager(None) # get config cfg = load_config_data("./agent_motion_config.yaml") print(cfg) ``` ## Model Our baseline is a simple `resnet50` pretrained on `imagenet`. We must replace the input and the final layer to address our requirements. 
``` def build_model(cfg: Dict) -> torch.nn.Module: # load pre-trained Conv2D model model = resnet50(pretrained=True) # change input channels number to match the rasterizer's output num_history_channels = (cfg["model_params"]["history_num_frames"] + 1) * 2 num_in_channels = 3 + num_history_channels model.conv1 = nn.Conv2d( num_in_channels, model.conv1.out_channels, kernel_size=model.conv1.kernel_size, stride=model.conv1.stride, padding=model.conv1.padding, bias=False, ) # change output size to (X, Y) * number of future states num_targets = 2 * cfg["model_params"]["future_num_frames"] model.fc = nn.Linear(in_features=2048, out_features=num_targets) return model def forward(data, model, device, criterion): inputs = data["image"].to(device) target_availabilities = data["target_availabilities"].unsqueeze(-1).to(device) targets = data["target_positions"].to(device) # Forward pass outputs = model(inputs).reshape(targets.shape) loss = criterion(outputs, targets) # not all the output steps are valid, but we can filter them out from the loss using availabilities loss = loss * target_availabilities loss = loss.mean() return loss, outputs ``` ## Load the Train Data Our data pipeline maps a raw `.zarr` folder into a multi-processing instance ready for training by: - loading the `zarr` into a `ChunkedDataset` object. This object has a reference to the different arrays in the zarr (e.g.
agents and traffic lights); - wrapping the `ChunkedDataset` into an `AgentDataset`, which inherits from the torch `Dataset` class; - passing the `AgentDataset` into a torch `DataLoader` ``` # ===== INIT DATASET train_cfg = cfg["train_data_loader"] rasterizer = build_rasterizer(cfg, dm) train_zarr = ChunkedDataset(dm.require(train_cfg["key"])).open() train_dataset = AgentDataset(cfg, train_zarr, rasterizer) train_dataloader = DataLoader(train_dataset, shuffle=train_cfg["shuffle"], batch_size=train_cfg["batch_size"], num_workers=train_cfg["num_workers"]) print(train_dataset) # ==== INIT MODEL device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model = build_model(cfg).to(device) optimizer = optim.Adam(model.parameters(), lr=1e-3) criterion = nn.MSELoss(reduction="none") ``` # Training note: if you're on macOS and using the `py_satellite` rasterizer, you may need to disable opencv multiprocessing by adding `cv2.setNumThreads(0)` before the following cell. This seems to affect only notebook execution, and it's caused by the `cv2.warpAffine` function ``` # ==== TRAIN LOOP tr_it = iter(train_dataloader) progress_bar = tqdm(range(cfg["train_params"]["max_num_steps"])) losses_train = [] for _ in progress_bar: try: data = next(tr_it) except StopIteration: tr_it = iter(train_dataloader) data = next(tr_it) model.train() torch.set_grad_enabled(True) loss, _ = forward(data, model, device, criterion) # Backward pass optimizer.zero_grad() loss.backward() optimizer.step() losses_train.append(loss.item()) progress_bar.set_description(f"loss: {loss.item()} loss(avg): {np.mean(losses_train)}") ``` ### Plot Loss Curve We can plot the train loss against the iterations (batch-wise) ``` plt.plot(np.arange(len(losses_train)), losses_train, label="train loss") plt.legend() plt.show() ``` # Evaluation Evaluation follows a slightly different protocol than training. When working with time series, we must be absolutely sure to avoid leaking the future in the data.
If we followed the same protocol as training, one could just read ahead in the `.zarr` and forge a perfect solution at run-time, even for a private test set. As such, **the private test set for the competition has been "chopped" using the `chop_dataset` function**. ``` # ===== GENERATE AND LOAD CHOPPED DATASET num_frames_to_chop = 100 eval_cfg = cfg["val_data_loader"] eval_base_path = create_chopped_dataset(dm.require(eval_cfg["key"]), cfg["raster_params"]["filter_agents_threshold"], num_frames_to_chop, cfg["model_params"]["future_num_frames"], MIN_FUTURE_STEPS) ``` The result is that **each scene has been reduced to only 100 frames**, and **only valid agents in the 100th frame will be used to compute the metrics**. Because following frames in the scene have been chopped off, we can't just look ahead to get the future of those agents. In this example, we simulate this pipeline by running `chop_dataset` on the validation set. The function stores: - a new chopped `.zarr` dataset, in which each scene has only the first 100 frames; - a numpy mask array where only valid agents in the 100th frame are True; - a ground-truth file with the future coordinates of those agents; Please note how the total number of frames is now equal to the number of scenes multiplied by `num_frames_to_chop`.
The remaining frames in the scene have been successfully chopped off from the data ``` eval_zarr_path = str(Path(eval_base_path) / Path(dm.require(eval_cfg["key"])).name) eval_mask_path = str(Path(eval_base_path) / "mask.npz") eval_gt_path = str(Path(eval_base_path) / "gt.csv") eval_zarr = ChunkedDataset(eval_zarr_path).open() eval_mask = np.load(eval_mask_path)["arr_0"] # ===== INIT DATASET AND LOAD MASK eval_dataset = AgentDataset(cfg, eval_zarr, rasterizer, agents_mask=eval_mask) eval_dataloader = DataLoader(eval_dataset, shuffle=eval_cfg["shuffle"], batch_size=eval_cfg["batch_size"], num_workers=eval_cfg["num_workers"]) print(eval_dataset) ``` ### Storing Predictions There is a small catch to be aware of when saving the model predictions. The outputs of the model are coordinates in `agent` space and we need to convert them into displacements in `world` space. To do so, we first convert them back into the `world` space and we then subtract the centroid coordinates. ``` # ==== EVAL LOOP model.eval() torch.set_grad_enabled(False) # store information for evaluation future_coords_offsets_pd = [] timestamps = [] agent_ids = [] progress_bar = tqdm(eval_dataloader) for data in progress_bar: _, outputs = forward(data, model, device, criterion) # convert agent coordinates into world offsets agents_coords = outputs.cpu().numpy() world_from_agents = data["world_from_agent"].numpy() centroids = data["centroid"].numpy() coords_offset = [] for agent_coords, world_from_agent, centroid in zip(agents_coords, world_from_agents, centroids): coords_offset.append(transform_points(agent_coords, world_from_agent) - centroid[:2]) future_coords_offsets_pd.append(np.stack(coords_offset)) timestamps.append(data["timestamp"].numpy().copy()) agent_ids.append(data["track_id"].numpy().copy()) ``` ### Save results After the model has predicted trajectories for our evaluation set, we can save them in a `csv` file.
During the competition, only the `.zarr` and the mask will be provided for the private test set evaluation. Your solution is expected to generate a csv file which will be compared to the ground-truth one on a separate server. ``` pred_path = f"{gettempdir()}/pred.csv" write_pred_csv(pred_path, timestamps=np.concatenate(timestamps), track_ids=np.concatenate(agent_ids), coords=np.concatenate(future_coords_offsets_pd), ) ``` ### Perform Evaluation Please note that our metric supports multi-modal predictions (i.e. multiple predictions for a single GT trajectory). In that case, you will need to provide a confidence for each prediction (confidences must all be between 0 and 1 and sum to 1). In this simple example we don't generate multiple trajectories, so we won't pass any confidences vector. Internally, the metric computation will assume a single trajectory with confidence equal to 1 ``` metrics = compute_metrics_csv(eval_gt_path, pred_path, [neg_multi_log_likelihood, time_displace]) for metric_name, metric_mean in metrics.items(): print(metric_name, metric_mean) ``` ### Visualise Results We can also visualise some results from the ego (AV) point of view for those frames of interest (the 100th of each scene).
However, as we chopped off the future from the dataset **we must use the GT csv if we want to plot the future trajectories of the agents** ``` model.eval() torch.set_grad_enabled(False) # build a dict to retrieve future trajectories from GT gt_rows = {} for row in read_gt_csv(eval_gt_path): gt_rows[row["track_id"] + row["timestamp"]] = row["coord"] eval_ego_dataset = EgoDataset(cfg, eval_dataset.dataset, rasterizer) for frame_number in range(99, len(eval_zarr.frames), 100): # start from last frame of scene_0 and increase by 100 agent_indices = eval_dataset.get_frame_indices(frame_number) if not len(agent_indices): continue # get AV point-of-view frame data_ego = eval_ego_dataset[frame_number] im_ego = rasterizer.to_rgb(data_ego["image"].transpose(1, 2, 0)) center = np.asarray(cfg["raster_params"]["ego_center"]) * cfg["raster_params"]["raster_size"] predicted_positions = [] target_positions = [] for v_index in agent_indices: data_agent = eval_dataset[v_index] out_net = model(torch.from_numpy(data_agent["image"]).unsqueeze(0).to(device)) out_pos = out_net[0].reshape(-1, 2).detach().cpu().numpy() # store absolute world coordinates predicted_positions.append(transform_points(out_pos, data_agent["world_from_agent"])) # retrieve target positions from the GT and store as absolute coordinates track_id, timestamp = data_agent["track_id"], data_agent["timestamp"] target_positions.append(gt_rows[str(track_id) + str(timestamp)] + data_agent["centroid"][:2]) # convert coordinates to AV point-of-view so we can draw them predicted_positions = transform_points(np.concatenate(predicted_positions), data_ego["raster_from_world"]) target_positions = transform_points(np.concatenate(target_positions), data_ego["raster_from_world"]) draw_trajectory(im_ego, predicted_positions, PREDICTED_POINTS_COLOR) draw_trajectory(im_ego, target_positions, TARGET_POINTS_COLOR) plt.imshow(im_ego[::-1]) plt.show() ```
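The evaluation section above notes that multi-modal submissions need a per-trajectory confidence, with all confidences between 0 and 1 and summing to 1. A minimal sketch of producing valid confidences from raw per-mode scores via a numerically stable softmax (the scores below are made-up values, not outputs of the model above):

```python
import numpy as np

def to_confidences(scores):
    """Map raw per-mode scores to confidences in (0, 1) that sum to 1."""
    # subtract the max before exponentiating for numerical stability
    shifted = scores - scores.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

# hypothetical raw scores for three predicted trajectories of one agent
raw_scores = np.array([2.0, 0.5, -1.0])
confidences = to_confidences(raw_scores)
print(confidences, confidences.sum())
```

The same function works batch-wise on a `(num_agents, num_modes)` array thanks to the `axis=-1` reductions.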
---
``` import os import re from importlib import reload import numpy as np import pandas as pd from scipy.stats import norm from scipy import stats import seaborn as sns import matplotlib.pylab as plt from luescher_nd.database import utilities as ut from luescher_nd.database.utilities import DATA_FOLDER sns.set(context="notebook", style="ticks", font_scale=1) %load_ext blackcellmagic files = [f for f in os.listdir(DATA_FOLDER) if f.endswith(".sqlite") and not "tmp" in f] print("\n".join([f"{n:2d} {f}" for n, f in enumerate(files)])) files = [f for f in files if "spherical" in f and "a1g" in f and "0.0" in f] cols = ["L", "n1d", "nstep", "x", "contact_strength", "nlevel"] dfs = [] for f in files: df = ut.read_table(os.path.join(DATA_FOLDER, f), filter_degeneracy=True)[cols] df["precise"] = False if "less-prec" in f else True dfs.append(df) df = ( pd.concat(dfs, ignore_index=True) .set_index(["precise", "L", "n1d", "nstep", "nlevel"]) .sort_index() ) diff = (df.loc[True] - df.loc[False]).dropna() diff.head() fig, ax = plt.subplots(figsize=(2, 3), dpi=250) counts = diff.reset_index().groupby(["L", "n1d", "nstep"])[["x"]].count() y = 4 * np.arange(len(counts)) ax.barh(y, counts.values.flatten(), height=4) ax.set_xlabel("count") ax.set_yticks(y) ax.set_yticklabels(["$%1.0f$, $%3d$, %3d" % el for el in counts.index], fontsize=4) ax.set_ylabel("$L$, $n_{1d}$, $n_\mathrm{step}$") sns.despine() ax.set_title("Count of $x$-values") plt.show(fig) ``` # Distribution of $x$-values ``` grid = sns.FacetGrid( diff.reset_index(), row="n1d", col="L", hue="nstep", sharex="col", sharey=False, margin_titles=True, ) grid.map(sns.distplot, "x", norm_hist=True) grid.add_legend() grid.set_xlabels("$\Delta x$") sns.despine(grid.fig, left=True, trim=True) for ax in grid.axes.flatten(): ax.set_yticks([]) plt.show(grid.fig) ``` ``` grid = sns.FacetGrid(diff.reset_index(), col="L", row="n1d", sharex="col", sharey=False) grid.map(sns.distplot, "x", norm_hist=True) grid.add_legend() 
grid.set_xlabels("$\Delta x$") sns.despine(grid.fig, left=True, trim=True) for ax in grid.axes.flatten(): ax.set_yticks([]) plt.show(grid.fig) grid = sns.FacetGrid(diff.reset_index(), col="L", sharex=False, sharey=False) grid.map(sns.distplot, "x", norm_hist=True) grid.add_legend() grid.set_xlabels("$\Delta x$") sns.despine(grid.fig, left=True, trim=True) for ax in grid.axes.flatten(): ax.set_yticks([]) plt.show(grid.fig) vals = diff.x.dropna().sort_values() vals = vals[(np.abs(stats.zscore(vals)) < 2)] mu, std = norm.fit(vals) x = np.linspace(diff.x.min(), diff.x.max(), 1000) p = norm.pdf(x, mu, std) grid = sns.FacetGrid(diff.reset_index(), sharex=False, sharey=False, xlim=(-1.e-12, 1.e-12)) grid.map(sns.distplot, "x", norm_hist=True) grid.map(sns.distplot, "x", norm_hist=True) grid.add_legend() grid.set_xlabels("$\Delta x$") sns.despine(grid.fig, left=True, trim=True) for ax in grid.axes.flatten(): ax.set_yticks([]) ax.plot(x, p, label=f"$N(\mu={mu:1.2e}, \sigma={std:1.2e})$", color="green") ax.legend(frameon=False, fontsize=6, loc="upper left", bbox_to_anchor=(0.6, 0.5)) grid.fig.set_dpi(250) plt.show(grid.fig) ``` # Contact interaction ``` cdiff = diff.reset_index().groupby(["L", "n1d", "nstep"])[["contact_strength"]].mean() vals = cdiff.contact_strength.dropna().sort_values() vals = vals[(np.abs(stats.zscore(vals)) < 4)] mu, std = norm.fit(vals) x = np.linspace(vals.min(), vals.max(), 1000) p = norm.pdf(x, mu, std) grid = sns.FacetGrid(cdiff.reset_index(), sharex=False, sharey=False, xlim=(-5.e-15, 5.e-15)) grid.map(sns.distplot, "contact_strength", norm_hist=True) grid.map(sns.distplot, "contact_strength", norm_hist=True) grid.add_legend() grid.set_xlabels("$\Delta c$") sns.despine(grid.fig, left=True, trim=True) for ax in grid.axes.flatten(): ax.set_yticks([]) ax.plot(x, p, label=f"$N(\mu={mu:1.2e}, \sigma={std:1.2e})$", color="green") ax.legend(frameon=False, fontsize=6, loc="upper left", bbox_to_anchor=(0.6, 0.5)) grid.fig.set_dpi(250) plt.show(grid.fig) 
```
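The fits above discard outliers with a z-score cut (`np.abs(stats.zscore(vals)) < 2`) before estimating `mu` and `std`. In isolation the filter behaves like this (a standalone sketch on synthetic data, not the notebook's `diff` values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# mostly Gaussian noise, plus two gross outliers
vals = np.concatenate([rng.normal(0.0, 1.0, size=500), [25.0, -30.0]])

# keep only points within 2 sample standard deviations of the mean
kept = vals[np.abs(stats.zscore(vals)) < 2]

print(vals.std(), kept.std())  # the spread shrinks once the outliers are gone
```

Dropping the gross outliers first keeps them from inflating the fitted normal's standard deviation.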
---
``` import keras keras.__version__ ``` # 5.1 - Introduction to convnets This notebook contains the code sample found in Chapter 5, Section 1 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments. ---- First, let's take a practical look at a very simple convnet example. We will use our convnet to classify MNIST digits, a task that you've already been through in Chapter 2, using a densely-connected network (our test accuracy then was 97.8%). Even though our convnet will be very basic, its accuracy will still blow out of the water that of the densely-connected model from Chapter 2. The 6 lines of code below show you what a basic convnet looks like. It's a stack of `Conv2D` and `MaxPooling2D` layers. We'll see in a minute what they do concretely. Importantly, a convnet takes as input tensors of shape `(image_height, image_width, image_channels)` (not including the batch dimension). In our case, we will configure our convnet to process inputs of size `(28, 28, 1)`, which is the format of MNIST images. We do this via passing the argument `input_shape=(28, 28, 1)` to our first layer. ``` from keras import layers from keras import models model = models.Sequential() model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1))) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) ``` Let's display the architecture of our convnet so far: ``` model.summary() ``` You can see above that the output of every `Conv2D` and `MaxPooling2D` layer is a 3D tensor of shape `(height, width, channels)`. The width and height dimensions tend to shrink as we go deeper in the network. 
The number of channels is controlled by the first argument passed to the `Conv2D` layers (e.g. 32 or 64). The next step would be to feed our last output tensor (of shape `(3, 3, 64)`) into a densely-connected classifier network like those you are already familiar with: a stack of `Dense` layers. These classifiers process vectors, which are 1D, whereas our current output is a 3D tensor. So first, we will have to flatten our 3D outputs to 1D, and then add a few `Dense` layers on top: ``` model.add(layers.Flatten()) model.add(layers.Dense(64, activation='relu')) model.add(layers.Dense(10, activation='softmax')) ``` We are going to do 10-way classification, so we use a final layer with 10 outputs and a softmax activation. Now here's what our network looks like: ``` model.summary() ``` As you can see, our `(3, 3, 64)` outputs were flattened into vectors of shape `(576,)`, before going through two `Dense` layers. Now, let's train our convnet on the MNIST digits. We will reuse a lot of the code we have already covered in the MNIST example from Chapter 2. ``` from keras.datasets import mnist from keras.utils import to_categorical (train_images, train_labels), (test_images, test_labels) = mnist.load_data() train_images = train_images.reshape((60000, 28, 28, 1)) train_images = train_images.astype('float32') / 255 test_images = test_images.reshape((10000, 28, 28, 1)) test_images = test_images.astype('float32') / 255 train_labels = to_categorical(train_labels) test_labels = to_categorical(test_labels) model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) model.fit(train_images, train_labels, epochs=5, batch_size=64) ``` Let's evaluate the model on the test data: ``` test_loss, test_acc = model.evaluate(test_images, test_labels) test_acc ``` While our densely-connected network from Chapter 2 had a test accuracy of 97.8%, our basic convnet has a test accuracy of 99.3%: we decreased our error rate by 68% (relative). Not bad!
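The relative improvement quoted above can be checked with a couple of lines (a quick sanity check; the accuracy figures are the ones reported in the text):

```python
# error rates implied by the reported test accuracies
dense_err = 1 - 0.978    # densely-connected network from Chapter 2
convnet_err = 1 - 0.993  # the basic convnet above

# relative reduction in error rate
relative_reduction = (dense_err - convnet_err) / dense_err
print("relative error reduction: {:.0%}".format(relative_reduction))  # roughly 68%
```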
---
Programming Environment Extras === We use Ubuntu Linux as our main programming environment, for a number of reasons. This page covers a number of things we need to be aware of to help use and maintain a Linux computer. [Home](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/index.ipynb) Contents === - [Ubuntu Shortcuts](#Ubuntu-Shortcuts) - [Updating Ubuntu](#Updating-Ubuntu) - [Basic Terminal Commands](#Basic-Terminal-Commands) - [Terminal Shortcuts](#Terminal-Shortcuts) - [Installing Python 3.3 on Ubuntu 12.04](#Installing-Python-3.3-on-Ubuntu-12.04) - [Installing Geany](#Installing-Geany) Ubuntu Shortcuts === Knowing some simple shortcuts will help you do your programming work in Linux. Switch windows Alt-tab [top](#) Updating Ubuntu === Running updates is a good idea; it keeps your computer functioning appropriately. From the terminal --- By taking this class, you have some interest in working with computers in a powerful way. The terminal is the most powerful way to use a Linux computer, so let's jump in and use it whenever possible. Open a terminal using the keyboard shortcut Ctrl-Alt-T. Type the following command, and press enter: sudo apt-get update You will have to enter your password. Do this, and watch the information scroll by. Your computer is looking to find out which packages can be updated. Type the following command, and press enter: sudo apt-get dist-upgrade When you run this command, your computer downloads and installs all of the updates that were identified by the previous command.
[top](#) Basic Terminal Commands === Packages --- Install a package: sudo apt-get install package_name Uninstall a package, keeping its configuration files: sudo apt-get remove package_name Remove a package along with its configuration files: sudo apt-get remove --purge package_name Python --- Start a Python interpreter, using the default version of Python on your system: python Start a Python interpreter, using a specific version of Python: python3.3 Exit out of the Python interpreter: Ctrl-D [top](#) Terminal Shortcuts === There are a few things that will help you become comfortable using the terminal. Open a terminal: Ctrl-Alt-T Scroll through your previous terminal commands: up arrow, down arrow Clear output from your terminal screen. You will still be able to scroll up and see previous output. Ctrl-L [top](#) Installing Python 3.3 on Ubuntu 12.04 === Ubuntu is still using Python 2.7 by default. It is fairly easy to install Python 3.3, however, and then be able to choose the version of Python you want to use. Add the "deadsnakes" package archive (ppa) to your system. This archive has a number of older and newer versions of Python. Then install python3.3. sudo apt-get install python-software-properties sudo add-apt-repository ppa:fkrull/deadsnakes sudo apt-get update sudo apt-get install python3.3 Now that Python 3.3 is installed on your system, you have two options. You can start a default Python 2.7 session by running the command 'python' in a terminal. You can start a Python 3.3 session by running the command 'python3.3' in a terminal. [top](#) Installing Geany === Geany is one of the text editors we will use for writing programs. Geany is a simple editor, which makes it easy to run programs. Output is shown in a separate terminal window, which gets you used to working in terminals as well. Open a terminal. sudo apt-get install geany Press the Windows (Super) key, and type 'geany'.
Drag the geany icon to the task bar on the left side of the screen. This creates a shortcut you can use to start geany. Write a Hello World program, and save it as 'hello.py'. There are three ways you can run a program in Geany: - Build > Execute - Press F5 - Click the icon with three gears on it You should see a terminal window pop up, with your output in it. Configuring Geany to use Python3.3 --- Open Geany, and open a Python [*Hello World*](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/hello_world.ipynb) program. If you don't have one on your system, write one and save it as *hello.py*, and run the program. This makes sure that Geany is trying to run Python programs. When you have a running *hello.py* program, go to Build >> Set Build Commands. Under 'Python commands', look for the 'Compile' line. Enter the following in the 'Command' box. Make sure you get the spaces right. You should have 'python3.3' followed by a space, and the rest of the command. If you have 'python 3.3', with a space between *python* and *3.3*, Geany will not be able to run your code. python3.3 -m py_compile "%f" Under 'Execute commands', look for the 'Execute' line. Enter the following in the 'Command' box, paying attention once again to the spaces. python3.3 "%f" Test your setup with a hello.py file, using Python 3.3 syntax. [top](#) - - - [Home](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/index.ipynb)
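A minimal `hello.py` for testing the configuration above; printing the interpreter version first makes it obvious whether Geany ran your file with python3.3 or with the default Python 2.7 (the script is valid under both):

```python
import sys

# Report which interpreter is running, then greet.
version = "{}.{}".format(sys.version_info[0], sys.version_info[1])
greeting = "Hello, world!"
print("Running Python " + version)
print(greeting)
```

If the first line reports 2.7 when you expected 3.3, revisit the Build >> Set Build Commands settings described above.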
---
``` %pylab inline import pandas as pd df = pd.read_csv("../data/ChungCheonDC/CompositeETCdata.csv") df_DC = pd.read_csv("../data/ChungCheonDC/CompositeDCdata.csv") df_DCstd = pd.read_csv("../data/ChungCheonDC/CompositeDCstddata.csv") # missinginds = np.arange(df_DC[electrodeID[elecind]].values.size)[np.isnan(df_DC[electrodeID[elecind]].values)] electrodeID = df_DC.keys()[1:-1] ax1 = plt.subplot(111) ax1_1 = ax1.twinx() df.plot(figsize=(12,3), x='date', y='reservoirH', ax=ax1_1, color='k', linestyle='-', lw=2) df.plot(figsize=(12,3), x='date', y='upperH_med', ax=ax1_1, color='b', linestyle='-', lw=2) df.plot(figsize=(12,3), x='date', y='Temp (degree)', ax=ax1, color='r', linestyle='-', lw=2) ax1.legend(loc=3, bbox_to_anchor=(1.05, 0.7)) ax1_1.legend(loc=3, bbox_to_anchor=(1.05, 0.4)) itime_ref0 = 255 itime_ref1 = 115 ax1.plot(np.r_[itime_ref0, itime_ref0], np.r_[-5, 35], 'k-') ax1.plot(np.r_[itime_ref1, itime_ref1], np.r_[-5, 35], 'k-') print df['date'].values[itime_ref] print pd_reservoirH[2] from ipywidgets import interact, IntSlider, ToggleButtons itime = 93 itime_ref = 202 print df['date'].values[itime] elecind = [53, 110, 300] # vizDCtimeSeries(elecind, itime, itime_ref, ['k','b','r']) viz = lambda idatum, itime, flag: vizDCtimeSeries([idatum], itime, itime_ref, ['r'], flag) interact(viz, idatum=IntSlider(min=0, max=379, step=1, value=294)\ ,itime=IntSlider(min=0, max=360, step=1, value=200)\ ,flag=ToggleButtons(options=["std", "rho"])) ax1 = plt.subplot(111) ax1_1 = ax1.twinx() df_DC.plot(figsize=(12,3), x='date', y=electrodeID[elecind], ax=ax1, color=['k', 'b', 'r']) df.plot(figsize=(12,3), x='date', y='reservoirH', ax=ax1_1, color='k', linestyle='-', lw=2) ax1.legend(loc=3, bbox_to_anchor=(1.05, 0.7)) ax1_1.legend(loc=3, bbox_to_anchor=(1.05, 0.4)) ax1.set_yscale('linear') ax1 = plt.subplot(111) df_DCstd.plot(figsize=(12,3), x='date', y=electrodeID[elecind], ax=ax1, color=['k', 'b', 'r'], linestyle="-", marker='.', lw=1)
ax1.legend(loc=3, bbox_to_anchor=(1.05, 0.7)) sys.path.append("../codes/") from DCdata import readReservoirDC_all directory = "../data/ChungCheonDC/" dat_temp,height_temp, ID = readReservoirDC_all(directory+"20151231180000.apr") from scipy import interpolate locs = dat_temp[:,:4] mida = locs[:,:2].sum(axis=1) midb = locs[:,2:].sum(axis=1) mid = (mida + midb)*0.5 dz = mida-midb x = np.linspace(mid.min(), mid.max(), 100) z = np.linspace(dz.min(), dz.max(), 100) grid_x, grid_z = np.meshgrid(x,z) def vizDCtimeSeries(idatum, itime, itime_ref, colors, flag): fig = plt.figure(figsize = (12, 12)) ax1 = plt.subplot(411) ax2 = plt.subplot(412) valsratio = df_DC[electrodeID].values[itime,:].flatten() / df_DC[electrodeID].values[itime_ref,:].flatten() valsDC = np.log10(df_DC[electrodeID].values[itime,:].flatten()) valsDCstd = df_DCstd[electrodeID].values[itime,:].flatten() grid_rho_ratio = griddata(mid, dz, valsratio, grid_x, grid_z, interp='linear') grid_rho_ratio = grid_rho_ratio.reshape(grid_x.shape) if flag =="std": vmin, vmax = 0, 10 grid_rho = griddata(mid, dz, valsDCstd, grid_x, grid_z, interp='linear') elif flag =="rho": vmin, vmax = np.log10(20), np.log10(200) grid_rho = griddata(mid, dz, valsDC, grid_x, grid_z, interp='linear') grid_rho = grid_rho.reshape(grid_x.shape) ax1.contourf(grid_x, grid_z, grid_rho, 200, vmin =vmin, vmax = vmax, clim=(vmin, vmax), cmap="jet") vmin, vmax = 0.9, 1.1 ax2.contourf(grid_x, grid_z, grid_rho_ratio, 200, vmin =vmin, vmax = vmax, clim=(vmin, vmax), cmap="jet") ax1.scatter(mid, dz, s=20, c = valsDC, edgecolor="None", vmin =vmin, vmax = vmax, clim=(vmin, vmax)) ax1.plot(mid, dz, 'k.') ax2.scatter(mid, dz, s=20, c = valsratio, edgecolor="None", vmin =vmin, vmax = vmax, clim=(vmin, vmax)) ax2.plot(mid, dz, 'k.') for i in range(len(colors)): ax1.plot(mid[idatum[i]], dz[idatum[i]], 'o', color=colors[i]) ax2.plot(mid[idatum[i]], dz[idatum[i]], 'o', color=colors[i]) ax3 = plt.subplot(413) ax3_1 = ax3.twinx() df.plot(x='date', y='reservoirH', 
ax=ax3_1, color='k', linestyle='-', lw=2)
df.plot(x='date', y='upperH_med', ax=ax3_1, color='b', linestyle='-', lw=2)
df.plot(x='date', y='Temp (degree)', ax=ax3, color='r', linestyle='-', lw=2)
df.plot(x='date', y='Rainfall (mm)', ax=ax3, color='b', linestyle='-', marker="o", ms=4)
ax3.legend(loc=3, bbox_to_anchor=(1.05, 0.7))
ax3_1.legend(loc=3, bbox_to_anchor=(1.05, 0.4))

itime_ref0 = itime_ref
itime_ref1 = itime
ax3.plot(np.r_[itime_ref0, itime_ref0], np.r_[-5, 40], 'k--', lw=2)
ax3.plot(np.r_[itime_ref1, itime_ref1], np.r_[-5, 40], 'k--', lw=2)

ax4 = plt.subplot(414)
df_DC.plot(x='date', y=electrodeID[idatum], ax=ax4)
ax4.legend(loc=3, bbox_to_anchor=(1.05, 0.7))
ax4.set_yscale('log')

temp = df_DC[electrodeID[elecind]].values
vmax = np.median(temp[~np.isnan(temp)]) + np.std(temp[~np.isnan(temp)])*3
vmin = np.median(temp[~np.isnan(temp)]) - np.std(temp[~np.isnan(temp)])*3
ax4.plot(np.r_[itime_ref1, itime_ref1], np.r_[vmin, vmax], 'k--', lw=2)
ax4.plot(np.r_[itime_ref0, itime_ref0], np.r_[vmin, vmax], 'k--', lw=2)
ax4.set_ylim(vmin, vmax)

print(df_reservoirH)

import numpy as np

a = np.random.random((5, 3, 3))  # example of what real input will look like

# create 2D flattened version of 3D input array
d1, d2, d3 = a.shape
b = np.zeros([d1, d2*d3])
for i in range(len(a)):
    b[i] = a[i].flatten()
print("shape of 3D array: ", a.shape)
print("shape of flattened 2D array: ", b.shape, "\n")
print("flattened 2D array:\n", b, "\n")

# mean-center the flattened array
b -= np.mean(b, axis=0)

# calculate the covariance matrix of the flattened array
covar1 = np.cov(b, rowvar=0)  # this makes a 9x9 array
covar2 = np.dot(b, b.T)       # this makes a 5x5 array
print("covariance via numpy.cov:\n", covar1, "\n")
print("covariance via numpy.dot:\n", covar2, "\n")

# calculate eigenvalues and eigenvectors
eval1, evec1 = np.linalg.eig(covar1)
eval2, evec2 = np.linalg.eig(covar2)
print("eigenvalues via numpy.cov covariance matrix:\n", eval1, "\n")
print("eigenvectors via numpy.cov covariance matrix:\n", evec1, "\n")
print("eigenvalues via numpy.dot covariance matrix:\n", eval2, "\n")
print("eigenvectors via numpy.dot covariance matrix:\n", evec2, "\n")

import numpy as np

x = np.random.normal(size=25)
y = np.random.normal(size=25)
np.cov(x, y)
# example output:
# array([[ 0.77568388,  0.15568432],
#        [ 0.15568432,  0.73839014]])

import pylab
import random
import math

random.seed(1)

x1 = [1, 4, 7, 8]
y1 = [1, 3, 5, 7]
print("Mean of x is", pylab.mean(x1))
print("Sample variance of x is", pylab.var(x1, ddof=1))
print("Sample SD of x is", pylab.std(x1, ddof=1))
print("Mean of y is", pylab.mean(y1))
print("Sample variance of y is", pylab.var(y1, ddof=1))
print("Sample SD of y is", pylab.std(y1, ddof=1))
print("Correlation of X and Y is", pylab.corrcoef(x1, y1))

pylab.scatter(x1, y1, c="blue", marker="s")
pylab.xlabel("Variable X", size='x-large')
pylab.ylabel("Variable Y", size='x-large')
pylab.title("Scatter plot of two variables", size='x-large')
pylab.savefig("scatterXYExample.png")
pylab.show()

pylab.scatter(x1, y1, c="blue", marker="s")
pylab.xlabel("Variable X", size='x-large')
pylab.ylabel("Variable Y", size='x-large')
pylab.title("Scatter plot of two variables", size='x-large')
pylab.axhline(y=4)
pylab.axvline(x=5)
pylab.annotate('Mean of X = 5', xy=(5, 6), xycoords='data', xytext=(2, 6), size='large',
               arrowprops=dict(arrowstyle="->"), ha='center', va='center')
pylab.annotate('Mean of Y = 4', xy=(7, 4), xycoords='data', xytext=(7, 2), size='large',
               arrowprops=dict(arrowstyle="->"), ha='center', va='center')
pylab.savefig("scatterXYWithMeans.png")
pylab.show()

pylab.scatter(x1, y1, c="blue", marker="s")
pylab.xlabel("Variable X", size='x-large')
pylab.ylabel("Variable Y", size='x-large')
pylab.title("Scatter plot of two variables", size='x-large')
pylab.axhline(y=4)
pylab.axvline(x=5)
pylab.annotate('Mean of X = 5', xy=(5, 6), xycoords='data', xytext=(2, 6), size='large',
               arrowprops=dict(arrowstyle="->"), ha='center', va='center')
pylab.annotate('Mean of Y = 4', xy=(7, 4), xycoords='data', xytext=(7, 2), size='large',
               arrowprops=dict(arrowstyle="->"), ha='center', va='center')
pylab.annotate('(-4,-3)', xy=(1, 1), xycoords='data', xytext=(3, 1),
               arrowprops=dict(arrowstyle="->", shrinkA=8, shrinkB=8), ha='center', va='center')
pylab.annotate('(-1,-1)', xy=(4, 3), xycoords='data', xytext=(2, 3),
               arrowprops=dict(arrowstyle="->", shrinkA=8, shrinkB=8), ha='center', va='center')
pylab.annotate('(+2,+1)', xy=(7, 5), xycoords='data', xytext=(6, 5),
               arrowprops=dict(arrowstyle="->", shrinkA=8, shrinkB=8), ha='center', va='center')
pylab.annotate('(+3,+3)', xy=(8, 7), xycoords='data', xytext=(6, 7),
               arrowprops=dict(arrowstyle="->", shrinkA=8, shrinkB=8), ha='center', va='center')
pylab.savefig("scatterXYWithMeansAndDevs.png")
pylab.show()

sampleSize = 500
x2 = []
y2 = []
for i in range(sampleSize):
    x2.append(random.normalvariate(100, 10))
for i in range(sampleSize):
    y2.append(x2[i] + random.normalvariate(100, 10))
pylab.scatter(x2, y2, c="green", marker="o")
pylab.xlabel("Variable X", size='x-large')
pylab.ylabel("Variable Y", size='x-large')
pylab.title("Scatter plot of two variables", size='x-large')
pylab.savefig("scatterXYCorrelated.png")
pylab.show()
print("Correlation of X and Y is", pylab.corrcoef(x2, y2)[0, 1])

x3 = []
y3 = []
for i in range(sampleSize):
    if x2[i] > 95 and x2[i] < 105:
        x3.append(x2[i])
        y3.append(y2[i])
pylab.scatter(x3, y3, c="green", marker="o")
pylab.xlabel("Variable X", size='x-large')
pylab.ylabel("Variable Y", size='x-large')
pylab.title("Scatter plot of two variables, limited range", size='x-large')
pylab.savefig("scatterXYLimitedRange.png")
pylab.show()
print("Correlation of X and Y over limited range is", pylab.corrcoef(x3, y3)[0, 1])

## Calculate repeated correlation coefficients for samples of 50
sampleSize = 50
print("Sampling experiment")
for k in range(20):
    x2 = []
    y2 = []
    for i in range(sampleSize):
        x2.append(random.normalvariate(100, 10))
    for i in range(sampleSize):
        y2.append(x2[i] + random.normalvariate(100, 10))
    print(pylab.corrcoef(x2, y2)[0, 1])
```
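The two covariance computations above are related: with `b` mean-centered and of shape `(5, 9)`, `np.cov(b, rowvar=0)` is the 9×9 feature covariance (roughly `b.T @ b / (n - 1)`), while `np.dot(b, b.T)` is the unnormalized 5×5 Gram matrix. Their nonzero eigenvalues agree up to the `1/(n - 1)` factor, which gives a handy sanity check (a minimal sketch; the array size is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
b = rng.random((5, 9))
b -= b.mean(axis=0)              # mean-center, as in the example above

feat_cov = np.cov(b, rowvar=0)   # 9x9: b.T @ b / (n - 1) for centered b
gram = b @ b.T                   # 5x5: unnormalized Gram matrix

# Nonzero eigenvalues of b.T @ b and b @ b.T coincide, so the Gram
# spectrum divided by (n - 1) matches the covariance spectrum.
ev_cov = np.sort(np.linalg.eigvalsh(feat_cov))[::-1]
ev_gram = np.sort(np.linalg.eigvalsh(gram))[::-1] / (b.shape[0] - 1)

# Centering drops the rank to n - 1 = 4, so compare the top 4 eigenvalues.
print(np.allclose(ev_cov[:4], ev_gram[:4]))  # → True
```

This also explains the common PCA trick of diagonalizing the smaller of the two matrices when the number of samples and the number of features differ greatly.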
# Using Multiple Metrics in Environments

This notebook will go over how to record multiple metrics with HyperparameterHunter, how to interpret the results, and how to switch between them for hyperparameter optimization. As with most examples, we will start with preparing our data.

# 1. Format DataFrame

```
import pandas as pd
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()
train_df = pd.DataFrame(data.data, columns=[_.replace(" ", "_") for _ in data.feature_names])
train_df["diagnosis"] = data.target
```

# 2. Set Up Environment

Now we'll set up our `Environment`. If you've gone through the other examples, everything below should be pretty standard, except for the `metrics_map`. In most examples, we give `metrics_map` a single metric to record, but what if we just can't choose? Answer: give `Environment` a bunch of metrics in `metrics_map`!

Notice that we provide the individual metrics in a few different formats accepted and documented by `Environment`. First, near the top, we import `f1_score` from `sklearn.metrics`. Continuing to our `metrics_map`...

1. We start with the string "roc_auc_score", identifying the `sklearn.metrics` callable, and we name it **"roc_auc"**
2. We add our imported `f1_score`, and name it **"f1"**
3. We customize `f1_score` to use the `average="micro"` kwarg, and we name it **"f1_micro"**
4. We customize `f1_score` again, using the `average="macro"` kwarg this time, and we name it **"f1_macro"**

```
from hyperparameter_hunter import Environment, CVExperiment
from sklearn.metrics import f1_score

env = Environment(
    train_dataset=train_df,
    root_results_path="HyperparameterHunterAssets",
    target_column="diagnosis",
    metrics_map=dict(
        roc_auc="roc_auc_score",
        f1=f1_score,
        f1_micro=lambda y_true, y_pred: f1_score(y_true, y_pred, average="micro"),
        f1_macro=lambda y_true, y_pred: f1_score(y_true, y_pred, average="macro"),
    ),
    cross_validation_type="KFold",
    cross_validation_params=dict(n_splits=10, shuffle=True, random_state=42),
    verbose=1,
)
```

----

Now, any Experiments we execute will record all four of these metrics!

# 3. Perform Experiments

```
from lightgbm import LGBMClassifier

experiment_0 = CVExperiment(
    model_initializer=LGBMClassifier,
    model_init_params=dict(
        boosting_type="gbdt",
        max_depth=-1,
        min_child_samples=5,
        subsample=0.5,
        verbose=-1,
    ),
)
```

----

As we can see above, the final report for `experiment_0` shows all four metrics, each with different values. You may be wondering what happens when we perform hyperparameter optimization. Which of our metrics will be optimized? An excellent question! The answer is the first metric - unless we tell our optimizer otherwise. An example will better illustrate this.

# 4. Hyperparameter Optimization

We'll start by setting aside a `model_init_params` dict, so we can easily reuse it later. That's all - nothing sneaky going on there!
```
from hyperparameter_hunter import BayesianOptimization, Real, Integer, Categorical

OPT_MODEL_INIT_PARAMS = dict(
    boosting_type=Categorical(["gbdt", "dart"]),
    num_leaves=Integer(15, 45),
    max_depth=-1,
    min_child_samples=5,
    subsample=Real(0.4, 0.7),
    verbose=-1,
)

optimizer_0 = BayesianOptimization(iterations=2, random_state=32)
optimizer_0.set_experiment_guidelines(LGBMClassifier, OPT_MODEL_INIT_PARAMS)
optimizer_0.go()
```

----

Now, take note of the single saved experiment that was found by `optimizer_0`. It lists the experiment ID given to the `experiment_0` we performed above. Furthermore, `optimizer_0` lists the value of `experiment_0` as 0.95858. Therefore, we know that `optimizer_0` is using the "roc_auc" score as its `target_metric` to optimize, because that is the final "roc_auc" value reported by `experiment_0`.

# 5. Changing Target Metrics

Suppose we now want to perform additional rounds of `BayesianOptimization` using our "f1_micro" metric as the optimized `target_metric`, instead. We would need to start all over from scratch, right? WRONG! HyperparameterHunter recorded all four of the metrics we declared in `env` for all experiments executed during optimization, as well! Even better, telling HyperparameterHunter to switch `target_metric`s is easy! Here's how to do it:

```
optimizer_1 = BayesianOptimization(target_metric="f1_micro", iterations=2, random_state=32)
optimizer_1.set_experiment_guidelines(LGBMClassifier, OPT_MODEL_INIT_PARAMS)
optimizer_1.go()
```

----

The only difference between the code for `optimizer_1` and the code for `optimizer_0` before is the addition of `target_metric="f1_micro"`. That's all we have to do! Notice that, once again, we see `experiment_0` at the top of the saved experiments being learned from, and now it shows a value of 0.96485. With a quick scroll upwards, we can verify that is the "f1_micro" score originally reported by `experiment_0`.
We can also see two other saved experiments that were located, which are the two experiments produced by `optimizer_0`. Note that their values also differ from those reported by `optimizer_0`, because `target_metric="f1_micro"` now, instead of the inferred "roc_auc" default.

# 6. I Can't Make Up My Mind

What if we now decide that we actually want to optimize using our normal "f1" metric, instead of either "roc_auc" or "f1_micro"? Easy!

```
optimizer_2 = BayesianOptimization(target_metric="f1", iterations=2, random_state=32)
optimizer_2.set_experiment_guidelines(LGBMClassifier, OPT_MODEL_INIT_PARAMS)
optimizer_2.go()
```

---

Just like that, `optimizer_2` is reporting our "f1" scores! Let's finish by optimizing with the last of our four metrics.

```
optimizer_3 = BayesianOptimization(target_metric="f1_macro", iterations=2, random_state=32)
optimizer_3.set_experiment_guidelines(LGBMClassifier, OPT_MODEL_INIT_PARAMS)
optimizer_3.go()
```

# 7. Bonus Exercises

If you've been reading the documentation as you should be, you may have noticed that the `target_metric` argument of all children of `BaseOptimizationProtocol` is usually a tuple. The `BayesianOptimization` class we used above is just one of the descendants of `BaseOptimizationProtocol`, but we were passing string `target_metric` values. As the documentation notes, all `target_metric` values are cast to tuples, in which the first value identifies which dataset's evaluations should be used. The default behavior is to target the "oof", or out-of-fold, predictions' results. So, when we were using `target_metric="<string>"` in our examples above, our optimizer interpreted it as `target_metric=("oof", "<string>")`. This allows us to tell our optimizers to optimize metrics calculated using predictions on other datasets, like a holdout dataset. For example, had we initialized `Environment` with a `holdout_dataset`, our experiments would actually calculate 8 metrics instead of the 4 they currently do: 4 for our OOF predictions and 4 for our holdout predictions. Then, if we wanted to optimize using holdout evaluations, we could use `target_metric=("holdout", <metric_name>)`.
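The string-to-tuple casting described above can be sketched in plain Python (a simplified illustration of the documented behavior, not HyperparameterHunter's actual implementation; the function name is ours):

```python
def normalize_target_metric(target_metric):
    """Cast a target_metric spec to a (dataset, metric_name) tuple.

    Bare strings default to out-of-fold ("oof") evaluations, mirroring
    the default behavior described in the documentation above.
    """
    if isinstance(target_metric, str):
        return ("oof", target_metric)
    return tuple(target_metric)

print(normalize_target_metric("f1_micro"))         # → ('oof', 'f1_micro')
print(normalize_target_metric(("holdout", "f1")))  # → ('holdout', 'f1')
```

Thinking of `target_metric` in this normalized form makes it clear that switching between OOF and holdout optimization is just a matter of changing the first element of the tuple.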
```
from csv import reader
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os
import pickle

data_dir = '../data_new'
COPDGene_Freeze1_RNAseq_genes = os.path.join(data_dir, 'COPDGene_Freeze3_RNAseq_genes.csv')
COPDGene_Freeze1_RNAseq_genes_logCPM_covAdjusted = os.path.join(data_dir, 'COPDGene_Freeze3_RNAseq_genes_logCPM_covAdjusted.csv')
COPDGene_Freeze1_RNAseq_genes_logCPM_normalized = os.path.join(data_dir, 'COPDGene_Freeze3_RNAseq_genes_logCPM_normalized.csv')
COPDGene_Freeze1_RNAseq_exonicParts = os.path.join(data_dir, 'COPDGene_Freeze3_RNAseq_exonicParts.csv')
COPDGene_Freeze1_RNAseq_exonicParts_logCPM_covAdjusted = os.path.join(data_dir, 'COPDGene_Freeze3_RNAseq_exonicParts_logCPM_covAdjusted.csv')
COPDGene_Freeze1_RNAseq_exonicParts_logCPM_normalized = os.path.join(data_dir, 'COPDGene_Freeze3_RNAseq_exonicParts_logCPM_normalized.csv')
COPDGene_Freeze1_RNAseq_samples = os.path.join(data_dir, 'COPDGene_Freeze3_RNAseq_samples.csv')

data_dir2 = '../data_stranded2'
COPDGene_Freeze1_RNAseq_transcripts = os.path.join(data_dir2, 'COPDGene_Freeze1_RNAseq_transcripts.csv')
COPDGene_Freeze1_RNAseq_transcripts_logCPM_normalized = os.path.join(data_dir2, 'COPDGene_Freeze1_RNAseq_transcripts_logCPM_normalized.csv')

transcripts = pd.read_csv(COPDGene_Freeze1_RNAseq_transcripts)
transcripts_logCPM_normalized = pd.read_csv(COPDGene_Freeze1_RNAseq_transcripts_logCPM_normalized)
genes = pd.read_csv(COPDGene_Freeze1_RNAseq_genes)
genes_logCPM_covAdjusted = pd.read_csv(COPDGene_Freeze1_RNAseq_genes_logCPM_covAdjusted)
genes_logCPM_normalized = pd.read_csv(COPDGene_Freeze1_RNAseq_genes_logCPM_normalized)
exonicParts = pd.read_csv(COPDGene_Freeze1_RNAseq_exonicParts)
exonicParts_logCPM_covAdjusted = pd.read_csv(COPDGene_Freeze1_RNAseq_exonicParts_logCPM_covAdjusted)
exonicParts_logCPM_normalized = pd.read_csv(COPDGene_Freeze1_RNAseq_exonicParts_logCPM_normalized)
samples = pd.read_csv(COPDGene_Freeze1_RNAseq_samples)

# 31008 transcripts
transcripts.head(5)

transcripts_logCPM_normalized.head(5)

genes.head(5)
'''
rows: exonic parts/genes
columns: annotations on which genes, chromosomes, strand, start/end position on
chromosome, which transcript and exon the exonic parts belong to; also gene
length for genes
'''

genes_logCPM_covAdjusted.head(5)
'''
COPDGene_Freeze1_RNAseq_exonicParts/genes_logCPM_normalized.csv
[ logCPM expression values for exonic parts/genes ]
rows: exonic parts/genes
columns: samples (first column is exonic parts/genes id)
'''

genes_logCPM_normalized.head(5)
'''
COPDGene_Freeze1_RNAseq_exonicParts/genes_logCPM_covAdjusted.csv
[ logCPM expression values for exonic parts/genes adjusted for covariates/SV ]
rows: exonic parts/genes
columns: samples (first column is exonic parts/genes id)
'''

exonicParts.head(5)
'''
rows: exonic parts/genes
columns: annotations on which genes, chromosomes, strand, start/end position on
chromosome, which transcript and exon the exonic parts belong to; also gene
length for genes
'''

exonicParts_logCPM_covAdjusted.head(5)

exonicParts_logCPM_normalized.head(5)

samples.head(5)
'''
COPDGene_Freeze1_RNAseq_samples.csv [ annotation for samples ]
rows: samples
columns: annotations on samples regarding current smoking status, smoking amount,
age, race, gender, COPD phenotype, blood cell compositions (the 4 column names
ending with _pct), and a categorical variable for batch effect (first column is
sample id).
'''
```
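Since the expression matrices are described as "rows: genes, columns: samples" while the annotation table is "rows: samples", a quick alignment check before joining the two is worthwhile. A minimal sketch with toy stand-ins (the column names and values here are illustrative only, not from the real COPDGene files):

```python
import pandas as pd

# Toy stand-ins mimicking the described layout: one row per gene and one
# column per sample in the expression table; one row per sample in the
# annotation table. Names like "gene_id" / "sample_id" are assumptions.
expr = pd.DataFrame(
    {"gene_id": ["g1", "g2"], "S001": [0.5, 1.2], "S002": [0.9, 0.3]}
)
samples = pd.DataFrame({"sample_id": ["S001", "S002"], "age": [61, 57]})

# Sanity check: every expression column (past the id column) should be a
# known sample, so expression values and annotations can be joined safely.
expr_samples = list(expr.columns[1:])
print(set(expr_samples) == set(samples["sample_id"]))  # → True
```

Running the same check on the real tables would catch mismatched or reordered sample identifiers before any downstream analysis.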
# Parametric ML and Bayesian regression

    Notebook version: 1.2 (Sep 28, 2018)

    Authors: Miguel Lázaro Gredilla
             Jerónimo Arenas García (jarenas@tsc.uc3m.es)
             Jesús Cid Sueiro (jesus.cid@uc3m.es)

    Changes: v.1.0 - First version. Python version
             v.1.1 - Python 3 compatibility. ML section.
             v.1.2 - Revised content. 2D visualization removed.

    Pending changes:

```
# Import some libraries that will be necessary for working with data and displaying plots

# To visualize plots in the notebook
%matplotlib inline

import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import scipy.io       # To read matlab files
from scipy import spatial
import pylab

pylab.rcParams['figure.figsize'] = 8, 5
```

## 1. Introduction

In this exercise the student will review several key concepts of Maximum Likelihood and Bayesian regression. To do so, we will assume the regression model

$$s = f({\bf x}) + \varepsilon$$

where $s$ is the output corresponding to input ${\bf x}$, $f({\bf x})$ is an unobservable latent function, and $\varepsilon$ is white zero-mean Gaussian noise, i.e.,

$$\varepsilon \sim {\cal N}(0,\sigma_\varepsilon^2).$$

In addition, we will assume that the latent function is *linear in the parameters*

$$f({\bf x}) = {\bf w}^\top {\bf z}$$

where ${\bf z} = T({\bf x})$ is a possibly non-linear transformation of the input. Along this notebook, we will explore different types of transformations. Also, we will assume an <i>a priori</i> distribution for ${\bf w}$ given by

$${\bf w} \sim {\cal N}({\bf 0}, \sigma_p^2~{\bf I})$$

### Practical considerations

- Though sometimes unavoidable, it is recommended not to use explicit matrix inversion whenever possible.
For instance, if an operation like ${\mathbf A}^{-1} {\mathbf b}$ must be performed, it is preferable to code it using the python function $\mbox{numpy.linalg.lstsq}$ (see http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html), which provides the LS solution to the overdetermined system ${\mathbf A} {\mathbf w} = {\mathbf b}$.

- Sometimes, the computation of $\log|{\mathbf A}|$ (where ${\mathbf A}$ is a positive definite matrix) can overflow available precision, producing incorrect results. A numerically more stable alternative, providing the same result, is $2\sum_i \log([{\mathbf L}]_{ii})$, where $\mathbf L$ is the Cholesky decomposition of $\mathbf A$ (i.e., ${\mathbf A} = {\mathbf L}^\top {\mathbf L}$), and $[{\mathbf L}]_{ii}$ is the $i$th element of the diagonal of ${\mathbf L}$.

- Non-degenerate covariance matrices, such as the ones in this exercise, are always positive definite. It may happen, as a consequence of chained rounding errors, that a matrix which was mathematically expected to be positive definite turns out not to be so. This implies its Cholesky decomposition will not be available. A quick way to mitigate this problem is adding a small number (such as $10^{-6}$) to the diagonal of such a matrix.

### Reproducibility of computations

To guarantee the exact reproducibility of the experiments, it may be useful to start your code initializing the seed of the random numbers generator, so that you can compare your results with the ones given in this notebook.

```
np.random.seed(3)
```

## 2. Data generation with a linear model

During this section, we will assume the affine transformation

$${\bf z} = T({\bf x}) = (1, {\bf x}^\top)^\top.$$

The <i>a priori</i> distribution of ${\bf w}$ is assumed to be

$${\bf w} \sim {\cal N}({\bf 0}, \sigma_p^2~{\bf I})$$

### 2.1.
Synthetic data generation

First, we are going to generate synthetic data (so that we have the ground-truth model) and use them to make sure everything works correctly and our estimations are sensible.

* [1] Set parameters $\sigma_p^2 = 2$ and $\sigma_{\varepsilon}^2 = 0.2$. To do so, define variables `sigma_p` and `sigma_eps` containing the respective standard deviations.

```
# Parameter settings
# sigma_p = <FILL IN>
# sigma_eps = <FILL IN>
```

* [2] Generate a weight vector $\mbox{true_w}$ with two elements from the <i>a priori</i> distribution of the weights. This vector determines the regression line that we want to find (i.e., the optimum unknown solution).

```
# Data dimension:
dim_x = 2

# Generate a parameter vector taking a random sample from the prior distribution
# (the np.random module may be useful for this purpose)
# true_w = <FILL IN>

print('The true parameter vector is:')
print(true_w)
```

* [3] Generate an input matrix ${\bf X}$ (in this case, a single column) containing 20 samples with equally spaced values between 0 and 2 (method `linspace` from numpy can be useful for this)

```
# <SOL>
# </SOL>
```

* [4] Finally, generate the output vector ${\mbox s}$ as the product $\mbox{Z} \ast \mbox{true_w}$ plus Gaussian noise of pdf ${\cal N}(0,\sigma_\varepsilon^2)$ at each element.

```
# Expand input matrix with an all-ones column
col_1 = np.ones((n_points, 1))
# Z = <FILL IN>

# Generate values of the target variable
# s = <FILL IN>
```

### 2.2. Data visualization

* Plot the generated data. You will notice a linear behavior, but the presence of noise makes it hard to estimate precisely the original straight line that generated them (which is stored in $\mbox{true_w}$).

```
# <SOL>
# </SOL>
```

## 3. Maximum Likelihood (ML) regression

### 3.1.
Likelihood function

* [1] Define a function `predict(we, Z)` that computes the linear predictions for all inputs in data matrix `Z` (a 2-D numpy array), for a given parameter vector `we` (a 1-D numpy array). The output should be a 1-D array. Test your function with the given dataset and `we = [0.4, 0.7]`

```
# <SOL>
# </SOL>

# Print predictions
print(p)
```

* [2] Define a function `sse(we, Z, s)` that computes the sum of squared errors (SSE) for the linear prediction with parameters `we` (1-D numpy array), inputs `Z` (2-D numpy array) and targets `s` (1-D numpy array). Using this function, compute the SSE of the true parameter vector in `true_w`.

```
# <SOL>
# </SOL>

print(" The SSE is: {0}".format(SSE))
```

* [3] Define a function `likelihood(we, Z, s, sigma_eps)` that computes the likelihood of parameter vector `we` for a given dataset in matrix `Z` and vector `s`, assuming Gaussian noise with variance $\sigma_\epsilon^2$. Note that this function can use the `sse` function defined above. Using this function, compute the likelihood of the true parameter vector in `true_w`.

```
# <SOL>
# </SOL>

print("The likelihood of the true parameter vector is {0}".format(L_w_true))
```

* [4] Define a function `LL(we, Z, s, sigma_eps)` that computes the log-likelihood of parameter vector `we` for a given dataset in matrix `Z` and vector `s`. Note that this function can use the `likelihood` function defined above. However, for higher numerical precision, implementing a direct expression for the log-likelihood is recommended. Using this function, compute the log-likelihood of the true parameter vector in `true_w`.

```
# <SOL>
# </SOL>

print("The log-likelihood of the true parameter vector is {0}".format(LL_w_true))
```

### 3.2. ML estimate

* [1] Compute the ML estimate of $w_e$ given the data.

```
# <SOL>
# </SOL>

print(w_ML)
```

* [2] Compute the maximum likelihood, and the maximum log-likelihood.
```
# <SOL>
# </SOL>

print('Maximum likelihood: {0}'.format(L_w_ML))
print('Maximum log-likelihood: {0}'.format(LL_w_ML))
```

Just as an illustration, the code below generates a set of points in a two dimensional grid going from $(-2.5\sigma_p, -2.5\sigma_p)$ to $(2.5\sigma_p, 2.5\sigma_p)$, computes the log-likelihood for all these points and visualizes them using a 2-dimensional plot. You can see the difference between the true value of the parameter ${\bf w}$ (black) and the ML estimate (red).

```
# First construct a grid of (theta0, theta1) parameter pairs and their
# corresponding cost function values.
N = 200    # Number of points along each dimension.
w0_grid = np.linspace(-2.5*sigma_p, 2.5*sigma_p, N)
w1_grid = np.linspace(-2.5*sigma_p, 2.5*sigma_p, N)

Lw = np.zeros((N, N))
# Fill Lw with the likelihood values
for i, w0i in enumerate(w0_grid):
    for j, w1j in enumerate(w1_grid):
        we = np.array((w0i, w1j))
        Lw[i, j] = LL(we, Z, s, sigma_eps)

WW0, WW1 = np.meshgrid(w0_grid, w1_grid, indexing='ij')
contours = plt.contour(WW0, WW1, Lw, 20)
plt.clabel(contours)
plt.scatter([true_w[0]]*2, [true_w[1]]*2, s=[50, 10], color=['k', 'w'])
plt.scatter([w_ML[0]]*2, [w_ML[1]]*2, s=[50, 10], color=['r', 'w'])
plt.xlabel('$w_0$')
plt.ylabel('$w_1$')
plt.show()
```

### 3.3. Convergence of the ML estimate for the true model

Note that the likelihood of the true parameter vector is, in general, smaller than that of the ML estimate. However, as the sample size increases, both should converge to the same value.

* [1] Generate a longer dataset, with $K_\text{max}=2^{16}$ samples, uniformly spaced between 0 and 2. Store it in the 2D-array `X2` and the 1D-array `s2`

```
# Parameter settings
x_min = 0
x_max = 2
n_points = 2**16

# <SOL>
# </SOL>
```

* [2] Compute the ML estimate based on the first $2^k$ samples, for $k=2,3,\ldots, 16$. For each value of $k$ compute the squared Euclidean distance between the true parameter vector and the ML estimate.
Represent it graphically (using a logarithmic scale in the y-axis).

```
# <SOL>
# </SOL>
```

## 4. ML estimation with real data. The stocks dataset.

Once our code has been tested on synthetic data, we will use it with real data.

### 4.1. Dataset

* [1] Load data corresponding to the evolution of the stocks of 10 airline companies. This data set is an adaptation of the Stock dataset from http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html, which in turn was taken from the StatLib Repository, http://lib.stat.cmu.edu/

```
# <SOL>
# </SOL>
```

* [2] Normalize the data so all training sample components have zero mean and unit standard deviation. Store the normalized training and test samples in 2D numpy arrays `Xtrain` and `Xtest`, respectively.

```
# <SOL>
# </SOL>
```

### 4.2. Polynomial ML regression with a single variable

In this first part, we will work with the first component of the input only.

* [1] Take the first column of `Xtrain` and `Xtest` into arrays `X0train` and `X0test`, respectively.

* [2] Visualize, in a single scatter plot, the target variable (in the vertical axis) versus the input variable.

* [3] Since the data have been taken from a real scenario, we do not have any *true* mathematical model of the process that generated the data. Thus, we will explore different models, trying to take the one that best fits the training data. Assume a polynomial model given by
$$
{\bf z} = T({\bf x}) = (1, x_0, x_0^2, \ldots, x_0^{g-1})^\top.
$$
Build a method `Ztrain, Ztest = T_poly(X0train, X0test, g)` that, for a given value of $g$, computes normalized data matrices `Ztrain` and `Ztest` that result from applying the polynomial transformation to the inputs in `X0train` and `X0test`. Note that, although `X0train` and `X0test` were normalized, you will need to re-normalize the transformed variables.

```
# <SOL>
# </SOL>
```

* [4] Fit a polynomial model with degree $g$ for $g$ ranging from 0 to 10.
Store the weights of all models in a list of weight vectors, named `models`, such that `models[g]` returns the parameters estimated for the polynomial model with degree $g$. We will use these models in the following sections.

* [5] Plot the polynomial models with degrees 1, 3 and 10, superimposed over a scatter plot of the training data (in blue) and the test data (in red).

* [6] Show, in the same plot:

    - The log-likelihood function corresponding to each model, as a function of $g$, computed over the training set.
    - The log-likelihood function corresponding to each model, as a function of $g$, computed over the test set.

* [7] [OPTIONAL] You may have seen that the likelihood function grows with the degree of the polynomial. However, large values of $g$ produce strong overfitting of the data. For this reason, $g$ cannot be selected with the same data used to fit the model. Parameters like $g$ are usually called *hyperparameters* and need to be selected by cross validation. Another hyperparameter is $\sigma_\varepsilon^2$. Plot the log-likelihood function corresponding to the polynomial model with degree 3 for different values of $\sigma_\varepsilon^2$, for the training set and the test set. What would be the optimal value of this hyperparameter according to the training set? In any case, note that the model coefficients do not depend on $\sigma_\varepsilon^2$, so we do not need to estimate its value for ML regression.

* [8] Select the optimal value of $g$ by cross-validation. To do so, the cross validation methods provided by sklearn will simplify this task.

* [9] For the selected model:

    - Plot the regression function over the scatter plot of the data.
    - Compute the log-likelihood and the SSE over the test set.

## 5. Bayesian regression. The stock dataset.

In this section we will keep using the first component of the data from the stock dataset, assuming the same kind of polynomial model. We will explore the potential advantages of using a Bayesian model.
To do so, we will assume that the <i>a priori</i> distribution of ${\bf w}$ is

$${\bf w} \sim {\cal N}({\bf 0}, \sigma_p^2~{\bf I})$$

### 5.1. Posterior pdf of the weight vector

In this section we will visualize the prior and the posterior distribution functions. First, we will restore the dataset from the beginning of this notebook:

* [1] Define a function `posterior_stats(Z, s, sigma_eps, sigma_p)` that computes the parameters of the posterior coefficient distribution given the dataset in matrix `Z` and vector `s`, for given values of the hyperparameters. This function should return the posterior mean, the covariance matrix and the precision matrix (the inverse of the covariance matrix). Test the function on the given dataset, for $g=3$.

```
# <SOL>
# </SOL>

mean_w, Cov_w, iCov_w = posterior_stats(Z, s, sigma_eps, sigma_p)

print('true_w = {0}'.format(true_w))
print('mean_w = {0}'.format(mean_w))
print('Cov_w = {0}'.format(Cov_w))
print('iCov_w = {0}'.format(iCov_w))
```

* [2] Define a function `gauss_pdf(we, mean_w, iCov_w)` that computes the Gaussian pdf with mean `mean_w` and precision matrix `iCov_w`. Use this function to compute and compare the posterior pdf value of the true coefficients, the ML estimate and the MSE estimate, given the dataset.

```
# <SOL>
# </SOL>

print('p(true_w | s) = {0}'.format(gauss_pdf(true_w, mean_w, iCov_w)))
print('p(w_ML | s) = {0}'.format(gauss_pdf(w_ML, mean_w, iCov_w)))
print('p(w_MSE | s) = {0}'.format(gauss_pdf(mean_w, mean_w, iCov_w)))
```

* [3] Define a function `log_gauss_pdf(we, mean_w, iCov_w)` that computes the log of the Gaussian pdf with mean `mean_w` and precision matrix `iCov_w`. Use this function to compute and compare the log of the posterior pdf value of the true coefficients, the ML estimate and the MSE estimate, given the dataset.

### 5.2. Hyperparameter selection

Since the values $\sigma_p$ and $\sigma_\varepsilon$ are no longer known, a first rough estimation is needed (we will soon see how to estimate these values in a principled way). To see their influence, assume $g=3$ and plot the regression function for different values of $\sigma_p$. To this end, we will adjust them using the LS solution to the regression problem:

- $\sigma_p^2$ will be taken as the average of the square values of ${\hat {\bf w}}_{LS}$
- $\sigma_\varepsilon^2$ will be taken as two times the average of the square of the residuals when using ${\hat {\bf w}}_{LS}$

```
# w_LS, residuals, rank, s = <FILL IN>
# sigma_p = <FILL IN>
# sigma_eps = <FILL IN>

print(sigma_eps)
```

### 5.3. Sampling regression curves from the posterior

In this section we will plot the functions corresponding to different samples drawn from the posterior distribution of the weight vector. To this end, we will first generate an input dataset of equally spaced samples, and we will compute the functions at these points.

```
# Definition of the interval for representation purposes
x2_min = -1
x2_max = 3
n_points = 100

# Build the input data matrix:
# Input values for representation of the regression curves
X2 = np.linspace(x2_min, x2_max, n_points)
col_1 = np.ones((n_points,))
X2e = np.vstack((col_1, X2)).T
```

Generate random vectors ${\bf w}_l$ with $l = 1,\dots, 50$, from the posterior density of the weights, $p({\bf w}\mid{\bf s})$, and use them to generate 50 polynomial regression functions, $f({\bf x}^\ast) = {{\bf z}^\ast}^\top {\bf w}_l$, with ${\bf x}^\ast$ between $-1.2$ and $1.2$, with step $0.1$.
Plot the line corresponding to the model with the posterior mean parameters, along with the $50$ generated straight lines and the original samples, all in the same plot. As you can check, the Bayesian model is not providing a single answer, but instead a density over them, from which we have extracted 50 options.

```
# Drawing weights from the posterior
# First, compute the Cholesky decomposition of the covariance matrix
# L = <FILL IN>

for l in range(50):
    # Generate a random sample from the posterior distribution
    # w_l = <FILL IN>

    # Compute predictions for the inputs in the data matrix
    # p_l = <FILL IN>

    # Plot prediction function
    # plt.plot(<FILL IN>, 'c:');

# Compute predictions for the inputs in the data matrix and using the true model
# p_truew = <FILL IN>

# Plot the true model
plt.plot(X2, p_truew, 'b', label='True model', linewidth=2);

# Plot the training points
plt.plot(X, s, 'r.', markersize=12);
plt.xlim((x2_min, x2_max));
plt.legend(loc='best')
plt.xlabel('$x$', fontsize=14);
plt.ylabel('$s$', fontsize=14);
```

### 5.4. Plotting the confidence intervals

On top of the previous figure (copy here your code from the previous section), plot functions

$${\mathbb E}\left\{f({\bf x}^\ast)\mid{\bf s}\right\}$$

and

$${\mathbb E}\left\{f({\bf x}^\ast)\mid{\bf s}\right\} \pm 2 \sqrt{{\mathbb V}\left\{f({\bf x}^\ast)\mid{\bf s}\right\}}$$

(i.e., the posterior mean of $f({\bf x}^\ast)$, as well as two standard deviations above and below). It is possible to show analytically that this region comprises $95.45\%$ probability of the posterior probability $p(f({\bf x}^\ast)\mid {\bf s})$ at each ${\bf x}^\ast$.

```
# Note that you can re-use code from sect. 4.2 to solve this exercise
# Plot sample functions from the posterior, and the training points
# <SOL>
# </SOL>

# Plot the posterior mean.
# mean_ast = <FILL IN>
plt.plot(X2, mean_ast, 'm', label='Predictive mean', linewidth=2);

# Plot the posterior mean \pm 2 std
# std_ast = <FILL IN>
# plt.plot(<FILL IN>, 'm--', label='Predictive mean $\pm$ 2std', linewidth=2);
# plt.plot(<FILL IN>, 'm--', linewidth=3);

plt.legend(loc='best')
plt.xlabel('$x$',fontsize=14);
plt.ylabel('$s$',fontsize=14);
```

Plot now ${\mathbb E}\left\{s({\bf x}^\ast)\mid{\bf s}\right\} \pm 2 \sqrt{{\mathbb V}\left\{s({\bf x}^\ast)\mid{\bf s}\right\}}$ (note that the posterior means of $f({\bf x}^\ast)$ and $s({\bf x}^\ast)$ are the same, so there is no need to plot it again). Notice that $95.45\%$ of observed data lie now within the newly designated region. These new limits establish a confidence range for our predictions. See how the uncertainty grows as we move away from the interpolation region to the extrapolation areas.

```
# Plot sample functions, confidence intervals and sampling points
# Note that you can simply copy and paste most of the code used in the cell above.

# <SOL>
# </SOL>

# Compute the standard deviations for s and plot the confidence intervals
# <SOL>
# </SOL>

plt.legend(loc='best')
plt.xlabel('$x$',fontsize=14);
plt.ylabel('$s$',fontsize=14);
```

### 5.4. Model assessment [OPTIONAL. You can skip this section]

In order to verify the performance of the resulting model, compute the posterior mean and variance of each of the test outputs from the posterior over ${\bf w}$. I.e., compute ${\mathbb E}\left\{s({\bf x}^\ast)\mid{\bf s}\right\}$ and $\sqrt{{\mathbb V}\left\{s({\bf x}^\ast)\mid{\bf s}\right\}}$ for each test sample ${\bf x}^\ast$ contained in each row of `Xtest`. Be sure not to use the outputs `Ytest` at any point during this process.

Store the predictive mean and variance of all test samples in two column vectors called `m_y` and `v_y`, respectively.
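Before filling in the solution cell, it may help to see the shape of this computation on synthetic data. The sketch below is only an illustration under assumed names (`mean_w`, `Cov_w`, `Ztest` are placeholders, not variables defined in this notebook): for a Gaussian posterior over the weights with mean `mean_w` and covariance `Cov_w`, the predictive mean of each test output is ${\bf z}^{\ast\top}\bar{\bf w}$ and the predictive variance adds the noise variance $\sigma_\varepsilon^2$ to the quadratic form ${\bf z}^{\ast\top}{\bf \Sigma}_w{\bf z}^\ast$.

```python
import numpy as np

# Synthetic stand-in data; a Gaussian prior/likelihood is assumed throughout
rng = np.random.default_rng(0)
n, d = 20, 2
Z = rng.normal(size=(n, d))                      # training design matrix
s = Z @ np.array([1.0, -0.5]) + 0.3 * rng.normal(size=n)
sigma_p2, sigma_eps2 = 1.0, 0.09                 # prior and noise variances

# Posterior over the weights (standard Bayesian least squares)
Cov_w = np.linalg.inv(Z.T @ Z / sigma_eps2 + np.eye(d) / sigma_p2)
mean_w = Cov_w @ Z.T @ s / sigma_eps2

# Predictive mean and variance of s* = z*^T w + eps for each test row
Ztest = rng.normal(size=(5, d))
m_y = Ztest @ mean_w
v_y = np.sum((Ztest @ Cov_w) * Ztest, axis=1) + sigma_eps2
```

The same two lines at the end are what `m_y` and `v_y` should contain for `Xtest`, once the design matrix and posterior of this notebook are substituted in.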
``` # <SOL> # </SOL> ``` Compute now the mean square error (MSE) and the negative log-predictive density (NLPD) with the following code: ``` # <SOL> # </SOL> ``` Results should be: ``` print('MSE = {0}'.format(MSE)) print('NLPD = {0}'.format(NLPD)) ``` These two measures reveal the quality of our predictor (with lower values revealing higher quality). The first measure (MSE) only compares the predictive mean with the actual value and always has a positive value (if zero was reached, it would mean a perfect prediction). It does not take into account predictive variance. The second measure (NLPD) takes into account both the deviation and the predictive variance (uncertainty) to measure the quality of the probabilistic prediction (a high error in a prediction that was already known to have high variance has a smaller penalty, but also, announcing a high variance when the prediction error is small won’t award such a good score). ## 6. Regression with all variables from the stocks dataset. Try to improve the performance of the best model used so far. To do so: * Explore the use of all the input variables from the dataset. * Explore other regression algorithms from the `sklearn` library.
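As a starting point for this exploration, here is a minimal sketch comparing two `sklearn` linear models by cross-validated mean squared error. The data below is synthetic (the stocks dataset itself is not bundled here), so the numbers are only illustrative; swap in the real input matrix and targets to compare models on the actual task.

```python
import numpy as np
from sklearn.linear_model import Ridge, BayesianRidge
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the stocks data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

results = {}
for model in (Ridge(alpha=1.0), BayesianRidge()):
    scores = cross_val_score(model, X, y, cv=5,
                             scoring='neg_mean_squared_error')
    results[type(model).__name__] = -scores.mean()  # mean MSE across folds
print(results)
```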
github_jupyter
``` import pandas as pd from fractions import Fraction import nltk from nltk import FreqDist #Corpus Ram = ['I wish you the best', 'I hope to reach home by 6 P M', 'I wish to go home early', 'I do not want to buy this', 'I hope it rains today'] Raj = ['I hope to play tennis tonight', 'I hope to win this tournament', 'I hope to buy this car in the next year', 'I wish to get a good score this time', 'I wish they would come'] #Calculate number of words in Ram, Raj and calculate total words ramWords = [] for i in range(0,len(Ram)): #Split the strings based on blankspace sen = Ram[i].split(' ') #Extend the list by adding ramWords.extend(sen) print("Number of words in Ram: ", len(ramWords)) rajWords = [] for i in range(0,len(Raj)): #Split the strings based on blankspace sen = Raj[i].split(' ') #Extend the list by adding rajWords.extend(sen) print("Number of words in Raj: ", len(rajWords)) totWords = len(ramWords) + len(rajWords) print("Total words in both the corpus: ", totWords) uniqRamWords = list(set(ramWords)) uniqRajWords = list(set(rajWords)) UniqWords = uniqRamWords + uniqRajWords ttlUniqWords = set(UniqWords) print("Vocabulary of ram corpus: ", len(uniqRamWords)) print("Vocabulary of raj corpus: ", len(uniqRajWords)) print("Vocabulary of combined corpus: ", len(ttlUniqWords)) #Store the frequency distribution of words in the respective corpus as a dictionary fDistRam = dict(nltk.FreqDist(ramWords)) fDistRaj = dict(nltk.FreqDist(rajWords)) print("Frequency of words in Ram Corpus\n", fDistRam) print("Frequency of words in Raj Corpus\n", fDistRaj) #Calculate P(X1|y) = Count(X1,y)/Count(Y) #y are class labels (Ram or Raj) #X1 are words (I, wish, hope etc.) 
#Y is the total number of words in both the corpus (ie) 68

#Define a function to calculate probability and store result as a fraction
probRam = {}
probRaj = {}

def probRamXY(w1):
    probRam[w1] = 0
    for key, value in fDistRam.items():
        if w1 == key:   #exact match; a substring test would wrongly match e.g. 'to' inside 'tonight'
            probRam[w1] = Fraction(value,totWords)
    return probRam[w1]

def probRajXY(w1):
    probRaj[w1] = 0
    for key, value in fDistRaj.items():
        if w1 == key:
            probRaj[w1] = Fraction(value,totWords)
    return probRaj[w1]

probRajXY('hope')
probRamXY('I')

#Calculate P(X1|y) for all unique words in Ram and Raj corpus and store it in a list
prRam = {}
prRaj = {}

allWords = ramWords + rajWords
print("Total number of words in the combined corpus: ", len(allWords))

uniqWords = set(allWords)
print("\nUnique words in the combined corpus: ", len(uniqWords))

for words in uniqWords:
    prRam[words] = probRamXY(words)
    prRaj[words] = probRajXY(words)

print("\nProbabilities of words in Ram corpus: \n", prRam)
print("\n\nLength of words for which probability calculated in Ram corpus: ", len(prRam))
print("\nProbabilities of words in Raj corpus: \n", prRaj)
print("\n\nLength of words for which probability calculated in Raj corpus: ", len(prRaj))

#Prior probability P(y) = count(y)/count(Y).
#As there are only two classes it is 1/2
PrProb = Fraction(1,2)
print("Prior probability :", PrProb)

#Guess who wrote the sentence "I wish you would come"
#For Ram Corpus
def bRam(w1,w2,w3,w4,w5):
    lstVal = []
    for key, value in prRam.items():
        if key == w1:
            lstVal.append(value)
        if key == w2:
            lstVal.append(value)
        if key == w3:
            lstVal.append(value)
        if key == w4:
            lstVal.append(value)
        if key == w5:
            lstVal.append(value)
    finProb = 1
    for i in range(len(lstVal)):
        finProb = finProb*lstVal[i]
    print("Baye's Probability from Ram Corpus is: ", PrProb*finProb)
    return lstVal

bRam('I','wish','you','would','come')

def bRaj(w1,w2,w3,w4,w5):
    lstVal = []
    for key, value in prRaj.items():
        if key == w1:
            lstVal.append(value)
        if key == w2:
            lstVal.append(value)
        if key == w3:
            lstVal.append(value)
        if key == w4:
            lstVal.append(value)
        if key == w5:
            lstVal.append(value)
    #print(any(x == 0 for x in lstVal))
    finProb = 1
    for i in range(len(lstVal)):
        finProb = finProb*lstVal[i]
    print("Baye's Probability from Raj Corpus is: ", PrProb*finProb)
    return lstVal

bRaj('I','wish','you','would','come')

#Both probabilities are zero.
#Hence add 1 to each of the words in the numerator only
#Get the keys of Ram corpus for which the value is zero and store the keys separately
keyRam0 = []
keyRaj0 = []
for k, v in prRam.items():
    if v == 0:
        keyRam0.append(k)
for k, v in prRaj.items():
    if v == 0:
        keyRaj0.append(k)
#print(keyRam0)
#print("Number of words in combined corpus but not in Ram corpus: ", len(keyRam0))
#print(keyRaj0)
#print("Number of words in combined corpus but not in Raj corpus: ", len(keyRaj0))

#Increase numerator values by 1 in the respective dictionary
def upProbRamXY(w1):
    probRam[w1] = Fraction(1,totWords)   #totWords is 68 here; avoid hardcoding the count
    for key, value in fDistRam.items():
        if w1 == key:   #exact match, not a substring test
            probRam[w1] = Fraction(value+1,totWords)
    return probRam[w1]

def upProbRajXY(w1):
    probRaj[w1] = Fraction(1,totWords)
    for key, value in fDistRaj.items():
        if w1 == key:
            probRaj[w1] = Fraction(value+1,totWords)
    return probRaj[w1]

#print("Probability of missing word car in Ram corpus", upProbRamXY('car'))
#print("Probability of missing word home in Raj corpus",upProbRajXY('home'))
#print("Original Probability of present word I in Ram corpus", probRamXY('I'))
#print("Updated Probability of present word I in Ram corpus", upProbRamXY('I'))
#print("Original Probability of present word I in Raj corpus", probRajXY('I'))
#print("Updated Probability of present word I in Raj corpus", upProbRajXY('I'))

#update P(X1|y) for all unique words in Ram and Raj corpus and store it in a list
uprRam = {}
uprRaj = {}
for words in uniqWords:
    uprRam[words] = upProbRamXY(words)
    uprRaj[words] = upProbRajXY(words)
#print("\nUpdated Probabilities of words in Ram corpus: \n", uprRam)
#print("\n\nUpdated number of words for which probability calculated in Ram corpus: ", len(uprRam))
#print("\nUpdated Probabilities of words in Raj corpus: \n", uprRaj)
#print("\n\nUpdated number of words for which probability calculated in Raj corpus: ", len(uprRaj))

def ubRam(w1,w2,w3,w4,w5):
    lstVal = []
    for key, value in uprRam.items():
        if key == w1:
            lstVal.append(value)
        if key == w2:
lstVal.append(value) if key == w3: lstVal.append(value) if key == w4: lstVal.append(value) if key == w5: lstVal.append(value) finProb = 1 for i in range(len(lstVal)): finProb = finProb*lstVal[i] print("Baye's Probability from revised Ram Corpus is: ", PrProb*finProb) return finProb def ubRaj(w1,w2,w3,w4,w5): lstVal = [] for key, value in uprRaj.items(): if key == w1: lstVal.append(value) if key == w2: lstVal.append(value) if key == w3: lstVal.append(value) if key == w4: lstVal.append(value) if key == w5: lstVal.append(value) finProb = 1 for i in range(len(lstVal)): finProb = finProb*lstVal[i] print("Baye's Probability from revised Raj Corpus is: ", PrProb*finProb) return float(finProb) #print(bRam('I','wish','you','would','come')) #print(bRaj('I','wish','you','would','come')) valUpdatedRam = ubRam('I','wish','you','would','come') valUpdatedRaj = ubRaj('I','wish','you','would','come') print("Ram sent the mail") if valUpdatedRam > valUpdatedRaj else print("Raj sent the mail") #Find the sender of the email - Ram or Raj #A new mail arrives with just three words - motivate, profit and product #Historical information provided import pandas as pd data = [['motivate',0.24,0.05],['profit',0.3,0.35],['product',0.26,0.35],['leadership',0.08,0.15],['operations',0.12,0.10]] df = pd.DataFrame(data, columns = ['Word','Ram','Raj']) df.set_index('Word', inplace = True) print(df) #Create a wordlist for search words and calculate Bayesian probability for Ram and Raj #Max value of Bayesian product will be the sender of the email wordList = ['motivate', 'profit', 'product'] probRam = 1 probRaj = 1 for i in wordList: valRam = df.loc[i,'Ram'] valRaj = df.loc[i,'Raj'] probRam = valRam*probRam probRaj = valRaj*probRaj print("Probability mail sent by Ram is: ", probRam) print("Probability mail sent by Raj is: ", probRaj) print("Mail sent by Ram") if probRam > probRaj else print("Mail sent by Raj") #Product sentiments #Assume the following likelihood for each word being part of positive or 
#negative review
#Equal prior probabilities for each class (P(positive) = 0.5 and P(negative) = 0.5)
#What class Naive Bayes classifier would assign to the sentence "I do not like to fill in the application form"

data2 = [['I',0.09,0.16],['love',0.07,0.06],['to',0.05,0.07],['fill',0.29,0.06],
         ['credit',0.04,0.15],['card',0.08,0.11],['application',0.06,0.04]]
df2 = pd.DataFrame(data2, columns=['Word','Positive','Negative'])
#df2.set_index('Word', inplace = True)
print(df2)

words = ['I','do','not','like','to','fill','in','the','application','form']
#Out of vocab words are: do, not, like, in, the, form (six words)

#Create two separate empty lists and populate them with matched vocabulary and out of vocabulary words
wordsMatch = []
wordsNoMatch = []
for i in words:
    if (df2['Word'] == i).any():
        wordsMatch.append(i)
    else:
        wordsNoMatch.append(i)

#print("List of matched words: ", wordsMatch)
#print("List of out of vocabulary words: ", wordsNoMatch)
#print("Total number of words: ", len(words))
#print("Number of matched words: ", len(wordsMatch))
#print("Number of out of vocabulary words: ", len(wordsNoMatch))

#Subset df2 containing matched words into a new dataframe newDF
newDF = pd.DataFrame(columns=['Word','Positive','Negative'])
for i in wordsMatch:
    if (df2['Word'].str.contains(i)).any():
        #DataFrame.append was removed in pandas 2.0; use pd.concat instead
        newDF = pd.concat([newDF, df2.loc[df2['Word'] == i]])
        #print(i, "is there")

#Create a new dataframe called oov (out of vocabulary) with words from wordsNoMatch as words and probability values 0.5
oovDF = pd.DataFrame(columns=['Word','Positive','Negative'])
for i in range(len(wordsNoMatch)):
    oovDF.loc[i] = [wordsNoMatch[i]] + [0.5] + [0.5]

#Concatenate newDF and oovDF into one single dataframe and set the index as Word column
frames = [newDF, oovDF]
merged = pd.concat(frames, ignore_index=True)
merged.set_index('Word',inplace = True)
print(merged)

#Calculate Bayesian probability for positive and negative reviews
words = ['I','do','not','like','to','fill','in','the','application','form']
probPos = 1
probNeg = 1
for i in words:
    valPos = merged.loc[i,'Positive']
    valNeg = merged.loc[i,'Negative']
    probPos = valPos*probPos
    probNeg = valNeg*probNeg
print("Probability Review is positive: ", probPos)
print("Probability Review is negative: ", probNeg)
print("Positive Review") if probPos > probNeg else print("Negative Review")
```
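For comparison, the Ram/Raj classification above can be reproduced with scikit-learn, which handles smoothing internally. This is only a sketch: `MultinomialNB` with `alpha=1.0` applies Laplace smoothing to both the numerator and the denominator (by the vocabulary size), so its probabilities differ slightly from the add-one-to-the-numerator scheme used by hand above, even though the overall approach is the same.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

Ram = ['I wish you the best', 'I hope to reach home by 6 P M',
       'I wish to go home early', 'I do not want to buy this',
       'I hope it rains today']
Raj = ['I hope to play tennis tonight', 'I hope to win this tournament',
       'I hope to buy this car in the next year',
       'I wish to get a good score this time', 'I wish they would come']

docs = Ram + Raj
labels = ['Ram'] * len(Ram) + ['Raj'] * len(Raj)

# Split on whitespace and keep case, so tokens match the manual word counts
vec = CountVectorizer(token_pattern=r'\S+', lowercase=False)
X = vec.fit_transform(docs)

clf = MultinomialNB(alpha=1.0).fit(X, labels)   # alpha=1.0 is Laplace smoothing
pred = clf.predict(vec.transform(['I wish you would come']))[0]
print(pred)
```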
github_jupyter
```
from scipy import sparse
from sklearn.decomposition import PCA
import numpy as np
import matplotlib.pyplot as plt
import joblib   # sklearn.externals.joblib was deprecated and later removed; import joblib directly
import pandas as pd
import psycopg2
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer, TfidfVectorizer
from sklearn.cluster import KMeans

x = sparse.load_npz('model/tf_idf.npz')

# First we are going to PCA this vector data
reduced_data = PCA(n_components=2).fit_transform(x.todense())

km = KMeans(init='k-means++', n_clusters=15, n_init=10)
km.fit(reduced_data)

# step size of mesh
h = 0.05

x_min, x_max = reduced_data[:, 0].min(), reduced_data[:, 0].max()+0.2
y_min, y_max = reduced_data[:, 1].min(), reduced_data[:, 1].max()+0.2
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))

test_data = np.c_[xx.ravel(), yy.ravel()]
# test_data.shape
Z = km.predict(test_data)
Z = Z.reshape(xx.shape)

plt.figure(1, figsize=(7,5))
plt.clf()
plt.imshow(Z, interpolation='nearest',
           extent=(xx.min(), xx.max(), yy.min(), yy.max()),
           cmap=plt.cm.Paired,
           aspect='auto', origin='lower')

plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=1)
# Plot the centroids as a star
centroids = km.cluster_centers_
plt.scatter(centroids[:, 0], centroids[:, 1],
            marker='*', s=169, linewidths=2,
            color='b', zorder=10)
plt.show()

def create_cluster_plot(centroids, cluster_num):
    '''
    generates a plot that shows where the cluster at cluster_num is
    '''
    plt.figure(1, figsize=(7,5))
    plt.clf()
    plt.imshow(Z, interpolation='nearest',
               extent=(xx.min(), xx.max(), yy.min(), yy.max()),
               cmap=plt.cm.Paired,
               aspect='auto', origin='lower')

    plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=.5)
    # Plot the highlighted centroid as a white star, the rest as blue stars
    plt.scatter(centroids[cluster_num, 0], centroids[cluster_num, 1],
                marker='*', s=169, linewidths=2,
                color='w', zorder=10)
    centroids = np.delete(centroids, cluster_num, 0)
    plt.scatter(centroids[:, 0], centroids[:, 1],
                marker='*', s=169, linewidths=2,
                color='b', zorder=10)
plt.title("Plot for Cluster " + str(cluster_num)) path = str('plots/cluster'+str(cluster_num)+'.png') plt.savefig('plots/cluster'+str(cluster_num)+'.png') import os if not os.path.exists('plots'): os.makedirs('plots') centroids = km.cluster_centers_ for x in range(0, 15): create_cluster_plot(centroids, x) ```
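The choice of `n_clusters=15` above is arbitrary. One common way to pick the number of clusters is the silhouette score: fit K-means for a range of `k` and keep the value with the highest score. The sketch below demonstrates the idea on synthetic blobs, since the tf-idf matrix (`model/tf_idf.npz`) is not bundled here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Three well-separated synthetic blobs as a stand-in for the reduced tf-idf data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2))
               for c in [(0, 0), (3, 0), (0, 3)]])

scores = {}
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)   # in [-1, 1], higher is better

best_k = max(scores, key=scores.get)
print(best_k)
```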
github_jupyter
<a href="https://colab.research.google.com/github/eduardojdiniz/Buzznauts/blob/master/scripts/demo_VideoDataFrame_colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Demo VideoDataFrame Class This demo uses the Algonauts dataset. TABLE OF CODE CONTENTS: 1. Minimal demo without image transforms 2. Minimal demo without sparse temporal sampling for single continuous frame clips, without image transforms 3. Demo with image transforms 4. Demo with image transforms and dataloader For more details about the VideoDataFrame Class, see the [VideoDataset Repo](https://video-dataset-loading-pytorch.readthedocs.io/en/latest/VideoDataset.html) ### Setup ``` # Install Buzznauts and dependencies %%capture !pip install duecredit --quiet !pip install decord --quiet !pip install git+https://github.com/eduardojdiniz/Buzznauts --quiet # Mount Google Drive from google.colab import drive drive.mount("/content/drive") from torchvision import transforms import torch from Buzznauts.data.utils import plot_video_frames from Buzznauts.data.videodataframe import VideoFrameDataset, ImglistToTensor import os import os.path as op from pathlib import Path import Buzznauts as buzz drive_root = '/content/drive/MyDrive/Buzznauts' # Data paths fmri_dir = op.join(drive_root, "data", "fmri") stimuli = op.join(drive_root, "data", "stimuli") videos_dir = op.join(stimuli, "videos") frames_dir = op.join(stimuli, "frames") annotation_file = op.join(frames_dir, 'annotations.txt') from Buzznauts.utils import seed_worker, set_generator ``` ### Demo 1 - Sampled Frames, without Image Transforms ``` dataset = VideoFrameDataset( root_path=frames_dir, annotationfile_path=annotation_file, num_segments=3, frames_per_segment=1, imagefile_template='img_{:05d}.jpg', transform=None, random_shift=True, test_mode=False) sample = dataset[0] frames = sample[0] # list of PIL images label = sample[1] # integer label plot_video_frames(rows=1, cols=3, 
frame_list=frames, plot_width=15., plot_height=3.) ``` ### Demo 2 - Single Continuous Frame Clip instead of Sampled Frames, without Image Transforms ``` dataset = VideoFrameDataset( root_path=frames_dir, annotationfile_path=annotation_file, num_segments=1, frames_per_segment=9, imagefile_template='img_{:05d}.jpg', transform=None, random_shift=True, test_mode=False) sample = dataset[5] frames = sample[0] # list of PIL images label = sample[1] # integer label plot_video_frames(rows=3, cols=3, frame_list=frames, plot_width=10., plot_height=5.) ``` ### Demo 3 - Sampled Frames, with Image Transforms ``` def denormalize(video_tensor): """Undoes mean/standard deviation normalization, zero to one scaling, and channel rearrangement for a batch of images. Parameters ---------- video_tensor : tensor.FloatTensor A (FRAMES x CHANNELS x HEIGHT x WIDTH) tensor Returns ---------- video_array : numpy.ndarray[float] A (FRAMES x CHANNELS x HEIGHT x WIDTH) numpy array of floats """ inverse_normalize = transforms.Normalize( mean=[-0.485 / 0.229, -0.456 / 0.224, -0.406 / 0.225], std=[1 / 0.229, 1 / 0.224, 1 / 0.225]) return (inverse_normalize(video_tensor) * 255.).type(torch.uint8).permute(0, 2, 3, 1).numpy() # As of torchvision 0.8.0, torchvision transforms support batches of images # of size (BATCH x CHANNELS x HEIGHT x WIDTH) and apply deterministic or random # transformations on the batch identically on all images of the batch. Any torchvision # transform for image augmentation can thus also be used for video augmentation. 
preprocess = transforms.Compose([
        ImglistToTensor(),  # list of PIL images to (FRAMES x CHANNELS x HEIGHT x WIDTH) tensor
        transforms.Resize(224),  # image batch, resize smaller edge to 224
        transforms.CenterCrop(224),  # image batch, center crop to square 224x224
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

dataset = VideoFrameDataset(
    root_path=frames_dir,
    annotationfile_path=annotation_file,
    num_segments=5,
    frames_per_segment=1,
    imagefile_template='img_{:05d}.jpg',
    transform=preprocess,
    random_shift=True,
    test_mode=False
)

sample = dataset[2]
frame_tensor = sample[0]  # tensor of shape (NUM_SEGMENTS*FRAMES_PER_SEGMENT) x CHANNELS x HEIGHT x WIDTH
label = sample[1]  # integer label

print('Video Tensor Size:', frame_tensor.size())

frame_array = denormalize(frame_tensor)
plot_video_frames(rows=1, cols=5,
                  frame_list=frame_array,
                  plot_width=15., plot_height=3.)
```

### Demo 4 - Sampled Frames Dataloader, with Image Transforms

```
dataloader = torch.utils.data.DataLoader(
    dataset=dataset,
    batch_size=2,
    shuffle=True,
    num_workers=2,
    pin_memory=True,
    worker_init_fn=seed_worker,
    generator=set_generator())

for epoch in range(10):
    for video_batch, labels in dataloader:
        """
        Insert Training Code Here
        """
        print(labels)
        print("\nVideo Batch Tensor Size:", video_batch.size())
        break
    break
```
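The sparse temporal sampling used throughout these demos (`num_segments` segments of `frames_per_segment` consecutive frames each) can be illustrated with a few lines of NumPy. This is only a sketch of the idea, not the actual `VideoFrameDataset` implementation, whose details (for instance how the random shift is drawn in training mode) may differ.

```python
import numpy as np

def sample_frame_indices(num_frames, num_segments, frames_per_segment, rng=None):
    # Split [0, num_frames) into num_segments equal chunks and take
    # frames_per_segment consecutive frames from each chunk.
    seg_len = num_frames // num_segments
    starts = np.arange(num_segments) * seg_len
    if rng is not None:
        # training mode: random shift of the clip inside each segment
        max_shift = max(seg_len - frames_per_segment + 1, 1)
        starts = starts + rng.integers(0, max_shift, size=num_segments)
    return np.concatenate([np.arange(s, s + frames_per_segment) for s in starts])

# Demo 1 pattern: 3 segments, 1 frame each, deterministic (no shift)
print(sample_frame_indices(90, num_segments=3, frames_per_segment=1))
# Demo 2 pattern: 1 segment, 9 consecutive frames
print(sample_frame_indices(90, num_segments=1, frames_per_segment=9))
```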
github_jupyter
# Nonparametric tests

Test | One-sample | Two-sample | Two-sample (paired samples)
------------- | ------------- | ------------- | -------------
**Sign** | $\times$ | | $\times$
**Rank** | $\times$ | $\times$ | $\times$
**Permutation** | $\times$ | $\times$ | $\times$

## Mirrors as potential environmental enrichment for individually housed laboratory mice (Sherwin, 2004):

16 laboratory mice were housed in two-room cages, one room of which contained a mirror. To establish whether mice have any preference for mirrors, the proportion of time each mouse spent in each of its two rooms was measured.

```
import numpy as np
import pandas as pd
import itertools

from scipy import stats
from statsmodels.stats.descriptivestats import sign_test
from statsmodels.stats.weightstats import zconfint

%pylab inline
```

### Loading the data

```
mouses_data = pd.read_csv('mirror_mouses.txt', header = None)
mouses_data.columns = ['proportion_of_time']

mouses_data

mouses_data.describe()

pylab.hist(mouses_data.proportion_of_time)
pylab.show()
```

## One-sample tests

```
print('95%% confidence interval for the mean time: [%f, %f]' % zconfint(mouses_data))
```

### Sign test

$H_0\colon$ the median proportion of time spent in the room with the mirror is equal to 0.5

$H_1\colon$ the median proportion of time spent in the room with the mirror is not equal to 0.5

```
print("M: %d, p-value: %f" % sign_test(mouses_data, 0.5))
```

### Wilcoxon signed-rank test

```
m0 = 0.5
stats.wilcoxon(mouses_data.proportion_of_time - m0)
```

### Permutation test

$H_0\colon$ the mean is equal to 0.5

$H_1\colon$ the mean is not equal to 0.5

```
def permutation_t_stat_1sample(sample, mean):
    t_stat = sum(map(lambda x: x - mean, sample))
    return t_stat

permutation_t_stat_1sample(mouses_data.proportion_of_time, 0.5)

def permutation_zero_distr_1sample(sample, mean, max_permutations = None):
    # list(...) is needed in Python 3, where map returns an iterator
    centered_sample = list(map(lambda x: x - mean, sample))
    if max_permutations:
        signs_array = set([tuple(x) for x in 2 *
np.random.randint(2, size = (max_permutations, len(sample))) - 1 ])
    else:
        signs_array = itertools.product([-1, 1], repeat = len(sample))
    distr = [sum(centered_sample * np.array(signs)) for signs in signs_array]
    return distr

pylab.hist(permutation_zero_distr_1sample(mouses_data.proportion_of_time, 0.5), bins = 15)
pylab.show()

def permutation_test(sample, mean, max_permutations = None, alternative = 'two-sided'):
    if alternative not in ('two-sided', 'less', 'greater'):
        raise ValueError("alternative not recognized\n"
                         "should be 'two-sided', 'less' or 'greater'")

    t_stat = permutation_t_stat_1sample(sample, mean)

    zero_distr = permutation_zero_distr_1sample(sample, mean, max_permutations)

    if alternative == 'two-sided':
        return sum([1. if abs(x) >= abs(t_stat) else 0. for x in zero_distr]) / len(zero_distr)

    if alternative == 'less':
        return sum([1. if x <= t_stat else 0. for x in zero_distr]) / len(zero_distr)

    if alternative == 'greater':
        return sum([1. if x >= t_stat else 0. for x in zero_distr]) / len(zero_distr)

print("p-value: %f" % permutation_test(mouses_data.proportion_of_time, 0.5))

print("p-value: %f" % permutation_test(mouses_data.proportion_of_time, 0.5, 10000))
```
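Recent versions of SciPy (1.7+) ship a built-in `scipy.stats.permutation_test` that can serve as a cross-check for the manual implementation above: with `permutation_type='samples'` it flips the sign of each centered observation, which is exactly the null distribution constructed by hand here. The sketch below uses synthetic data as a stand-in, since `mirror_mouses.txt` is not bundled with this notebook.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for mouses_data.proportion_of_time (16 mice)
rng = np.random.default_rng(0)
sample = rng.normal(loc=0.45, scale=0.1, size=16)
m0 = 0.5

# Sign-flipping null distribution of the mean of the centered sample
res = stats.permutation_test((sample - m0,), np.mean,
                             permutation_type='samples',
                             n_resamples=10000,
                             alternative='two-sided')
print(res.pvalue)
```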
github_jupyter
<!--BOOK_INFORMATION--> <img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png"> *This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).* *The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!* <!--NAVIGATION--> < [Pivot Tables](03.09-Pivot-Tables.ipynb) | [Contents](Index.ipynb) | [Working with Time Series](03.11-Working-with-Time-Series.ipynb) > # Vectorized String Operations One strength of Python is its relative ease in handling and manipulating string data. Pandas builds on this and provides a comprehensive set of *vectorized string operations* that become an essential piece of the type of munging required when working with (read: cleaning up) real-world data. In this section, we'll walk through some of the Pandas string operations, and then take a look at using them to partially clean up a very messy dataset of recipes collected from the Internet. ## Introducing Pandas String Operations We saw in previous sections how tools like NumPy and Pandas generalize arithmetic operations so that we can easily and quickly perform the same operation on many array elements. For example: ``` import numpy as np x = np.array([2, 3, 5, 7, 11, 13]) x * 2 ``` This *vectorization* of operations simplifies the syntax of operating on arrays of data: we no longer have to worry about the size or shape of the array, but just about what operation we want done. 
For arrays of strings, NumPy does not provide such simple access, and thus you're stuck using a more verbose loop syntax: ``` data = ['peter', 'Paul', 'MARY', 'gUIDO'] [s.capitalize() for s in data] ``` This is perhaps sufficient to work with some data, but it will break if there are any missing values. For example: ``` data = ['peter', 'Paul', None, 'MARY', 'gUIDO'] [s.capitalize() for s in data] ``` Pandas includes features to address both this need for vectorized string operations and for correctly handling missing data via the ``str`` attribute of Pandas Series and Index objects containing strings. So, for example, suppose we create a Pandas Series with this data: ``` import pandas as pd names = pd.Series(data) names ``` We can now call a single method that will capitalize all the entries, while skipping over any missing values: ``` names.str.capitalize() ``` Using tab completion on this ``str`` attribute will list all the vectorized string methods available to Pandas. ## Tables of Pandas String Methods If you have a good understanding of string manipulation in Python, most of Pandas string syntax is intuitive enough that it's probably sufficient to just list a table of available methods; we will start with that here, before diving deeper into a few of the subtleties. The examples in this section use the following series of names: ``` monte = pd.Series(['Graham Chapman', 'John Cleese', 'Terry Gilliam', 'Eric Idle', 'Terry Jones', 'Michael Palin']) ``` ### Methods similar to Python string methods Nearly all Python's built-in string methods are mirrored by a Pandas vectorized string method. 
Here is a list of Pandas ``str`` methods that mirror Python string methods: | | | | | |-------------|------------------|------------------|------------------| |``len()`` | ``lower()`` | ``translate()`` | ``islower()`` | |``ljust()`` | ``upper()`` | ``startswith()`` | ``isupper()`` | |``rjust()`` | ``find()`` | ``endswith()`` | ``isnumeric()`` | |``center()`` | ``rfind()`` | ``isalnum()`` | ``isdecimal()`` | |``zfill()`` | ``index()`` | ``isalpha()`` | ``split()`` | |``strip()`` | ``rindex()`` | ``isdigit()`` | ``rsplit()`` | |``rstrip()`` | ``capitalize()`` | ``isspace()`` | ``partition()`` | |``lstrip()`` | ``swapcase()`` | ``istitle()`` | ``rpartition()`` | Notice that these have various return values. Some, like ``lower()``, return a series of strings: ``` monte.str.lower() ``` But some others return numbers: ``` monte.str.len() ``` Or Boolean values: ``` monte.str.startswith('T') ``` Still others return lists or other compound values for each element: ``` monte.str.split() ``` We'll see further manipulations of this kind of series-of-lists object as we continue our discussion. ### Methods using regular expressions In addition, there are several methods that accept regular expressions to examine the content of each string element, and follow some of the API conventions of Python's built-in ``re`` module: | Method | Description | |--------|-------------| | ``match()`` | Call ``re.match()`` on each element, returning a boolean. 
| | ``extract()`` | Call ``re.match()`` on each element, returning matched groups as strings.| | ``findall()`` | Call ``re.findall()`` on each element | | ``replace()`` | Replace occurrences of pattern with some other string| | ``contains()`` | Call ``re.search()`` on each element, returning a boolean | | ``count()`` | Count occurrences of pattern| | ``split()`` | Equivalent to ``str.split()``, but accepts regexps | | ``rsplit()`` | Equivalent to ``str.rsplit()``, but accepts regexps | With these, you can do a wide range of interesting operations. For example, we can extract the first name from each by asking for a contiguous group of characters at the beginning of each element: ``` monte.str.extract('([A-Za-z]+)', expand=False) ``` Or we can do something more complicated, like finding all names that start and end with a consonant, making use of the start-of-string (``^``) and end-of-string (``$``) regular expression characters: ``` monte.str.findall(r'^[^AEIOU].*[^aeiou]$') ``` The ability to concisely apply regular expressions across ``Series`` or ``Dataframe`` entries opens up many possibilities for analysis and cleaning of data. 
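As a small additional illustration of ``extract()``: with *named* capture groups, the matched groups come back as a ``DataFrame`` whose columns carry the group names. This reuses the ``monte`` series defined above:

```python
import pandas as pd

monte = pd.Series(['Graham Chapman', 'John Cleese', 'Terry Gilliam',
                   'Eric Idle', 'Terry Jones', 'Michael Palin'])

# Named capture groups become labeled columns in the result
parts = monte.str.extract(r'(?P<first>\w+)\s+(?P<last>\w+)')
print(parts)
```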
### Miscellaneous methods Finally, there are some miscellaneous methods that enable other convenient operations: | Method | Description | |--------|-------------| | ``get()`` | Index each element | | ``slice()`` | Slice each element| | ``slice_replace()`` | Replace slice in each element with passed value| | ``cat()`` | Concatenate strings| | ``repeat()`` | Repeat values | | ``normalize()`` | Return Unicode form of string | | ``pad()`` | Add whitespace to left, right, or both sides of strings| | ``wrap()`` | Split long strings into lines with length less than a given width| | ``join()`` | Join strings in each element of the Series with passed separator| | ``get_dummies()`` | extract dummy variables as a dataframe | #### Vectorized item access and slicing The ``get()`` and ``slice()`` operations, in particular, enable vectorized element access from each array. For example, we can get a slice of the first three characters of each array using ``str.slice(0, 3)``. Note that this behavior is also available through Python's normal indexing syntax–for example, ``df.str.slice(0, 3)`` is equivalent to ``df.str[0:3]``: ``` monte.str[0:3] ``` Indexing via ``df.str.get(i)`` and ``df.str[i]`` is likewise similar. These ``get()`` and ``slice()`` methods also let you access elements of arrays returned by ``split()``. For example, to extract the last name of each entry, we can combine ``split()`` and ``get()``: ``` monte.str.split().str.get(-1) ``` #### Indicator variables Another method that requires a bit of extra explanation is the ``get_dummies()`` method. This is useful when your data has a column containing some sort of coded indicator. 
For example, we might have a dataset that contains information in the form of codes, such as A="born in America," B="born in the United Kingdom," C="likes cheese," D="likes spam": ``` full_monte = pd.DataFrame({'name': monte, 'info': ['B|C|D', 'B|D', 'A|C', 'B|D', 'B|C', 'B|C|D']}) full_monte ``` The ``get_dummies()`` routine lets you quickly split-out these indicator variables into a ``DataFrame``: ``` full_monte['info'].str.get_dummies('|') ``` With these operations as building blocks, you can construct an endless range of string processing procedures when cleaning your data. We won't dive further into these methods here, but I encourage you to read through ["Working with Text Data"](http://pandas.pydata.org/pandas-docs/stable/text.html) in the Pandas online documentation, or to refer to the resources listed in [Further Resources](03.13-Further-Resources.ipynb). ## Example: Recipe Database These vectorized string operations become most useful in the process of cleaning up messy, real-world data. Here I'll walk through an example of that, using an open recipe database compiled from various sources on the Web. Our goal will be to parse the recipe data into ingredient lists, so we can quickly find a recipe based on some ingredients we have on hand. The scripts used to compile this can be found at https://github.com/fictivekin/openrecipes, and the link to the current version of the database is found there as well. As of Spring 2016, this database is about 30 MB, and can be downloaded and unzipped with these commands: ``` # !curl -O http://openrecipes.s3.amazonaws.com/recipeitems-latest.json.gz # !gunzip recipeitems-latest.json.gz ``` The database is in JSON format, so we will try ``pd.read_json`` to read it: ``` try: recipes = pd.read_json('recipeitems-latest.json') except ValueError as e: print("ValueError:", e) ``` Oops! We get a ``ValueError`` mentioning that there is "trailing data." 
Searching for the text of this error on the Internet, it seems that it's due to using a file in which *each line* is itself a valid JSON, but the full file is not. Let's check if this interpretation is true:

```
with open('recipeitems-latest.json') as f:
    line = f.readline()
pd.read_json(line).shape
```

Yes, apparently each line is a valid JSON, so we'll need to string them together. One way we can do this is to actually construct a string representation containing all these JSON entries, and then load the whole thing with ``pd.read_json``:

```
# read the entire file into a Python array
with open('recipeitems-latest.json', 'r') as f:
    # Extract each line
    data = (line.strip() for line in f)
    # Reformat so each line is the element of a list
    data_json = "[{0}]".format(','.join(data))
# read the result as a JSON
recipes = pd.read_json(data_json)

recipes.shape
```

We see there are nearly 200,000 recipes, and 17 columns. Let's take a look at one row to see what we have:

```
recipes.iloc[0]
```

There is a lot of information there, but much of it is in a very messy form, as is typical of data scraped from the Web. In particular, the ingredient list is in string format; we're going to have to carefully extract the information we're interested in. Let's start by taking a closer look at the ingredients:

```
recipes.ingredients.str.len().describe()
```

The ingredient lists average 250 characters long, with a minimum of 0 and a maximum of nearly 10,000 characters! Just out of curiosity, let's see which recipe has the longest ingredient list:

```
recipes.name[np.argmax(recipes.ingredients.str.len())]
```

That certainly looks like an involved recipe.
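As an aside, the manual join above works on any pandas version, but pandas (0.19 and later) can also parse such line-delimited JSON directly via the ``lines=True`` keyword. A minimal sketch on toy data (the three records here are invented for illustration, mimicking one valid JSON object per line):

```python
import io

import pandas as pd

# Three made-up line-delimited JSON records, one valid JSON object per line,
# mimicking the structure of recipeitems-latest.json.
raw = '\n'.join([
    '{"name": "Pancakes", "ingredients": "flour, eggs, milk"}',
    '{"name": "Omelette", "ingredients": "eggs, salt, pepper"}',
    '{"name": "Toast", "ingredients": "bread, butter"}',
])

# lines=True tells pandas to treat each line as a separate record,
# avoiding the manual join-into-a-list step shown above.
recipes_demo = pd.read_json(io.StringIO(raw), lines=True)
print(recipes_demo.shape)  # (3, 2)
```

On the real file, ``pd.read_json('recipeitems-latest.json', lines=True)`` would replace the join-based cell above.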
We can do other aggregate explorations; for example, let's see how many of the recipes are for breakfast food:

```
recipes.description.str.contains('[Bb]reakfast').sum()
```

Or how many of the recipes list cinnamon as an ingredient:

```
recipes.ingredients.str.contains('[Cc]innamon').sum()
```

We could even look to see whether any recipes misspell the ingredient as "cinamon":

```
recipes.ingredients.str.contains('[Cc]inamon').sum()
```

This is the type of essential data exploration that is possible with Pandas string tools. It is data munging like this that Python really excels at.

### A simple recipe recommender

Let's go a bit further, and start working on a simple recipe recommendation system: given a list of ingredients, find a recipe that uses all those ingredients. While conceptually straightforward, the task is complicated by the heterogeneity of the data: there is no easy operation, for example, to extract a clean list of ingredients from each row. So we will cheat a bit: we'll start with a list of common ingredients, and simply search to see whether they are in each recipe's ingredient list. For simplicity, let's just stick with herbs and spices for the time being:

```
spice_list = ['salt', 'pepper', 'oregano', 'sage', 'parsley',
              'rosemary', 'tarragon', 'thyme', 'paprika', 'cumin']
```

We can then build a Boolean ``DataFrame`` consisting of True and False values, indicating whether this ingredient appears in the list. Note that ``flags=re.IGNORECASE`` must be passed by keyword: passing ``re.IGNORECASE`` positionally would instead set the (truthy) ``case`` argument and silently do a case-sensitive match:

```
import re
spice_df = pd.DataFrame(dict((spice, recipes.ingredients.str.contains(spice, flags=re.IGNORECASE))
                             for spice in spice_list))
spice_df.head()
```

Now, as an example, let's say we'd like to find a recipe that uses parsley, paprika, and tarragon.
We can compute this very quickly using the ``query()`` method of ``DataFrame``s, discussed in [High-Performance Pandas: ``eval()`` and ``query()``](03.12-Performance-Eval-and-Query.ipynb):

```
selection = spice_df.query('parsley & paprika & tarragon')
len(selection)
```

We find only 10 recipes with this combination; let's use the index returned by this selection to discover the names of the recipes that have this combination:

```
recipes.name[selection.index]
```

Now that we have narrowed down our recipe selection by a factor of almost 20,000, we are in a position to make a more informed decision about what we'd like to cook for dinner.

### Going further with recipes

Hopefully this example has given you a bit of a flavor (ba-dum!) for the types of data cleaning operations that are efficiently enabled by Pandas string methods. Of course, building a very robust recipe recommendation system would require a *lot* more work! Extracting full ingredient lists from each recipe would be an important piece of the task; unfortunately, the wide variety of formats used makes this a relatively time-consuming process. This points to the truism that in data science, cleaning and munging of real-world data often comprises the majority of the work, and Pandas provides the tools that can help you do this efficiently.

<!--NAVIGATION-->
< [Pivot Tables](03.09-Pivot-Tables.ipynb) | [Contents](Index.ipynb) | [Working with Time Series](03.11-Working-with-Time-Series.ipynb) >
# Using Python to Access NEXRAD Level 2 Data from Unidata THREDDS Server

This is a modified version of Ryan May's notebook here: http://nbviewer.jupyter.org/gist/dopplershift/356f2e14832e9b676207

The TDS provides a mechanism to query for available data files, and provides access to the data as native volume files through OPeNDAP and its own CDMRemote protocol. Since we're using Python, we can take advantage of Unidata's Siphon package, which provides an easy API for talking to THREDDS servers.

Bookmark these resources for when you want to use Siphon later!
+ [latest Siphon documentation](http://siphon.readthedocs.org/en/latest/)
+ [Siphon github repo](https://github.com/Unidata/siphon)
+ [TDS documentation](http://www.unidata.ucar.edu/software/thredds/current/tds/TDS.html)

## Downloading the single latest volume

Just a bit of initial set-up to use inline figures and quiet some warnings.

```
import matplotlib
import warnings
warnings.filterwarnings("ignore", category=matplotlib.cbook.MatplotlibDeprecationWarning)
%matplotlib inline
```

First we'll create an instance of RadarServer to point to the appropriate radar server access URL.

```
# The archive of data on S3 URL did not work for me, despite .edu domain
#url = 'http://thredds-aws.unidata.ucar.edu/thredds/radarServer/nexrad/level2/S3/'

# Trying motherlode URL
url = 'http://thredds.ucar.edu/thredds/radarServer/nexrad/level2/IDD/'
from siphon.radarserver import RadarServer
rs = RadarServer(url)
```

Next, we'll create a new query object to help request the data. Using the chaining methods, let's ask for the latest data at the radar KLVX (Louisville, KY). We see that when the query is represented as a string, it shows the encoded URL.
```
from datetime import datetime, timedelta
query = rs.query()
query.stations('KLVX').time(datetime.utcnow())
```

We can use the RadarServer instance to check our query, to make sure we have required parameters and that we have chosen valid station(s) and variable(s).

```
rs.validate_query(query)
```

Make the request, which returns an instance of TDSCatalog; this handles parsing the returned XML information.

```
catalog = rs.get_catalog(query)
```

We can look at the datasets on the catalog to see what data we found by the query. We find one volume in the return, since we asked for the volume nearest to a single time.

```
catalog.datasets
```

We can pull that dataset out of the dictionary and look at the available access URLs. We see URLs for OPeNDAP, CDMRemote, and HTTPServer (direct download).

```
ds = list(catalog.datasets.values())[0]
ds.access_urls
```

We'll use the CDMRemote reader in Siphon and pass it the appropriate access URL.

```
from siphon.cdmr import Dataset
data = Dataset(ds.access_urls['CdmRemote'])
```

We define some helper functions to make working with the data easier. One takes the raw data and converts it to floating point values with the missing data points appropriately marked. The other helps with converting the polar coordinates (azimuth and range) to Cartesian (x and y).

```
import numpy as np

def raw_to_masked_float(var, data):
    # Values come back signed. If the _Unsigned attribute is set, we need to convert
    # from the signed range [-128, 127] to the unsigned range [0, 255].
    if var._Unsigned:
        data = data & 255

    # Mask missing points
    data = np.ma.array(data, mask=data==0)

    # Convert to float using the scale and offset
    return data * var.scale_factor + var.add_offset

def polar_to_cartesian(az, rng):
    az_rad = np.deg2rad(az)[:, None]
    x = rng * np.sin(az_rad)
    y = rng * np.cos(az_rad)
    return x, y
```

The CDMRemote reader provides an interface that is almost identical to the usual python NetCDF interface.
We pull out the variables we need for azimuth and range, as well as the data itself.

```
sweep = 0
ref_var = data.variables['Reflectivity_HI']
ref_data = ref_var[sweep]
rng = data.variables['distanceR_HI'][:]
az = data.variables['azimuthR_HI'][sweep]
```

Then convert the raw data to floating point values and the polar coordinates to Cartesian.

```
ref = raw_to_masked_float(ref_var, ref_data)
x, y = polar_to_cartesian(az, rng)
```

MetPy is a Python package for meteorology (Documentation: http://metpy.readthedocs.org and GitHub: http://github.com/MetPy/MetPy). We import MetPy and use it to get the colortable and value mapping information for the NWS Reflectivity data.

```
from metpy.plots import ctables  # For NWS colortable
ref_norm, ref_cmap = ctables.registry.get_with_steps('NWSReflectivity', 5, 5)
```

Finally, we plot them up using matplotlib and cartopy. We create a helper function for making a map to keep things simpler later.

```
import matplotlib.pyplot as plt
import cartopy

def new_map(fig, lon, lat):
    # Create projection centered on the radar. This allows us to use x
    # and y relative to the radar.
    proj = cartopy.crs.LambertConformal(central_longitude=lon, central_latitude=lat)

    # New axes with the specified projection
    ax = fig.add_subplot(1, 1, 1, projection=proj)

    # Add coastlines
    ax.coastlines('50m', 'black', linewidth=2, zorder=2)

    # Grab state borders
    state_borders = cartopy.feature.NaturalEarthFeature(
        category='cultural', name='admin_1_states_provinces_lines',
        scale='50m', facecolor='none')
    ax.add_feature(state_borders, edgecolor='black', linewidth=1, zorder=3)

    return ax
```

## Download a collection of historical data

This time we'll make a query based on a longitude, latitude point and using a time range.
```
# Our specified time
#dt = datetime(2012, 10, 29, 15)  # Superstorm Sandy
#dt = datetime(2016, 6, 18, 1)
dt = datetime(2016, 6, 8, 18)
query = rs.query()
query.lonlat_point(-73.687, 41.175).time_range(dt, dt + timedelta(hours=1))
```

The specified longitude, latitude are in NY and the TDS helpfully finds the closest station to that point. We can see that for this time range we obtained multiple datasets.

```
cat = rs.get_catalog(query)
cat.datasets
```

Grab the first dataset so that we can get the longitude and latitude of the station and make a map for plotting. We'll go ahead and specify some longitude and latitude bounds for the map.

```
ds = list(cat.datasets.values())[0]
data = Dataset(ds.access_urls['CdmRemote'])

# Pull out the data of interest
sweep = 0
rng = data.variables['distanceR_HI'][:]
az = data.variables['azimuthR_HI'][sweep]
ref_var = data.variables['Reflectivity_HI']

# Convert data to float and coordinates to Cartesian
ref = raw_to_masked_float(ref_var, ref_var[sweep])
x, y = polar_to_cartesian(az, rng)
```

Use the function to make a new map and plot a colormapped view of the data.

```
fig = plt.figure(figsize=(10, 10))
ax = new_map(fig, data.StationLongitude, data.StationLatitude)

# Set limits in lat/lon space
ax.set_extent([-77, -70, 38, 43])

# Add ocean and land background
ocean = cartopy.feature.NaturalEarthFeature('physical', 'ocean', scale='50m',
                                            edgecolor='face',
                                            facecolor=cartopy.feature.COLORS['water'])
land = cartopy.feature.NaturalEarthFeature('physical', 'land', scale='50m',
                                           edgecolor='face',
                                           facecolor=cartopy.feature.COLORS['land'])
ax.add_feature(ocean, zorder=-1)
ax.add_feature(land, zorder=-1)
ax.pcolormesh(x, y, ref, cmap=ref_cmap, norm=ref_norm, zorder=0);
```
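The polar-to-Cartesian conversion used by both plots above is easy to sanity-check in isolation. A minimal numpy sketch reproducing the ``polar_to_cartesian`` helper on two known azimuths (the input values are invented for illustration):

```python
import numpy as np

def polar_to_cartesian(az, rng):
    # Same conversion as the helper above: azimuth is measured in degrees
    # clockwise from north, so sin gives x (east) and cos gives y (north).
    az_rad = np.deg2rad(az)[:, None]
    x = rng * np.sin(az_rad)
    y = rng * np.cos(az_rad)
    return x, y

az = np.array([0.0, 90.0])     # a ray pointing north, then one pointing east
rng = np.array([0.0, 1000.0])  # two range gates (meters)
x, y = polar_to_cartesian(az, rng)

# The 1000 m gate lands due north for az=0 and due east for az=90.
print(np.round(x, 6))
print(np.round(y, 6))
```

Broadcasting `(2, 1)` azimuths against `(2,)` ranges yields `(2, 2)` coordinate grids, one row per ray.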
# Intermediate - Statistical Tests & Regression

### Authors : Juan Solorio & Andres De La Fuente

----

# Overview

This is the intermediate level notebook for the Data Science (DS) and Machine Learning (ML) FredHutch.io tutorial, where we will work from beginning to end through different aspects and techniques in DS for Statistical Testing in Research and Data Analysis.

In this notebook we will work through the process of data analysis for the [gene count TCGA Data Set](https://www.dropbox.com/sh/jke9h4km90ner9l/AAD1UyucvlXIFbKTjl-D15U6a?dl=0). **We will be using some findings from the Beginner Tutorial Notebook.**

This is the intermediate notebook and we will be focusing specifically on statistical testing and regression models in **python**. We will keep working with *python libraries* introduced in the Beginner Tutorial and introduce some new libraries with special purposes in statistics.

> **Libraries Used in This Tutorial**
* Data Manipulation and Processing
    - [pandas](https://pandas.pydata.org/)
    - [numpy](https://numpy.org/)
* Data Visualization
    - [Matplotlib](https://matplotlib.org/)
    - [Seaborn](https://seaborn.pydata.org/)
    - [Altair](https://altair-viz.github.io/)
* Statistics
    - [SciPy](https://www.scipy.org/)
    - [Statsmodels](https://www.statsmodels.org/stable/index.html)

## Questions

In this Notebook, we are focused on figuring out the statistically significant differences in genes between cancer groups. We are also concerned with determining the statistical power of our experiment given the genes data inspired by the PANCAN dataset.

# Table of Contents

[1. Statistical Background](#1.-Statistical-Background)
* [1.1 Hypothesis Testing](#1.1-Hypothesis-Testing)
    - [1.1.2 Steps in Statistical Testing](#1.1.2-Steps-in-Statistical-Testing)
        - [Step 1: Hypotheses](#Step-1:-Hypotheses)
        - [Step 2: Significance](#Step-2:-Significance)
        - [Step 3: Test Statistic](#Step-3:-Test-Statistic)
    - [1.1.3 P Value](#1.1.3-P-Value)
    - [1.1.4 Statistical Significance](#1.1.4-Statistical-Significance)
* [1.2 Statistical Power and Sample Size Calculations](#1.2-Statistical-Power-and-Sample-Size-Calculations)
    - [1.2.1 One-Sample Population Calculations](#1.2.1-One-Sample-Population-Calculations)
    - [1.2.2 Comparing Two Samples Calculations](#1.2.2-Comparing-Two-Samples-Calculations)

[2. Setup](#2.-Setup)
* [2.1 Importing Python Libraries](#2.1-Importing-Python-Libraries)
    - [2.1.1 Imports](#2.1.1-Imports)
    - [2.1.2 Load TCGA Data](#2.1.2-Load-TCGA-Data)

[3. Hypothesis Testing](#3.-Hypothesis-Testing)
* [3.1 T test on One Sample](#3.1-T-test-on-One-Sample)

[4. Power and Sample Size](#4.-Power-and-Sample-Size)
* [4.1 One Sample Power](#4.1-One-Sample-Power)
* [4.2 Two Sample Power](#4.2-Two-Sample-Power)

----

# 1. Statistical Background

## 1.1 Hypothesis Testing

Hypothesis tests are used to answer questions about a population. Given the gene abundance data we've been looking at, you might ask a question like:

> Do the average count levels of 'gene X' differ between people with 'cancer Y' versus 'cancer Z'?

Without any further knowledge, the best you could do is look at graphs or summary statistics from your samples. The means of your samples, however, are not good enough - you want to base your decision on the underlying populations. How can you know anything about the whole population? This is where statistical hypothesis testing comes in!

### 1.1.1 What does testing do?

Basic hypothesis testing essentially asks: what is the probability that our sample came from a population with distribution A versus distribution B?
For example, how likely is it that our sample came from a distribution with a mean count = X for a gene expression? Hypothesis testing is a standardized and quantitative framework from which to answer questions like this. Otherwise, you'd be left eyeing out graphs and speculating.

### 1.1.2 Steps in Statistical Testing

#### Step 1: Hypotheses

As we stated before, the first thing you need to perform a statistical test is a hypothesis; you need to know what you want to find out, of course. We always decide on a **Null Hypothesis** $H_{0}$, which is the base case. Then we decide on an **Alternative Hypothesis** $H_{1}$, which contradicts $H_{0}$. In our imagined scenario above, our hypotheses would be:

> The mean count is $X$; $H_{0}$: $\mu = X$
>
> The mean count is not $X$; $H_{1}$: $\mu \neq X$

**IMPORTANT NOTE**

_Using hypothesis testing, there are only two ways to interpret the outcome: you reject the Null Hypothesis, or you fail to reject it. The Null cannot be proven to be true._

#### Step 2: Significance

Based on the above note, there are essentially four possible outcomes and two ways that our hypothesis test outcome could turn out to be wrong:

True Statement | Reject $H_{0}$ | Do Not Reject $H_{0}$
--------------|-------------|--------------------
$H_{0}$ is True | ***Type I Error*** | *Correct Decision*
$H_{0}$ is False | *Correct Decision* | ***Type II Error***

> **Type I Error:** Rejecting the Null when it is actually true
>
> **Type II Error:** 'Accepting' the Null when it is actually false

The probability of having a Type I error is called the **significance level** of a hypothesis test: $\alpha$.

The probability of having a Type II error is denoted by $\beta$. The **power** of a test is 1 - $\beta$; in other words, the probability of correctly rejecting the Null when it is actually false.

Ultimately, we want to perform a test that __minimizes $\alpha$ and maximizes power.__ The catch is that *$\alpha$ and $\beta$ are inversely related*.
In designing our experiment, the way around this is to specify the $\alpha$ beforehand, and then try to maximize power. A commonly used value is $\alpha = 0.05$, but depending on the field other values such as 0.025 or 0.01 are also common. We then try to achieve a high value for the power, e.g., 0.8 or 0.9, or higher, depending on the context. Achieving high power typically requires selecting a sufficiently large sample size.

#### Step 3: Test Statistic

A test statistic is a standardized value that is calculated in place of just using the sample mean. This standardizes the testing process and is mathematically more convenient. One such test statistic is the **t** value:

$$ t = \frac{\overline{x} - \mu_{0}}{s / \sqrt{n}} $$

> $\overline x$ - sample mean
$\mu_0$ - population mean (Null Hypothesis)
$s$ - sample standard deviation
$n$ - sample size

t follows something called a t distribution. Based on the $\alpha$ decided on for the test, and the sample size, the t distribution is used to calculate a **critical value**, which t is compared to. This comparison determines the outcome of a test. For example, if $\alpha = 0.05$ and **t > critical value**, we would reject $H_{0}$ at significance of 0.05.

#### [Optional] Step 4: Confidence Intervals

A confidence interval is an interval calculated from the data using a rule which ensures that the interval has a certain pre-specified probability (often 95%, i.e., _1-$\alpha$_) of containing the true value of the target parameter. The formula to calculate the _Confidence Interval_ is then:

$$ C.I. = \overline X \pm (Critical \ Value) \cdot \frac{\sigma}{\sqrt{n}} $$

> $\overline X$ - sample mean
$Critical \ Value$ - either the Z or t statistic at the desired $\alpha$
$\sigma$ - standard deviation
$n$ - sample size

### 1.1.3 P Value

The **p value** is another way of deciding on the overall significance of your test outcome. It represents the probability of getting a result more extreme than what you got, given the Null Hypothesis.
Intuitively, if **p** is small, it indicates our test results are statistically significant. A commonly used threshold is 0.05: if p < 0.05 you can say your test result is statistically significant.

### 1.1.4 Statistical Significance

Given the above, there are 2 ways to determine the statistical significance of a test:

1. Calculate the test statistic (t) and compare it to the critical value given a significance level ($\alpha = 0.05$). In the case of $H_{0}: mean = 0$ and $H_{1}: mean > 0$, if t > critical value, we reject $H_{0}$ and the result is *statistically significant*. Otherwise, we fail to reject, and the result is *not statistically significant*.
2. Calculate the p value; if p < 0.05, we reject $H_{0}$ and the result is *statistically significant*. Otherwise, we fail to reject, and the result is *not statistically significant*.

These methods are completely equivalent. As you will see below, modern statistical testing packages will offer both versions of the result.

## 1.2 Statistical Power and Sample Size Calculations

In designing our experiments, one of the most important aspects is the choice of a proper sample size: too small and we won't yield useful information, too large and we waste time and resources. To find an answer to our main questions in this notebook, and in any research in general, we must decide which particular alternative *Hypothesis*, *$H_{1}$*, is important to be able to detect with high ***power***. In statistics, we refer to the **power** of an experiment as the control over the *Type II* error rate:

> **Power = *P* (Reject *$H_{0}$* given that the alternative *$H_{1}$* holds)**

Also written as **Power = 1 - *P* (Type II error) = 1 - $\beta$**

Power calculations are an important aspect of experimental design, as they can tell us whether our study is likely to yield statistically significant results, or even whether results from previous studies may be incorrect.
We can perform the calculations in a variety of ways:
* formulas
* simulations
* on-line calculators, *like this [one](https://www.stat.ubc.ca/~rollin/stats/ssize/n2.html)*
* commercial software

In this notebook we'll work with both simulations and formulas. These formulas are based on familiar assumptions such as:
- independence in our sample data
- normality of errors
- constant variance

so they are often thought of as an initial rough calculation of power. The formulas we will be using are derived from the general formula for the Z test statistic

$$ Z=\frac{\overline X - \mu_{0}}{\frac{\sigma}{\sqrt[]{n}}} $$

> $\overline X$ - sample mean
$\mu_0$ - population mean (Null Hypothesis)
$\sigma$ - standard deviation
$n$ - sample size

We algebraically manipulate the formula and allow for $Z$ to be dependent on the desired significance level $\alpha$ for the quantile values in the Normal Distribution, $N(0,1)$.

The power of the test for a mean is _increased_ by:
1. Increasing the difference between the means under the null and alternative hypotheses ($\mu_1 - \mu_0$).
2. Increasing the significance level ($\alpha$).
3. Decreasing the standard deviation ($\sigma$).
4. Increasing the sample size ($n$).

### 1.2.1 One-Sample Population Calculations

> $$ \hbox{Power} = P\left( N(0,1) < -Z_{1 - \alpha / 2} + \frac{ |\mu_1 - \mu_0|}{ \sigma / \sqrt n } \right) = \Phi(-Z_{1 - \alpha / 2} + \frac{ |\mu_1 - \mu_0|}{ \sigma / \sqrt n } ), $$

where $\Phi$ is the cdf of the N(0,1) distribution. The sample size that is required in order to have power equal to $1-\beta$ is:

> $$ n = \frac{ \sigma^2 (Z_{1 - \beta} + Z_{1 - \alpha / 2})^2}{ (\mu_0 - \mu_1)^2 }. $$

### 1.2.2 Comparing Two Samples Calculations

When comparing 2 samples, we consider the test of $H_0:\mu_A=\mu_B$ versus $H_1:\mu_A\neq\mu_B$, where $\mu_A$ and $\mu_B$ are means of two populations.
Assuming known population variances $\sigma_A^2$ and $\sigma_B^2$ and sample sizes $n_A$ and $n_B$ per group, the test statistic is

> $$ Z=\frac{|\bar X_A - \bar X_B|}{\sqrt{\sigma_A^2/n_A+\sigma_B^2/n_B} }. $$

As a result, our power and sample size formulas become

> $$ \hbox{Power} = \Phi ( -Z_{1 - \alpha / 2} + \frac{|\Delta|}{ \sqrt{\sigma_A^2/n_A+\sigma_B^2/n_B}} ), $$

$$ n = \frac{ (\sigma_A^2+\sigma_B^2) (Z_{1 - \beta} + Z_{1 - \alpha/2})^2}{ \Delta^2 }, $$

where $|\Delta|=|\mu_A - \mu_B|$.

# 2. Setup

We will be moving from the PANCAN dataset to the genes count data (datasets available [here](https://www.dropbox.com/sh/jke9h4km90ner9l/AAD1UyucvlXIFbKTjl-D15U6a?dl=0)) from the same five cancer types (BRCA, KIRC, COAD, LUAD, PRAD) from the TCGA projects available from the [National Cancer Institute's Genomic Data Commons](https://gdc.cancer.gov/).

We will be using the genes and metadata datasets for our statistical experiments, since the metadata provides some interesting subgroups of the cancers for which we can create tests. We will also use some genomic background information on cancer from COSMIC: [Catalog Of Somatic Mutations In Cancer](https://cancer.sanger.ac.uk/).

A caveat about this genes dataset: while inspired by the PANCAN dataset, it is almost double the size, so we will likely have to manage its memory usage.

## 2.1 Importing Python Libraries

We will be using all the libraries from the previous tutorial notebook, but we will also introduce three libraries for statistics and data purposes:

> SciPy
statsmodels
sklearn

### 2.1.1 Imports

As always, we first import all the packages we want to use before we do anything else.
_Note:_
_You will notice some extra code in the cell below besides the library imports; it is just formatting for displaying outputs in this tutorial and has no effect on our statistical analysis (it could be omitted)._

```
# Data Manipulation
import pandas as pd
import numpy as np

# Statistics
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols
from sklearn.model_selection import train_test_split

# visualization
import altair as alt
import matplotlib.pyplot as plt
import seaborn as sns

# setting up the plot style
plt.style.use('ggplot')
%matplotlib inline

%%HTML
<style>td {font-size: 15px}</style>
```

Now, for our statistical tests we will primarily be taking advantage of the functions in the *SciPy* and *statsmodels* libraries, as these have prepackaged functions specialized in statistics. One thing to notice is that we have two import statements for the *statsmodels* API: **statsmodels.api** imports the functions we will be using, while [**statsmodels.formula.api**](https://www.statsmodels.org/devel/example_formulas.html) allows us to work with _"R-style"_ formulas within python.

### 2.1.2 Load TCGA Data

As we know from the Novice Tutorial, we have a very large dataset that could lead to large memory usage. We will use our *metadata.csv* file to create a smaller subset that we hope will contain a representation of the larger dataset.

We could go ahead and subset the genes dataset the same way we did in the Novice Notebook; however, we would risk not obtaining a sample representative of the dataset. The great thing about our large dataset is that it gives us a pseudo way of replicating how we might go about performing a real statistical experiment.

We will follow the same process as we did in the Novice Notebook, so we will just put it into a python function so we can reuse it later if needed.
```
def create_genes_subset(split_size=.1):
    """
    Creates a smaller dataframe from the large 'genes.csv' file based on a split from the metadata file.
    Returns a dataframe that has been transformed by log2, along with the metadata splits needed for the
    remaining samples from the genes.csv file to remain independent.
    """
    metadata = pd.read_csv('../1-Data/metadata.csv')
    big_split, small_split = train_test_split(metadata, test_size=split_size, random_state=4)
    skiplines_small = np.sort(big_split.index) + 1
    skiplines_big = np.sort(small_split.index) + 1
    genes_small = pd.read_csv('../1-Data/genes.csv', skiprows=skiplines_small)
    genes_nonAllZero = genes_small.loc[:,~genes_small.isin([0]).all(axis=0)]
    genes_log2_trans = np.log2(genes_nonAllZero.iloc[:,1:] + 1)
    genes_log2_trans['barcode'] = genes_small['barcode']
    genes_merged = pd.merge(left=small_split, right=genes_log2_trans, how='left',
                            left_on='barcode', right_on='barcode')

    # Releasing memory by deleting dataframes
    del genes_small, genes_nonAllZero, genes_log2_trans, skiplines_small, skiplines_big

    return genes_merged, big_split, small_split
```

We now have the function to return the smaller subset from the larger *genes.csv*. This is very useful, as we can create a pseudo version of what it would be like to design an experiment. Given that we are trying to figure out sample size and power, we often use a value for the standard deviation from a previous study, or a study done in a different but comparable population. This way we can approximate that type of scenario.

```
# load the data from the function; we need to set up the correct variable names since 3 things are returned
genes_small_log2, big_meta, small_meta = create_genes_subset()

# check out dataset
genes_small_log2.head(3)
```

Great!!! Now we have both our gene count data and some demographics attached to them.
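The ``skiprows`` trick inside ``create_genes_subset`` (the split's 0-based row positions, offset by 1 to account for the header line) can be checked on a toy CSV; the file contents and column names below are invented for illustration:

```python
import io

import numpy as np
import pandas as pd

# Toy stand-in for genes.csv: a header row plus 5 data rows.
csv = "barcode,g1\nA,1\nB,2\nC,3\nD,4\nE,5\n"

# Suppose the split kept the rows at 0-based positions 1 and 3;
# we skip the complement, offset by 1 for the header line.
keep = np.array([1, 3])
skip = np.setdiff1d(np.arange(5), keep) + 1

small = pd.read_csv(io.StringIO(csv), skiprows=skip)
print(small['barcode'].tolist())  # ['B', 'D']
```

Because ``skiprows`` counts physical file lines (with line 0 being the header), the +1 offset keeps the header while dropping the unwanted data rows.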
Let's do some extra data cleaning and prepping before we jump into hypothesis testing; this step is also known as [_feature engineering_](https://en.wikipedia.org/wiki/Feature_engineering) in Data Science.

Notice that `age_at_diagnosis` contains very large numbers; this is because age is recorded in days, so let's convert that column into years instead.

```
# making a copy of the dataframe in case something goes wrong
data = genes_small_log2.copy()

# Make an age_at_diagnosis_years column by dividing by 365.25 and rounding with np.rint()
# there are about 365.25 days in an Earth year according to NASA
data['age_at_diagnosis_years'] = np.rint(data['age_at_diagnosis'] / 365.25)

# check output
data.head(3)
```

Great, now we have actual years for the age at diagnosis of the patients. With greater domain knowledge in the field, we could do further _feature engineering_ as we see fit, but for now this will suffice for our tutorial purposes.

Let us visualize the distribution of the `age_at_diagnosis_years` values by cancer type; it will come in handy later in the tutorial.

```
# setting figure parameters
fig_dims = (15, 6)
fig, ax = plt.subplots(figsize=fig_dims)

sns.kdeplot(data=data, x='age_at_diagnosis_years', hue='cancer_type', fill=True, alpha=.2, ax=ax)
plt.title('Distribution of Age at Diagnosis of Patients')
plt.xlabel('Age at diagnosis (years)')
plt.savefig('../3-Outputs/age_distribution.png')
plt.show()
```

# 3. Hypothesis Testing

We'll now go through some examples of how to perform 3 types of hypothesis tests: _One Sample T test or Z test, Two Sample T test, and Analysis Of Variance._

## 3.1 One Sample

Say you want to answer the question

> Question 1:
>
> Is the mean _age (in years) at diagnosis_ for 'KIRC' (kidney) cancer equal to 55?

### 3.1.1 One Sample T test

To tackle this, let's do a two-sided t test on the mean.
A two-sided test simply means the alternative hypothesis does not care which way the mean differs from the selected value. > $H_{0}: mean = 55$ > > $H_{1}: mean \neq 55$ > > $\alpha = 0.05$ We will compute the _t-value_ for the test by both first calculating it using the formula and by then using an existing function from the SciPy stats package. This way we will be able to compare and check our answers and then we'll be able to use one or the other depending on what we prefer. Let's do a test with the hypotheses above for all samples of 'KIRC' cancer type. ``` # get subset for only the KIRC values kirc_data = data.loc[data.cancer_type=='KIRC'] mu_k_age = np.mean(kirc_data['age_at_diagnosis_years']) # mean age of the KIRC data sd_k_age = np.std(kirc_data['age_at_diagnosis_years'], ddof=1) # standard Dev of age for KIRC data, needs ddof = 1 for sample n_k = len(kirc_data) # number of samples in KIRC data t = (mu_k_age - 55)/(sd_k_age/np.sqrt(n_k)) # calculating the t-statitic value from formula pval = 2*(1 - stats.t.cdf(np.abs(t), df=n_k-1)) # two-sided pvalue print('t-statistic: {0:5f} \np-value : {1:5f}'.format(t, pval)) # Call the 1 sample t test function from SciPy stats.ttest_1samp(kirc_data.age_at_diagnosis_years, popmean=55) ``` We see that we get the same values from both methods for both the __`test statistic = 4.422`__ and the __`p-value = 0.0004`__. At this point we might feel compel to look at the _p-value_ and interpret the value of our test, but we must hold on as we must always check the assumptions for our test. We also want to check for the __Type I__ error and __power__ or __Type II__ error for this type of test. For now we'll address the _Type I_ error and leave the _Type II_ for a later section in this tutorial. For the _One Sample T test_ to be valid, either of the following must hold: 1. The population distribution is normal. - _We can do a rough check visualy by either plotting the distibution or using a Q-Q plot_ 2. 
The sample size is sufficiently large.
    - _By tradition, sample sizes of n > 30 are considered large enough_

For this test, we know that the sample size is 66, so that condition is already satisfied, but let's also check the plots to make sure we can see an approximately _normal distribution_ in both the histogram and the Q-Q plot.

```
# setting figure parameters for size
fig_dims = (15, 6)
fig, ax = plt.subplots(1, 2, figsize=fig_dims)

# Distribution plot from seaborn
sns.distplot(kirc_data['age_at_diagnosis_years'], kde=True, ax=ax[0])
ax[0].set_title('KIRC Distribution of age at diagnosis')
ax[0].set(xlabel="Age at diagnosis (years)")

# Q-Q plot from scipy.stats
stats.probplot(kirc_data['age_at_diagnosis_years'], dist="norm", plot=ax[1])

fig.suptitle('Visual Tests for Validity of T test on KIRC samples', fontsize=16)
plt.savefig('../3-Outputs/kirc_ttest_valid.png')  # save before plt.show(), which clears the figure
plt.show()
```

We see that the data is fairly normally distributed in the histogram/distribution plot, and that the values land on the normality line in the Q-Q plot.

Now to test the Type I error, we will do a _simulation study_ with the mean of the random distribution equal to the null value __$H_{0}: \mu=55$__ and the _standard deviation_ of the original "KIRC" data. What we are trying to accomplish with the simulation study is to estimate the probability of erroneously rejecting the null hypothesis (*Type I error*), by calculating what fraction of the simulations fall outside our critical value. We want that _probability_ to be close to or under the $\alpha$ value specified (usually *0.05*).

```
def one_sample_simulation_values(N, n, mu, mu_0, sd_0, alpha=.95):
    """
    Runs Monte-Carlo simulations for a Normal Distribution of the described mean and standard dev.
""" sims = [] for i in range(N): # we can use either numpy or scipy.stats to create a sample of size n from a distribution samples_0 = np.random.normal(loc=mu_0, scale=sd_0, size=n) critical_value = stats.t.ppf(alpha, n-1) t = (np.mean(samples_0) - mu)/(np.std(samples_0,ddof=1)/np.sqrt(n)) sims.append(np.abs(t) > np.abs(critical_value)) return np.array(sims) # we set a 'seed' number so that we can reproduce the random variable np.random.seed(2) # Checking the Type I error value from simulations # we use mean to calculate this from the array returned by the function print("Observed Alpha = {0:5f}". format(np.mean(one_sample_simulation_values(N=5000,n=n_k,mu=55,mu_0=55,sd_0=sd_k_age,alpha=.025)))) ``` The Type I error rate is close to 0.05, agreeing with the theory quite well. Had we not gotten a value that met theory, we would need to use a different type of statistical test for our hypothesis. Now we can say by looking at the output from the hypothesis tests, the __`p-value = 0.0004`__ is << 0.05, the interpretation is that under the null hypothesis _mean age at diagnosis = 55_ , the probability of seeing a test statistic as extreme or more than what’s observed is __0.0004__ . Since this is less than our significance level of 0.05, we reject the null hypothesis. _mean age at diagnosis = 55._ #### One sided interpretation The test above tells us that the mean _age at diagnosis_ is unlikely to be equal to 55, but what if we want to test whether the mean is specifically *greater than* 55? A one sided test has an alternative hypothesis that only goes one way: > $H_{0}: mean = 55$ > > $H_{1}: mean \gt 55$ As described in the background section, t is compared to the value of the t distribution at a certain point (the critical value). When doing a 'greater than' test, we reject the $H_{0}$ if t > critical value. When doing a 'less than' test, we reject the $H_{0}$ if t < critical value. 
Since the t distribution is symmetric around 0, the p value of the 'two sided' test we did before is simply 2 times the p value of either one sided test.

To interpret the results above for our one sided hypothesis test, we just need to observe:

1. The sign of the test statistic
2. The p value divided by 2

If $H_{1}: mean \gt x$, the following is needed to reject the null:

- t > 0
- (p/2) < $\alpha$

If $H_{1}: mean \lt x$, the following is needed to reject the null:

- t < 0
- (p/2) < $\alpha$

The results from the t test we did above have t > 0 and (p/2) < 0.05. Since we want to use it as a one sided test with $H_{1}: mean > 55$, these results reject $H_{0}$.

### 3.1.2 Z test

Another test statistic that can be used for one sample hypothesis testing is **z**. The formula is very similar to the t statistic:

$$ z = \frac{\overline{x} - m_{0}}{\sigma / \sqrt{n}} $$

The difference is that in place of the standard deviation of the sample, we use the standard deviation $\sigma$ of the population. This alternate test statistic can be used in situations where you know that information.

Let's look at the ___KIRC data___ again and test the following using the z statistic:

> $H_{0}: mean_{age \ at \ diagnosis} = 55$
>
> $H_{1}: mean_{age \ at \ diagnosis} \gt 55$
>
> $\alpha = 0.05$

The SciPy stats package doesn't offer a simple z test function, so let's use the statsmodels package:

```
# statsmodels z test, 'value' refers to the mean we are testing
sm.stats.ztest(kirc_data.age_at_diagnosis_years, value=55, alternative='larger')
```

Since z > 0 and p < $\alpha$, we reject that the mean age at diagnosis = 55. It is likely that the population mean is greater than 55, and we reject the null hypothesis at a 5% significance level.

### 3.1.3 Confidence Intervals

Remember that confidence intervals and hypothesis tests complement each other.
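Before elaborating, here is a quick numerical sketch of that duality using synthetic data (the sample parameters are assumptions for illustration): the null value lies outside the 95% confidence interval exactly when the two-sided test rejects at $\alpha = 0.05$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(loc=60, scale=8, size=66)  # hypothetical ages

# 95% confidence interval for the mean, based on the t distribution
lb, ub = stats.t.interval(0.95, df=len(sample) - 1,
                          loc=np.mean(sample), scale=stats.sem(sample))

# the corresponding two-sided test of mean = 55
_, pval = stats.ttest_1samp(sample, popmean=55)

# duality: 55 lies outside the 95% CI exactly when the test rejects at alpha = 0.05
outside_ci = not (lb <= 55 <= ub)
rejected = pval < 0.05
```

Both decisions use the same t quantiles and standard error, so `outside_ci` and `rejected` always agree.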
We can think of the confidence interval as containing all the values that would not have been rejected by the corresponding hypothesis test. Since the two complement each other, in most applications it is a good idea to report the results of both.

So let's calculate the _confidence interval_ for our _"KIRC"_ data at the $\alpha=0.05$ level.

```
def confidence_interval(sample, alpha=0.05):
    """
    Computes the confidence interval for the provided sample and alpha value; alpha is 0.05 by default.
    """
    n = len(sample)
    mu = np.mean(sample)
    sd = np.std(sample, ddof=1)  # ddof=1 for the sample standard deviation
    return mu + stats.t.ppf(np.array([alpha/2, 1 - alpha/2]), df=n-1) * (sd/np.sqrt(n))

# computing confidence interval for KIRC data
lb, ub = confidence_interval(kirc_data.age_at_diagnosis_years)
print("The Lower Bound of the C.I. = {0:2f} , the Upper Bound of the C.I. = {1:2f}".format(lb, ub))
```

We see that the _null hypothesis_ value _mean age at diagnosis = 55_ is outside the bounds of the _confidence interval_, agreeing that we should reject the _null hypothesis_.

## 3.2 T Test on two samples

So far we've only asked questions about the mean of the underlying population for one sample. More often you might want to compare two samples. Let's say you want to answer the following:

> Question 2:
>
> Is the mean _age (years) at diagnosis_ different between people with _colon cancer (COAD)_ and people with _prostate cancer (PRAD)_?

The hypotheses would look like this:

> $H_{0}: mean_{COAD} = mean_{PRAD}$
>
> $H_{1}: mean_{COAD} \neq mean_{PRAD}$
>
> $\alpha = 0.05$

Without getting into the formulas, which we already saw in the first section of this notebook, a test on two samples follows a similar process to one sample tests: a test statistic is computed and compared to critical values.

**NOTE: The appropriate test depends on how the samples relate to each other**

- Are the samples paired?
    - (e.g.
If the two samples represent the same patients over two visits, then each value in one sample is 'paired' with a value in the other sample)
- Are the samples independent?
- Are the sample variances the same?

Our data to answer Question 2 consists of the samples with the COAD label and those with the PRAD label. These two groups are not paired, and they are independent. We will not assume they have equal variance.

Let's use the SciPy function for the two independent sample t test; it takes both samples as arguments. Note that the function by default assumes equal variance, meaning it performs a [_"Student's t-test"_](https://en.wikipedia.org/wiki/Student%27s_t-test), while if we set `equal_var` to _False_ it performs a [_"Welch t-test"_](https://en.wikipedia.org/wiki/Welch%27s_t-test) (or _"unequal variances t-test"_). We will use the _Welch t-test_: the _Student t-test_ and _Welch t-test_ give essentially the same result when equal sizes and variances can be assumed for the samples, but the _Welch t-test_ performs better than the _Student t-test_ whenever sample sizes and variances are unequal between the samples.

```
# Select samples with PRAD and COAD; ~.isna() is used here to drop any NaN values
prad = data.loc[(data['cancer_type'] == 'PRAD') & (~data.age_at_diagnosis_years.isna())]
coad = data.loc[(data['cancer_type'] == 'COAD') & (~data.age_at_diagnosis_years.isna())]

# Call the two sample t test function from scipy, and set 'equal_var' to False
tstat, pval = stats.ttest_ind(coad.age_at_diagnosis_years, prad.age_at_diagnosis_years, equal_var=False)

mu_c_age = coad.age_at_diagnosis_years.mean()
mu_p_age = prad.age_at_diagnosis_years.mean()

print("COAD mean age at diagnosis: {0:2f} \t PRAD mean age at diagnosis: {1:2f}, \n\
test statistic: {2:2f} \t p-value: {3:2f}".format(mu_c_age, mu_p_age, tstat, pval))
```

The results above (p < $\alpha$) reject the hypothesis that the means are equal.
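To see why the choice of Welch over Student matters, here is a small synthetic sketch (the group means, spreads, and sizes are assumptions for illustration). When variances and sample sizes differ sharply, the pooled-variance (Student) and separate-variance (Welch) statistics diverge noticeably.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# hypothetical groups sharing the same mean but with very different spreads and sizes
small_noisy = rng.normal(loc=60, scale=15, size=20)
large_tight = rng.normal(loc=60, scale=2, size=200)

# Student's t-test pools the variances; Welch's keeps them separate
t_student, p_student = stats.ttest_ind(small_noisy, large_tight, equal_var=True)
t_welch, p_welch = stats.ttest_ind(small_noisy, large_tight, equal_var=False)

print(t_student, p_student)
print(t_welch, p_welch)
```

The pooled standard error here is dominated by the large tight group, so the Student statistic badly misstates the uncertainty of the small noisy group's mean.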
It is likely that the mean _age at diagnosis_ is different in the population of people with COAD versus those with PRAD; in fact we can see that $mean_{age \ coad}=66.8$ and $mean_{age \ prad}=61.1$.

Just as we did with the _One Sample T-test_, we can run simulation studies to check the _Type I_ error for our _Two Sample_ tests; we just have to adjust the formula in our `one_sample_simulation_values` function.

_Note:_ We should by now note the usefulness of functions in Python; whenever possible we should aim to convert a repetitive or convenient chunk of code into a function. We will follow this practice in the changes we make to the simulation function.

```
def two_sample_simulation_values(N, n1, n2, mu1, mu2, sd1, sd2, alpha=.95, z_test=True):
    """
    Runs Monte-Carlo simulations for a Normal Distribution of the described mean and standard dev.
    """
    sims = []
    for i in range(N):
        # we can use either numpy or scipy.stats to create a sample of size n from a distribution
        sample_1 = np.random.normal(loc=mu1, scale=sd1, size=n1)
        sample_2 = np.random.normal(loc=mu2, scale=sd2, size=n2)
        if z_test == True:
            test_stat = z_test_statistic_two_sample(sample_1, sample_2)
            critical_value = stats.norm.ppf(1-alpha)
            test = np.abs(test_stat) > np.abs(critical_value)
        else:
            # Call the two sample t test function from scipy, and set 'equal_var' to False
            test_stat, pval = stats.ttest_ind(sample_1, sample_2, equal_var=False)
            test = pval < 1-alpha
        sims.append(test)
    return np.array(sims)

def z_test_statistic_two_sample(sample1, sample2):
    """
    Computes the z statistic for a 2 sample test.
    """
    # computing means of samples
    mu_1 = np.mean(sample1)
    mu_2 = np.mean(sample2)
    # computing the standard errors
    se_1 = np.std(sample1)**2 / len(sample1)
    se_2 = np.std(sample2)**2 / len(sample2)
    z = (mu_1 - mu_2 - 0)/np.sqrt(se_1 + se_2)
    return z

# getting the variables to feed into simulations
n_p = len(prad)
n_c = len(coad)
sd_p = np.std(prad.age_at_diagnosis_years, ddof=1)
sd_c = np.std(coad.age_at_diagnosis_years, ddof=1)

# we set a 'seed' number so that we can reproduce the random variable
np.random.seed(2)

# Checking the Type I error value from simulations
# we use mean to calculate this from the array returned by the function
# always remember we want to set the means equal to each other
print("Observed Alpha for Z-test= {0:5f} \nObserved Alpha for Welch-test= {1:5f}".
      format(np.mean(two_sample_simulation_values(N=5000, n1=n_p, n2=n_c, mu1=62, mu2=62, sd1=sd_p, sd2=sd_c, alpha=.95)),
             np.mean(two_sample_simulation_values(N=5000, n1=n_p, n2=n_c, mu1=62, mu2=62, sd1=sd_p, sd2=sd_c, alpha=.95, z_test=False))))
```

Running the simulations for both the _Z-test_ and the _Welch t-test_, we see that only the ___Welch t-test___ rejects at about the theoretical $\alpha=0.05$ rate, as expected.

## 3.3 ANOVA

Say we want to ask a question about several groups at once, such as the genes we selected as _Top value genes_ in _the Novice Tutorial_:

> Question 3:
>
> Say we know that gene count data is important in determining how likely genes are to react to a drug.
> Hence we would like to know, for our _Top value genes_,
> __do all the different genes have the same underlying mean in their count data?__

One-way Analysis of Variance (ANOVA) can be used to test this. Conceptually, the approach of ANOVA is to compare the variance *within* the individual groups to the variance *between* them. A test statistic called the F statistic is calculated and compared to critical values, as with simple hypothesis tests.

Let's use ANOVA to answer Question 3. First, our hypotheses:

> $H_{0}: \delta_{i} = 0$ for all groups 'i'
>
> $H_{1}: \delta_{i} \neq 0$ for at least one group 'i'
>
> *where $\delta_{i}$ is the difference between the mean of group **i** and the overall mean*
>
> $\alpha = 0.05$

To do this, let's make use of the one way ANOVA functions from both the SciPy and statsmodels packages.
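Before moving to the library functions, the within/between-variance idea can be sketched by computing the F statistic by hand on synthetic groups (the group means, spread, and sizes are assumptions for illustration) and checking it against `scipy.stats.f_oneway`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# three hypothetical groups with different means
groups = [rng.normal(loc=m, scale=1.0, size=50) for m in (10.0, 10.5, 12.0)]

k = len(groups)
n_total = sum(len(g) for g in groups)
grand_mean = np.mean(np.concatenate(groups))

# between-group sum of squares: spread of the group means around the grand mean
ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)

# within-group sum of squares: spread of the observations inside each group
ss_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups)

# F is the ratio of the two mean squares
f_manual = (ss_between / (k - 1)) / (ss_within / (n_total - k))
f_scipy, p_scipy = stats.f_oneway(*groups)
```

A large F means the group means spread out more than the noise inside each group can explain.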
For us to use ANOVA, we must check its assumptions:

1. Independence (of samples and of observations within each sample)
    - _We already know the samples are independent_
2. Equal variances
    - _We will do a visual check to see which genes meet this assumption_
3. Large sample sizes or normal distributions
    - _We have sample sizes of over 300, large enough_

```
# Select a few genes to include in the comparison
# Rather than the usual method of subsetting a dataframe, we create a list of each
# individual group because the statsmodels ANOVA function requires
# a list of one dimensional arrays as input, rather than a two dimensional dataframe
gene_names = ['ENSG00000034510','ENSG00000075624','ENSG00000087086','ENSG00000112306',
              'ENSG00000137154','ENSG00000142534','ENSG00000184009','ENSG00000231500']

# Check the variances through a boxplot
fig_dims = (16, 6)
fig, ax = plt.subplots(figsize=fig_dims)
sns.boxplot(x="variable", y="value", data=pd.melt(data[gene_names]), ax=ax)
plt.title('Box-plot for "Top Gene Count" Genes')
plt.savefig('../3-Outputs/boxplot_topgenes_anova.png')
plt.show()
```

We see that not all the genes appear to have _equal variances_; the only ones that seem close enough are __'ENSG00000075624', 'ENSG00000112306', 'ENSG00000137154', 'ENSG00000142534'__, so we'll run an ANOVA test on these.
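The visual boxplot check can be complemented by a formal homogeneity-of-variance test such as Levene's test. A sketch on synthetic groups (the means, spreads, and sizes are assumptions, since we don't have the gene data here): groups with clearly different spreads get a small p-value, while similar groups do not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# hypothetical gene-count groups: two with similar spread, one much wider
g1 = rng.normal(loc=10, scale=1.0, size=300)
g2 = rng.normal(loc=12, scale=1.1, size=300)
g3 = rng.normal(loc=11, scale=4.0, size=300)

# Levene's test: H0 is that all the groups share the same variance
stat_all, p_all = stats.levene(g1, g2, g3)

# restricting to the two similar groups gives a much weaker signal
stat_pair, p_pair = stats.levene(g1, g2)
```

Note that Levene's test, unlike a group-mean comparison, is insensitive to the groups having different means.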
```
cols = ['ENSG00000075624','ENSG00000112306','ENSG00000137154','ENSG00000142534']

# Call the one-way ANOVA function from SciPy
stats.f_oneway(data[cols[0]], data[cols[1]], data[cols[2]], data[cols[3]])

# ANOVA with statsmodels functions, plus a helper function to display the table ########
data_anova = pd.melt(data[cols])  # data in the format needed
model = ols('value ~ C(variable)', data=data_anova).fit()  # 'R'-like formula for the ANOVA model
aov_table = sm.stats.anova_lm(model, typ=2)  # running the ANOVA

def anova_table(aov):
    """
    Created specifically for the one-way ANOVA table results returned for Type II sum of squares.
    """
    aov['mean_sq'] = aov[:]['sum_sq']/aov[:]['df']
    aov['eta_sq'] = aov[:-1]['sum_sq']/sum(aov['sum_sq'])
    aov['omega_sq'] = (aov[:-1]['sum_sq']-(aov[:-1]['df']*aov['mean_sq'][-1]))/(sum(aov['sum_sq'])+aov['mean_sq'][-1])
    cols = ['sum_sq', 'df', 'mean_sq', 'F', 'PR(>F)', 'eta_sq', 'omega_sq']
    aov = aov[cols]
    return aov

anova_table(aov_table)
```

Both _SciPy_ and _statsmodels_ give the same values for the ___F statistic = 153.15___ and ___p-value = 8.41e-86___, but statsmodels adds the bonus of "R style" formulas and a nice table display. The important thing to note is the p value: since $p < \alpha$, we can reject the hypothesis that all 4 selected genes have the same underlying mean count data.

# 4. Power and Sample Size

In designing an experiment, in addition to controlling the Type I error rate (typically at a level of 0.05), it is also important to control the Type II error rate, or equivalently the power:

> **Power = *P* (Reject *$H_{0}$* given that the alternative *$H_{1}$* holds)**

Also written as **Power = 1 - *P* (Type II error) = 1 - $\beta$**

Recall that for our experiment and hypothesis tests, we first address the _Type I_ error and then address the _Type II_ error by calculating the desired power and adjusting the sample size as needed.
A good way to think of power is:

> As researchers, we would like to know how much power there will be to detect the alternative hypothesis $H_{1}$ of interest.

So to calculate power and check that a sample size is large enough, we have to decide how big a difference in means we want to be able to detect in our experiment.

Let's work through calculating the power for some of our hypothesis tests from the previous section. By tradition we usually want at least 80% power in our experiments at $\alpha=0.05$. So we'll compute the current power, and if we don't have enough we'll calculate the sample size needed to give us the desired power.

## 4.1 One Sample Power

Given that we are interested in the mean _age at diagnosis_ for _'KIRC'_ cancer:

> Question 1:
>
> Is the mean _age (in years) at diagnosis_ for 'KIRC' (kidney) cancer equal to 55?

We are also interested in calculating the __power to detect a difference in the *mean age at diagnosis* of *4 years*__.

```
# Functions for One Sample power and sample size calculations
def one_sample_size_required(power, N, n, mu, mu_0, sd_0, alpha=.95):
    """
    Calculates the minimum sample size needed to achieve the desired power.
    """
    sim_power = 0
    power_values = list()
    n_tem = n
    while(sim_power <= power):
        simulation_value = [np.mean(one_sample_simulation_values(N, n, mu, mu_0, sd_0, alpha))]
        sim_power = np.mean(simulation_value)
        power_values.append([sim_power])
        if n_tem == n:
            print("Current Sample size: {0:1d} \t Current Power: {1:1f}".format(n, sim_power))
        n += 1
    # n was incremented once more after the successful simulation, so report n - 1
    print("Needed Sample size: {0:1d} \t\t Final Power: {1:1f}".format(n - 1, sim_power))

def one_sample_size_calculation_formula(sd, diff_means, pow_wanted, pow_alpha):
    """
    Calculates the sample size needed for the desired power through the formula.
    """
    return (sd**2 * (stats.norm.ppf(pow_wanted) + stats.norm.ppf(pow_alpha))**2)/diff_means**2

one_sample_size_required(power=.80, N=5000, n=n_k, mu=59, mu_0=55, sd_0=sd_k_age, alpha=.975)
one_sample_size_calculation_formula(sd=sd_k_age, diff_means=4, pow_wanted=.8, pow_alpha=.975)
```

So we see that the starting power to detect a _difference of 4 years_ in the _mean age at diagnosis_ for our experiment is 66%, which is low compared to the 80% we want. From both the `one_sample_size_required` and `one_sample_size_calculation_formula` functions, we see that we would need a sample size of around 90 to achieve the desired power.

## 4.2 Two Sample Power

Given our second question:

> Question 2:
>
> Is the mean _age (years) at diagnosis_ different between people with _colon cancer (COAD)_ and people with _prostate cancer (PRAD)_?

We again want to calculate the __power to detect a difference in the *mean age at diagnosis* of *6 years*__.

```
def two_sample_size_required(power, N, n1, n2, mu1, mu2, sd1, sd2, alpha=.975):
    """
    Calculates the minimum sample size needed to achieve the desired power.
    """
    sim_power = 0
    power_values = list()
    n = min(n1, n2)
    n_tem = n
    while(sim_power <= power):
        simulation_value = [np.mean(two_sample_simulation_values(N, n1, n2, mu1, mu2, sd1, sd2, alpha, z_test=False))]
        sim_power = np.mean(simulation_value)
        power_values.append([sim_power])
        if n_tem == n:
            print("Current Sample size: {0:1d} \t Current Power: {1:1f}".format(n, sim_power))
        n += 1
    # n was incremented once more after the successful simulation, so report n - 1
    print("Needed Sample size: {0:1d} \t\t Final Power: {1:1f}".format(n - 1, sim_power))

def two_sample_size_calculation_formula(sd1, sd2, diff_means, pow_wanted, pow_alpha):
    """
    Calculates the sample size needed for the desired power for two sample groups.
    """
    return ((sd1**2 + sd2**2)*(stats.norm.ppf(pow_wanted) + stats.norm.ppf(pow_alpha))**2)/diff_means**2

two_sample_size_calculation_formula(sd1=sd_k_age, sd2=sd_c, diff_means=6, pow_wanted=.8, pow_alpha=.975)
```

We see again that our starting sample size is too small for our desired power of 80% to detect a _difference of 6 years in the mean age at diagnosis_. From our functions we see that we need a sample size of about 72 to reach the desired power.
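The closed-form sample-size calculation above can also be written with the $\alpha$ handling made explicit. A sketch of the one-sample z-approximation, $n = \sigma^2 (z_{1-\alpha/2} + z_{\beta})^2 / \Delta^2$ (the standard deviation and differences below are assumed values for illustration):

```python
import numpy as np
from scipy import stats

def one_sample_n(sd, diff, power=0.80, alpha=0.05, two_sided=True):
    """Approximate sample size for a one-sample z-based test to detect `diff` with the given power."""
    z_alpha = stats.norm.ppf(1 - alpha / 2) if two_sided else stats.norm.ppf(1 - alpha)
    z_beta = stats.norm.ppf(power)
    return (sd ** 2) * (z_alpha + z_beta) ** 2 / diff ** 2

# hypothetical standard deviation of 12 years
n_small_diff = one_sample_n(sd=12, diff=4)   # harder: detect a 4-year shift
n_big_diff = one_sample_n(sd=12, diff=6)     # easier: detect a 6-year shift
```

As the formula shows, the required sample size shrinks with the square of the detectable difference and grows with the square of the standard deviation.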
# Wrapping up

In this tutorial we were introduced to concepts in hypothesis testing, different test statistics, statistical simulations, and power calculations. We worked through examples of how to use functions in the __SciPy__ and __statsmodels__ Python libraries to perform different hypothesis tests pertinent to the pancan genomic data. You should now have the fundamentals to properly design and evaluate hypothesis tests, and to build on your research by applying the teachings of this tutorial.

You can now proceed to the next tutorial in the PANCAN Genomic series or jump back into the previous ones:

- [Tutorial 1 - Beginner](https://github.com/fredhutchio/ml-pancancer-example/blob/main/2-Tutorials/Tutorial%201%20-%20Beginner.ipynb)
- [Tutorial 2 - Intermediate](https://github.com/fredhutchio/ml-pancancer-example/blob/main/2-Tutorials/Tutorial%202%20-%20Intermediate.ipynb) (*Current*)
- [Tutorial 3 - Advanced](https://github.com/fredhutchio/ml-pancancer-example/blob/main/2-Tutorials/Tutorial%203%20-%20Advanced.ipynb)

You can also jump into the [melanoma lesions series](https://github.com/fredhutchio/ml-melanoma-example), the other Data Science and Machine Learning tutorial.
# Crawling Naver Movies with Python

## Assignment description

- Target: the top 5 currently showing movies by ticket sales
- Items to collect: movie title, 3 lead actors, netizen rating, audience rating, journalist/critic rating, and the top 20 audience star-rating reviews by number of upvotes (rating, reviewer nickname, review body)

## Why this was selected as an outstanding assignment

This was the submission with the best modularization. The clean code saved the data in exactly the format the assignment author intended. That said, there is still room for improvement, such as stripping whitespace (`strip()`) when saving the data.

```
import requests
from bs4 import BeautifulSoup
import re
```

### 1. Get the top 5 currently showing movies by ticket sales

Store a list of the 5 movie codes needed for the movie lookups.

```
def get_movie_codes():
    codes = []  # list to store the codes

    url = 'https://movie.naver.com/movie/running/current.nhn'
    res = requests.get(url)
    html = res.text
    soup = BeautifulSoup(html, 'html.parser')

    i = 0
    for tag in soup.find('ul', class_='lst_detail_t1').find_all('li'):
        codes.append(tag.find('a').get('href')[28:])
        i += 1
        if i == 5:
            break

    return codes

codes = get_movie_codes()
codes
```

### 2. Get the movie title

```
def get_title(code):
    url = "https://movie.naver.com/movie/bi/mi/basic.nhn?code=" + code
    res = requests.get(url)
    html = res.text
    soup = BeautifulSoup(html, 'html.parser')

    title = soup.find('h3', class_='h_movie').find('a').text
    return title

get_title("186613")
```

### 3. Get 3 cast members

```
def get_actor(code):
    url = "https://movie.naver.com/movie/bi/mi/basic.nhn?code=" + code
    res = requests.get(url)
    html = res.text
    soup = BeautifulSoup(html, 'html.parser')

    people = soup.find("div", class_="people").find_all('a', class_='tx_people')

    actor = []
    for i in range(1, 4):
        actor.append(people[i].text)

    return actor

get_actor("187321")
```

### 4. Get the ratings

```
def get_grade(code):
    url = "https://movie.naver.com/movie/bi/mi/basic.nhn?code=" + code
    res = requests.get(url)
    html = res.text
    soup = BeautifulSoup(html, 'html.parser')

    grade = {"audience_grade": "",
             "critic_grade": "",
             "netizen_grade": ""}

    grades = []
    i = 0
    for tx in soup.find_all('div', class_='star_score'):
        num = ""
        for em in tx.find_all('em'):
            num += em.text
        grades.append(num)
        i += 1
        if i == 3:
            break

    grade["audience_grade"] = grades[0]
    grade["critic_grade"] = grades[1]
    grade["netizen_grade"] = grades[2]

    return grade

get_grade("187321")
```

### 5. Get the top 20 audience reviews by upvotes

```
def get_reviews(code):
    reviews = []

    for i in range(1, 3):
        url = "https://movie.naver.com/movie/bi/mi/pointWriteFormList.nhn?code=" + code + \
              "&type=after&isActualPointWriteExecute=false&isMileageSubscriptionAlready=false&isMileageSubscriptionReject=false&page=" + str(i)
        res = requests.get(url)
        html = res.text
        soup = BeautifulSoup(html, 'html.parser')

        for review in soup.find('div', class_="score_result").find_all("li"):
            grade = review.find('em').text
            user_id = review.find('div', class_='score_reple').find('dl').find('span').text
            comment = review.find('div', class_='score_reple').find('p').text.strip()
            reviews.append({'grade': grade, 'user_id': user_id, 'comment': comment})

    return reviews

get_reviews("179181")
```

### 6. Store the contents for each movie

```
def dict_movie_contents(code):
    movie = {'title': "",
             'actor': [],
             'grade': {},
             'reviews': []}

    # get the movie title, 3 cast members, and ratings
    movie['title'] = get_title(code)
    movie['actor'] = get_actor(code)
    movie['grade'] = get_grade(code)

    # get 20 reviews
    movie['reviews'] = get_reviews(code)

    return movie

dict_movie_contents("179181")
```

### 7. Save to a file

```
# the default type is py (json, txt, etc. are possible)
def save(file_name="movies", save_type="py"):
    # first stage of saving (as a list)
    movies = []

    # load the top 5 movie codes
    codes = get_movie_codes()
    for code in codes:
        movies.append(dict_movie_contents(code))

    file = file_name + "." + save_type
    f = open(file, 'w', encoding='utf-8')
    f.write(str(movies))
    f.close()

save()
```
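Writing `str(movies)` to a `.py` file works, but the output is neither valid Python nor easily reloadable. A hedged alternative sketch using the standard `json` module (the sample data below is hypothetical, shaped like the dictionaries the tutorial builds):

```python
import json
import os
import tempfile

# hypothetical scraped result in the same shape the tutorial builds
movies = [
    {
        "title": "Example Movie",
        "actor": ["Actor A", "Actor B", "Actor C"],
        "grade": {"audience_grade": "9.1", "critic_grade": "7.5", "netizen_grade": "8.8"},
        "reviews": [{"grade": "10", "user_id": "user1", "comment": "Great!"}],
    }
]

path = os.path.join(tempfile.gettempdir(), "movies.json")

# ensure_ascii=False keeps Korean review text readable in the saved file
with open(path, "w", encoding="utf-8") as f:
    json.dump(movies, f, ensure_ascii=False, indent=2)

# the file round-trips back into the same Python structure
with open(path, encoding="utf-8") as f:
    loaded = json.load(f)
```

JSON also makes it trivial to load the data later with `json.load` instead of parsing a Python literal.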
<a href="https://colab.research.google.com/github/aqafridi/TensorFlow/blob/main/TensorFlow%3A%20Advanced%20Techniques/Advanced%20Computer%20Vision%20with%20TensorFlow/W2_Lab_1_Simple_Object_Detection.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Simple Object Detection in Tensorflow This lab will walk you through how to use object detection models available in [Tensorflow Hub](https://www.tensorflow.org/hub). In the following sections, you will: * explore the Tensorflow Hub for object detection models * load the models in your workspace * preprocess an image for inference * run inference on the models and inspect the output Let's get started! ## Imports ``` import tensorflow as tf import tensorflow_hub as hub from PIL import Image from PIL import ImageOps import tempfile from six.moves.urllib.request import urlopen from six import BytesIO ``` ### Download the model from Tensorflow Hub Tensorflow Hub is a repository of trained machine learning models which you can reuse in your own projects. - You can see the domains covered [here](https://tfhub.dev/) and its subcategories. - For this lab, you will want to look at the [image object detection subcategory](https://tfhub.dev/s?module-type=image-object-detection). - You can select a model to see more information about it and copy the URL so you can download it to your workspace. 
- We selected an [Inception ResNet version 2](https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1) model.
- You can also modify the following cell to choose the other model that we selected, [SSD MobileNet version 2](https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2).

```
# you can switch the commented lines here to pick the other model

# inception resnet version 2
module_handle = "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1"

# You can choose ssd mobilenet version 2 instead and compare the results
#module_handle = "https://tfhub.dev/google/openimages_v4/ssd/mobilenet_v2/1"
```

#### Load the model

Next, you'll load the model specified by the `module_handle`.
- This will take a few minutes.

```
model = hub.load(module_handle)
```

#### Choose the default signature

Some models in the Tensorflow Hub can be used for different tasks, so each model's documentation should show what *signature* to use when running the model.
- If you want to see whether a model has more than one signature, you can do something like `print(hub.load(module_handle).signatures.keys())`. In your case, the models you will be using only have the `default` signature, so you don't have to worry about other types.

```
# take a look at the available signatures for this particular model
model.signatures.keys()
```

Please choose the 'default' signature for your object detector.
- For object detection models, the 'default' signature will accept a batch of image tensors and output a dictionary describing the objects detected, which is what you'll want here.

```
detector = model.signatures['default']
```

### download_and_resize_image

This function downloads an image specified by a given "url", pre-processes it, and then saves it to disk.

```
def download_and_resize_image(url, new_width=256, new_height=256):
    '''
    Fetches an image online, resizes it and saves it locally.
    Args:
        url (string) -- link to the image
        new_width (int) -- size in pixels used for resizing the width of the image
        new_height (int) -- size in pixels used for resizing the length of the image

    Returns:
        (string) -- path to the saved image
    '''

    # create a temporary file ending with ".jpg"
    _, filename = tempfile.mkstemp(suffix=".jpg")

    # opens the given URL
    response = urlopen(url)

    # reads the image fetched from the URL
    image_data = response.read()

    # puts the image data in a memory buffer
    image_data = BytesIO(image_data)

    # opens the image
    pil_image = Image.open(image_data)

    # resizes the image. will crop if aspect ratio is different.
    pil_image = ImageOps.fit(pil_image, (new_width, new_height), Image.ANTIALIAS)

    # converts to the RGB colorspace
    pil_image_rgb = pil_image.convert("RGB")

    # saves the image to the temporary file created earlier
    pil_image_rgb.save(filename, format="JPEG", quality=90)

    print("Image downloaded to %s." % filename)

    return filename
```

### Download and preprocess an image

Now, using `download_and_resize_image`, you can get a sample image online and save it locally.
- We've provided a URL for you, but feel free to choose another image to run through the object detector.
- You can use the original width and height of the image, but feel free to modify it and see what results you get.

```
# You can choose a different URL that points to an image of your choice
image_url = "https://upload.wikimedia.org/wikipedia/commons/f/fb/20130807_dublin014.JPG"

# download the image and use the original height and width
downloaded_image_path = download_and_resize_image(image_url, 3872, 2592)
```

### run_detector

This function will take in the object detection model `detector` and the path to a sample image, then use this model to detect objects and display the predicted class categories and detection boxes.
- `run_detector` uses `load_img` to convert the image into a tensor.

```
def load_img(path):
    '''
    Loads a JPEG image and converts it to a tensor.
Args: path (string) -- path to a locally saved JPEG image Returns: (tensor) -- an image tensor ''' # read the file img = tf.io.read_file(path) # convert to a tensor img = tf.image.decode_jpeg(img, channels=3) return img def run_detector(detector, path): ''' Runs inference on a local file using an object detection model. Args: detector (model) -- an object detection model loaded from TF Hub path (string) -- path to an image saved locally ''' # load an image tensor from a local file path img = load_img(path) # add a batch dimension in front of the tensor converted_img = tf.image.convert_image_dtype(img, tf.float32)[tf.newaxis, ...] # run inference using the model result = detector(converted_img) # save the results in a dictionary result = {key:value.numpy() for key,value in result.items()} # print results print("Found %d objects." % len(result["detection_scores"])) print(result["detection_scores"]) print(result["detection_class_entities"]) print(result["detection_boxes"]) ``` ### Run inference on the image You can run your detector by calling the `run_detector` function. This will print the number of objects found followed by three lists: * The detection scores of each object found (i.e. how confident the model is), * The classes of each object found, * The bounding boxes of each object You will see how to overlay this information on the original image in the next sections and in this week's assignment! ``` # runs the object detection model and prints information about the objects found run_detector(detector, downloaded_image_path) ```
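`run_detector` prints every detection, including low-confidence ones. A common next step is to keep only detections above a score threshold. The sketch below uses a hypothetical result dictionary mimicking the format printed above (the scores, labels, and boxes are made-up values), with plain NumPy boolean indexing across the parallel arrays:

```python
import numpy as np

# hypothetical detector output mimicking the result dictionary printed above
result = {
    "detection_scores": np.array([0.95, 0.60, 0.20, 0.05]),
    "detection_class_entities": np.array(["Bus", "Person", "Tree", "Car"]),
    "detection_boxes": np.array([
        [0.10, 0.10, 0.50, 0.50],
        [0.20, 0.60, 0.40, 0.90],
        [0.00, 0.00, 1.00, 1.00],
        [0.30, 0.30, 0.40, 0.40],
    ]),
}

# keep only detections the model is reasonably confident about
min_score = 0.5
keep = result["detection_scores"] >= min_score

scores = result["detection_scores"][keep]
labels = result["detection_class_entities"][keep]
boxes = result["detection_boxes"][keep]
```

Because the three arrays are parallel (one entry per detection), the same boolean mask filters all of them consistently.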
```
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```

# Vertex client library: AutoML image classification model for batch prediction

<table align="left">
  <td>
    <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_classification_batch.ipynb">
      <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
    </a>
  </td>
  <td>
    <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_classification_batch.ipynb">
      <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub
    </a>
  </td>
</table>
<br/><br/><br/>

## Overview

This tutorial demonstrates how to use the Vertex client library for Python to create image classification models and do batch prediction using Google Cloud's [AutoML](https://cloud.google.com/vertex-ai/docs/start/automl-users).

### Dataset

The dataset used for this tutorial is the [Flowers dataset](https://www.tensorflow.org/datasets/catalog/tf_flowers) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of flower an image is from a class of five flowers: daisy, dandelion, rose, sunflower, or tulip.
### Objective

In this tutorial, you create an AutoML image classification model from a Python script, and then do a batch prediction using the Vertex client library. You can alternatively create and deploy models using the `gcloud` command-line tool or online using the Google Cloud Console.

The steps performed include:

- Create a Vertex `Dataset` resource.
- Train the model.
- View the model evaluation.
- Make a batch prediction.

There is one key difference between using batch prediction and using online prediction:

* Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.
* Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready.

### Costs

This tutorial uses billable components of Google Cloud (GCP):

* Vertex AI
* Cloud Storage

Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage.

## Installation

Install the latest version of the Vertex client library.

```
import os
import sys

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install -U google-cloud-aiplatform $USER_FLAG
```

Install the latest GA version of the *google-cloud-storage* library as well.

```
! pip3 install -U google-cloud-storage $USER_FLAG
```

### Restart the kernel

Once you've installed the Vertex client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
```
if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
```

## Before you begin

### GPU runtime

*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU**

### Set up your Google Cloud project

**The following steps are required, regardless of your notebook environment.**

1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.

2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)

3. [Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)

4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook.

5. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.

**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.

```
PROJECT_ID = "[your-project-id]"  # @param {type:"string"}

if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
    # Get your GCP project id from gcloud
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)

! gcloud config set project $PROJECT_ID
```

#### Region

You can also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`

You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the [Vertex locations documentation](https://cloud.google.com/vertex-ai/docs/general/locations).

```
REGION = "us-central1"  # @param {type: "string"}
```

#### Timestamp

If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on the resources created, you create a timestamp for each instance session and append it onto the names of the resources you will create in this tutorial.

```
from datetime import datetime

TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```

### Authenticate your Google Cloud account

**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.

**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.

**Otherwise**, follow these steps:

1. In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.

2. Click **Create service account**.

3. In the **Service account name** field, enter a name, and click **Create**.

4. In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.

5. Click **Create**. A JSON file that contains your key downloads to your local environment.

6. Enter the path to your service account key as the `GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.

```
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account.
# This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.

# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
    if "google.colab" in sys.modules:
        from google.colab import auth as google_auth

        google_auth.authenticate_user()

    # If you are running this notebook locally, replace the string below with the
    # path to your service account key and run this cell to authenticate your GCP
    # account.
    elif not os.getenv("IS_TESTING"):
        %env GOOGLE_APPLICATION_CREDENTIALS ''
```

### Create a Cloud Storage bucket

**The following steps are required, regardless of your notebook environment.**

This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.

Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.

```
BUCKET_NAME = "gs://[your-bucket-name]"  # @param {type:"string"}

if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
```

**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.

```
! gsutil mb -l $REGION $BUCKET_NAME
```

Finally, validate access to your Cloud Storage bucket by examining its contents:

```
! gsutil ls -al $BUCKET_NAME
```

### Set up variables

Next, set up some variables used throughout the tutorial.

### Import libraries and define constants

#### Import Vertex client library

Import the Vertex client library into our Python environment.
```
import time

from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
```

#### Vertex constants

Set up the following constants for Vertex:

- `API_ENDPOINT`: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
- `PARENT`: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.

```
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)

# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
```

#### AutoML constants

Set constants unique to AutoML datasets and training:

- Dataset Schemas: Tells the `Dataset` resource service which type of dataset it is.
- Data Labeling (Annotations) Schemas: Tells the `Dataset` resource service how the data is labeled (annotated).
- Dataset Training Schemas: Tells the `Pipeline` resource service the task (e.g., classification) to train the model for.

```
# Image Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml"
# Image Labeling type
LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_classification_single_label_io_format_1.0.0.yaml"
# Image Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_classification_1.0.0.yaml"
```

#### Hardware Accelerators

Set the hardware accelerators (e.g., GPU), if any, for prediction.

Set the variable `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance.
For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:

    (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)

For GPU, available accelerators include:
- aip.AcceleratorType.NVIDIA_TESLA_K80
- aip.AcceleratorType.NVIDIA_TESLA_P100
- aip.AcceleratorType.NVIDIA_TESLA_P4
- aip.AcceleratorType.NVIDIA_TESLA_T4
- aip.AcceleratorType.NVIDIA_TESLA_V100

Otherwise specify `(None, None)` to use a container image that runs on a CPU.

```
if os.getenv("IS_TESTING_DEPLOY_GPU"):
    DEPLOY_GPU, DEPLOY_NGPU = (
        aip.AcceleratorType.NVIDIA_TESLA_K80,
        int(os.getenv("IS_TESTING_DEPLOY_GPU")),
    )
else:
    DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
```

#### Container (Docker) image

For AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected.

#### Machine Type

Next, set the machine type to use for prediction.

- Set the variable `DEPLOY_COMPUTE` to configure the compute resources for the VM you will use for prediction.
  - `machine type`
    - `n1-standard`: 3.75GB of memory per vCPU.
    - `n1-highmem`: 6.5GB of memory per vCPU.
    - `n1-highcpu`: 0.9GB of memory per vCPU.
  - `vCPUs`: number of vCPUs: \[2, 4, 8, 16, 32, 64, 96 \]

*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.*

```
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
    MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
    MACHINE_TYPE = "n1-standard"

VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
```

# Tutorial

Now you are ready to start creating your own AutoML image classification model.

## Set up clients

The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow, so set them all up upfront.

- Dataset Service for `Dataset` resources.
- Model Service for `Model` resources.
- Pipeline Service for training.
- Job Service for batch prediction and custom training.

```
# client options are the same for all services
client_options = {"api_endpoint": API_ENDPOINT}


def create_dataset_client():
    client = aip.DatasetServiceClient(client_options=client_options)
    return client


def create_model_client():
    client = aip.ModelServiceClient(client_options=client_options)
    return client


def create_pipeline_client():
    client = aip.PipelineServiceClient(client_options=client_options)
    return client


def create_job_client():
    client = aip.JobServiceClient(client_options=client_options)
    return client


clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["job"] = create_job_client()

for client in clients.items():
    print(client)
```

## Dataset

Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.

### Create `Dataset` resource instance

Use the helper function `create_dataset` to create the instance of a `Dataset` resource. This function does the following:

1. Uses the dataset client service.
2. Creates a Vertex `Dataset` resource (`aip.Dataset`), with the following parameters:
   - `display_name`: The human-readable name you choose to give it.
   - `metadata_schema_uri`: The schema for the dataset type.
3. Calls the client dataset service method `create_dataset`, with the following parameters:
   - `parent`: The Vertex location root path for your `Dataset`, `Model` and `Endpoint` resources.
   - `dataset`: The Vertex dataset object instance you created.
4. The method returns an `operation` object.

An `operation` object is how Vertex handles asynchronous calls for long running operations.
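The long-running-operation pattern can be sketched without the service itself. Here `FakeOperation` is a hypothetical stand-in (not a Vertex class) that mimics the `done()`/`result()` interface, and `wait_for_operation` is an illustrative polling helper:

```python
import time


class FakeOperation:
    """Stand-in that mimics a long-running operation's polling interface."""

    def __init__(self, polls_until_done=3):
        self._polls = 0
        self._polls_until_done = polls_until_done

    def done(self):
        # each poll simulates the server making progress
        self._polls += 1
        return self._polls >= self._polls_until_done

    def result(self):
        # a completed operation returns the created resource
        return {"name": "projects/123/locations/us-central1/datasets/456"}


def wait_for_operation(operation, poll_seconds=0.0):
    """Poll until the operation reports completion, then return its result."""
    while not operation.done():
        time.sleep(poll_seconds)
    return operation.result()


print(wait_for_operation(FakeOperation()))
```

A real client would pass the `operation` returned by `create_dataset` in place of the stub, typically with a poll interval of several seconds.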
While this step usually goes fast, when you first use it in your project there is a longer delay due to provisioning.

You can use the `operation` object to get status on the operation (e.g., create `Dataset` resource) or to cancel the operation, by invoking an operation method:

| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |

```
TIMEOUT = 90


def create_dataset(name, schema, labels=None, timeout=TIMEOUT):
    start_time = time.time()
    try:
        dataset = aip.Dataset(
            display_name=name, metadata_schema_uri=schema, labels=labels
        )

        operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
        print("Long running operation:", operation.operation.name)
        result = operation.result(timeout=timeout)
        print("time:", time.time() - start_time)
        print("response")
        print(" name:", result.name)
        print(" display_name:", result.display_name)
        print(" metadata_schema_uri:", result.metadata_schema_uri)
        print(" metadata:", dict(result.metadata))
        print(" create_time:", result.create_time)
        print(" update_time:", result.update_time)
        print(" etag:", result.etag)
        print(" labels:", dict(result.labels))
        return result
    except Exception as e:
        print("exception:", e)
        return None


result = create_dataset("flowers-" + TIMESTAMP, DATA_SCHEMA)
```

Now save the unique dataset identifier for the `Dataset` resource instance you created.
```
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]

print(dataset_id)
```

### Data preparation

The Vertex `Dataset` resource for images has some requirements for your data:

- Images must be stored in a Cloud Storage bucket.
- Each image file must be in an image format (PNG, JPEG, BMP, ...).
- There must be an index file stored in your Cloud Storage bucket that contains the path and label for each image.
- The index file must be either CSV or JSONL.

#### CSV

For image classification, the CSV index file has the requirements:

- No heading.
- First column is the Cloud Storage path to the image.
- Second column is the label.

#### Location of Cloud Storage training data

Now set the variable `IMPORT_FILE` to the location of the CSV index file in Cloud Storage.

```
IMPORT_FILE = (
    "gs://cloud-samples-data/vision/automl_classification/flowers/all_data_v2.csv"
)
```

#### Quick peek at your data

You will use a version of the Flowers dataset that is stored in a public Cloud Storage bucket, using a CSV index file.

Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (`wc -l`) and then peek at the first few rows.

```
if "IMPORT_FILES" in globals():
    FILE = IMPORT_FILES[0]
else:
    FILE = IMPORT_FILE

count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))

print("First 10 rows")
! gsutil cat $FILE | head
```

### Import data

Now, import the data into your Vertex Dataset resource. Use this helper function `import_data` to import the data. The function does the following:

- Uses the `Dataset` client.
- Calls the client method `import_data`, with the following parameters:
  - `name`: The human readable name you give to the `Dataset` resource (e.g., flowers).
  - `import_configs`: The import configuration.
- `import_configs`: A Python list containing a dictionary, with the key/value entries:
  - `gcs_sources`: A list of URIs to the paths of the one or more index files.
  - `import_schema_uri`: The schema identifying the labeling type.

The `import_data()` method returns a long running `operation` object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.

```
def import_data(dataset, gcs_sources, schema):
    config = [{"gcs_source": {"uris": gcs_sources}, "import_schema_uri": schema}]
    print("dataset:", dataset)
    start_time = time.time()
    try:
        operation = clients["dataset"].import_data(
            name=dataset, import_configs=config
        )
        print("Long running operation:", operation.operation.name)

        result = operation.result()
        print("result:", result)
        print("time:", int(time.time() - start_time), "secs")
        print("error:", operation.exception())
        print("meta :", operation.metadata)
        print(
            "after: running:",
            operation.running(),
            "done:",
            operation.done(),
            "cancelled:",
            operation.cancelled(),
        )

        return operation
    except Exception as e:
        print("exception:", e)
        return None


import_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA)
```

## Train the model

Now train an AutoML image classification model using your Vertex `Dataset` resource. To train the model, do the following steps:

1. Create a Vertex training pipeline for the `Dataset` resource.
2. Execute the pipeline to start the training.

### Create a training pipeline

You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:

1. Being reusable for subsequent training jobs.
2. Can be containerized and run as a batch job.
3. Can be distributed.
4. All the steps are associated with the same pipeline job for tracking progress.
Use this helper function `create_pipeline`, which takes the following parameters:

- `pipeline_name`: A human readable name for the pipeline job.
- `model_name`: A human readable name for the model.
- `dataset`: The Vertex fully qualified dataset identifier.
- `schema`: The dataset labeling (annotation) training schema.
- `task`: A dictionary describing the requirements for the training job.

The helper function calls the `Pipeline` client service's method `create_pipeline`, which takes the following parameters:

- `parent`: The Vertex location root path for your `Dataset`, `Model` and `Endpoint` resources.
- `training_pipeline`: The full specification for the pipeline training job.

Let's now look deeper into the *minimal* requirements for constructing a `training_pipeline` specification:

- `display_name`: A human readable name for the pipeline job.
- `training_task_definition`: The dataset labeling (annotation) training schema.
- `training_task_inputs`: A dictionary describing the requirements for the training job.
- `model_to_upload`: A human readable name for the model.
- `input_data_config`: The dataset specification.
  - `dataset_id`: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
  - `fraction_split`: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
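A malformed `fraction_split` is an easy mistake to make, so a quick sanity-check sketch can help. The helper below (`check_fraction_split` is hypothetical, not part of the client library) verifies that the three fractions are valid and cover the whole dataset:

```python
import math


def check_fraction_split(split):
    """Verify training/validation/test fractions are in [0, 1] and sum to 1."""
    fractions = [
        split["training_fraction"],
        split["validation_fraction"],
        split["test_fraction"],
    ]
    if any(f < 0 or f > 1 for f in fractions):
        raise ValueError("each fraction must be between 0 and 1")
    # use isclose to tolerate floating-point rounding (0.8 + 0.1 + 0.1)
    if not math.isclose(sum(fractions), 1.0):
        raise ValueError("fractions must sum to 1, got %s" % sum(fractions))
    return True


print(check_fraction_split({
    "training_fraction": 0.8,
    "validation_fraction": 0.1,
    "test_fraction": 0.1,
}))  # True
```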
```
def create_pipeline(pipeline_name, model_name, dataset, schema, task):

    dataset_id = dataset.split("/")[-1]

    input_config = {
        "dataset_id": dataset_id,
        "fraction_split": {
            "training_fraction": 0.8,
            "validation_fraction": 0.1,
            "test_fraction": 0.1,
        },
    }

    training_pipeline = {
        "display_name": pipeline_name,
        "training_task_definition": schema,
        "training_task_inputs": task,
        "input_data_config": input_config,
        "model_to_upload": {"display_name": model_name},
    }

    try:
        pipeline = clients["pipeline"].create_training_pipeline(
            parent=PARENT, training_pipeline=training_pipeline
        )
        print(pipeline)
    except Exception as e:
        print("exception:", e)
        return None
    return pipeline
```

### Construct the task requirements

Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the `task` field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the `json_format.ParseDict` method for the conversion.

The minimal fields we need to specify are:

- `multi_label`: Whether this is a multi-label (vs. single-label) classification.
- `budget_milli_node_hours`: The maximum time to budget (billed) for training the model, where 1000 = 1 hour. For image classification, the budget must be a minimum of 8 hours.
- `model_type`: The type of deployed model:
  - `CLOUD`: For deploying to Google Cloud.
  - `MOBILE_TF_LOW_LATENCY_1`: For deploying to the edge and optimizing for latency (response time).
  - `MOBILE_TF_HIGH_ACCURACY_1`: For deploying to the edge and optimizing for accuracy.
  - `MOBILE_TF_VERSATILE_1`: For deploying to the edge and optimizing for a trade-off between latency and accuracy.
- `disable_early_stopping`: Whether to let AutoML use its judgement to stop training early or train for the entire budget.

Finally, create the pipeline by calling the helper function `create_pipeline`, which returns an instance of a training pipeline object.
```
PIPE_NAME = "flowers_pipe-" + TIMESTAMP
MODEL_NAME = "flowers_model-" + TIMESTAMP

task = json_format.ParseDict(
    {
        "multi_label": False,
        "budget_milli_node_hours": 8000,
        "model_type": "CLOUD",
        "disable_early_stopping": False,
    },
    Value(),
)

response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
```

Now save the unique identifier of the training pipeline you created.

```
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]

print(pipeline_id)
```

### Get information on a training pipeline

Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the job client service's `get_training_pipeline` method, with the following parameter:

- `name`: The Vertex fully qualified pipeline identifier.

When the model is done training, the pipeline state will be `PIPELINE_STATE_SUCCEEDED`.

```
def get_training_pipeline(name, silent=False):
    response = clients["pipeline"].get_training_pipeline(name=name)
    if silent:
        return response

    print("pipeline")
    print(" name:", response.name)
    print(" display_name:", response.display_name)
    print(" state:", response.state)
    print(" training_task_definition:", response.training_task_definition)
    print(" training_task_inputs:", dict(response.training_task_inputs))
    print(" create_time:", response.create_time)
    print(" start_time:", response.start_time)
    print(" end_time:", response.end_time)
    print(" update_time:", response.update_time)
    print(" labels:", dict(response.labels))
    return response


response = get_training_pipeline(pipeline_id)
```

# Deployment

Training the above model may take upwards of 20 minutes. Once your model is done training, you can calculate the actual time it took to train the model by subtracting `end_time` from `start_time`.
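As a sketch of that elapsed-time calculation, the `start_time` and `end_time` fields behave like timestamps, so subtracting one from the other yields a duration. Here plain `datetime` values (hypothetical, standing in for the pipeline's actual timestamps) illustrate the arithmetic:

```python
from datetime import datetime

# hypothetical stand-ins for response.start_time and response.end_time
pipeline_start = datetime(2021, 6, 1, 10, 0, 0)
pipeline_end = datetime(2021, 6, 1, 10, 23, 30)

training_time = pipeline_end - pipeline_start
print("Training Time:", training_time)                 # 0:23:30
print("Minutes:", training_time.total_seconds() / 60)  # 23.5
```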
For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field `model_to_deploy.name`.

```
while True:
    response = get_training_pipeline(pipeline_id, True)
    if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
        print("Training job has not completed:", response.state)
        model_to_deploy_id = None
        if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
            raise Exception("Training Job Failed")
    else:
        model_to_deploy = response.model_to_upload
        model_to_deploy_id = model_to_deploy.name
        print("Training Time:", response.end_time - response.start_time)
        break
    time.sleep(60)

print("model to deploy:", model_to_deploy_id)
```

## Model information

Now that your model is trained, you can get some information on your model.

## Evaluate the Model resource

Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.

### List evaluations for all slices

Use this helper function `list_model_evaluations`, which takes the following parameter:

- `name`: The Vertex fully qualified model identifier for the `Model` resource.

This helper function uses the model client service's `list_model_evaluations` method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.

For each evaluation (you probably only have one) we then print all the key names for each metric in the evaluation, and for a small set (`logLoss` and `auPrc`) you will print the result.
```
def list_model_evaluations(name):
    response = clients["model"].list_model_evaluations(parent=name)
    for evaluation in response:
        print("model_evaluation")
        print(" name:", evaluation.name)
        print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
        metrics = json_format.MessageToDict(evaluation._pb.metrics)
        for metric in metrics.keys():
            print(metric)
        print("logloss", metrics["logLoss"])
        print("auPrc", metrics["auPrc"])

    return evaluation.name


last_evaluation = list_model_evaluations(model_to_deploy_id)
```

## Model deployment for batch prediction

Now deploy the trained Vertex `Model` resource you created for batch prediction. This differs from deploying a `Model` resource for on-demand prediction.

For online prediction, you:

1. Create an `Endpoint` resource for deploying the `Model` resource to.
2. Deploy the `Model` resource to the `Endpoint` resource.
3. Make online prediction requests to the `Endpoint` resource.

For batch-prediction, you:

1. Create a batch prediction job.
2. The job service will provision resources for the batch prediction request.
3. The results of the batch prediction request are returned to the caller.
4. The job service will unprovision the resources for the batch prediction request.

## Make a batch prediction request

Now do a batch prediction to your deployed model.

### Get test item(s)

Now do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
```
test_items = !gsutil cat $IMPORT_FILE | head -n2

if len(str(test_items[0]).split(",")) == 3:
    _, test_item_1, test_label_1 = str(test_items[0]).split(",")
    _, test_item_2, test_label_2 = str(test_items[1]).split(",")
else:
    test_item_1, test_label_1 = str(test_items[0]).split(",")
    test_item_2, test_label_2 = str(test_items[1]).split(",")

print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
```

### Copy test item(s)

For the batch prediction, you will copy the test items over to your Cloud Storage bucket.

```
file_1 = test_item_1.split("/")[-1]
file_2 = test_item_2.split("/")[-1]

! gsutil cp $test_item_1 $BUCKET_NAME/$file_1
! gsutil cp $test_item_2 $BUCKET_NAME/$file_2

test_item_1 = BUCKET_NAME + "/" + file_1
test_item_2 = BUCKET_NAME + "/" + file_2
```

### Make the batch input file

Now make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For a JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:

- `content`: The Cloud Storage path to the image.
- `mime_type`: The content type. In our example, it is a `jpeg` file.

For example:

    {'content': '[your-bucket]/file1.jpg', 'mime_type': 'image/jpeg'}

```
import json

import tensorflow as tf

gcs_input_uri = BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
    data = {"content": test_item_1, "mime_type": "image/jpeg"}
    f.write(json.dumps(data) + "\n")
    data = {"content": test_item_2, "mime_type": "image/jpeg"}
    f.write(json.dumps(data) + "\n")

print(gcs_input_uri)
! gsutil cat $gcs_input_uri
```

### Compute instance scaling

You have several choices on scaling the compute instances for handling your batch prediction requests:

- Single Instance: The batch prediction requests are processed on a single compute instance.
  - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one.
- Manual Scaling: The batch prediction requests are split across a fixed number of compute instances that you manually specified.
  - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and batch prediction requests are evenly distributed across them.
- Auto Scaling: The batch prediction requests are split across a scalable number of compute instances.
  - Set the minimum (`MIN_NODES`) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (`MAX_NODES`) number of compute instances to provision, depending on load conditions.

The minimum number of compute instances corresponds to the field `min_replica_count` and the maximum number of compute instances corresponds to the field `max_replica_count`, in your subsequent deployment request.

```
MIN_NODES = 1
MAX_NODES = 1
```

### Make batch prediction request

Now that your batch of two test items is ready, let's do the batch request. Use this helper function `create_batch_prediction_job`, with the following parameters:

- `display_name`: The human readable name for the prediction job.
- `model_name`: The Vertex fully qualified identifier for the `Model` resource.
- `gcs_source_uri`: The Cloud Storage path to the input file -- which you created above.
- `gcs_destination_output_uri_prefix`: The Cloud Storage path that the service will write the predictions to.
- `parameters`: Additional filtering parameters for serving prediction results.

The helper function calls the job client service's `create_batch_prediction_job` method, with the following parameters:

- `parent`: The Vertex location root path for Dataset, Model and Pipeline resources.
- `batch_prediction_job`: The specification for the batch prediction job.
Let's now dive into the specification for the `batch_prediction_job`:

- `display_name`: The human readable name for the prediction batch job.
- `model`: The Vertex fully qualified identifier for the `Model` resource.
- `dedicated_resources`: The compute resources to provision for the batch prediction job.
  - `machine_spec`: The compute instance to provision. If the variable you set earlier, `DEPLOY_GPU`, is not None, a GPU is used; otherwise only a CPU is allocated.
  - `starting_replica_count`: The number of compute instances to initially provision, which you set earlier as the variable `MIN_NODES`.
  - `max_replica_count`: The maximum number of compute instances to scale to, which you set earlier as the variable `MAX_NODES`.
- `model_parameters`: Additional filtering parameters for serving prediction results.
  - `confidence_threshold`: The threshold for returning predictions. Must be between 0 and 1.
  - `max_predictions`: The maximum number of predictions to return per classification, sorted by confidence.
- `input_config`: The input source and format type for the instances to predict.
  - `instances_format`: The format of the batch prediction request file: only `jsonl` is supported.
  - `gcs_source`: A list of one or more Cloud Storage paths to your batch prediction requests.
- `output_config`: The output destination and format for the predictions.
  - `prediction_format`: The format of the batch prediction response file: only `jsonl` is supported.
  - `gcs_destination`: The output destination for the predictions.

You might ask, how does `confidence_threshold` affect the model accuracy? The threshold won't change the accuracy. What it changes is recall and precision.

- Precision: The higher the precision, the more likely what is predicted is the correct prediction, but fewer predictions are returned. Increasing the confidence threshold increases precision.
- Recall: The higher the recall, the more likely a correct prediction is returned in the result, but more incorrect predictions are returned along with it. Decreasing the confidence threshold increases recall.

In this example, you will predict for precision. You set the confidence threshold to 0.5 and the maximum number of predictions for a classification to two. Since all the confidence values across the classes must add up to one, there are only two possible outcomes:

1. There is a tie, both 0.5, and two predictions are returned.
2. One value is above 0.5 and the rest are below 0.5, and one prediction is returned.

This call is an asynchronous operation. You will print from the response object a few select fields, including:

- `name`: The Vertex fully qualified identifier assigned to the batch prediction job.
- `display_name`: The human readable name for the prediction batch job.
- `model`: The Vertex fully qualified identifier for the `Model` resource.
- `generate_explanations`: Whether True/False explanations were provided with the predictions (explainability).
- `state`: The state of the prediction job (pending, running, etc).

Since this call will take a few moments to execute, you will likely get `JobState.JOB_STATE_PENDING` for `state`.
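The precision/recall trade-off above can be made concrete without calling the service. Below is a minimal sketch of the kind of threshold filtering the service applies; the `filter_predictions` helper, class names, and scores are made up for illustration:

```python
def filter_predictions(confidences, threshold, max_predictions):
    """Keep classes whose confidence meets the threshold, highest first."""
    kept = [(name, c) for name, c in confidences.items() if c >= threshold]
    kept.sort(key=lambda pair: pair[1], reverse=True)
    return kept[:max_predictions]

# Softmax confidences sum to one, so with threshold=0.5 at most two
# classes can qualify -- and two only on an exact 0.5/0.5 tie.
scores = {"daisy": 0.7, "rose": 0.2, "tulip": 0.1}
print(filter_predictions(scores, threshold=0.5, max_predictions=2))
# → [('daisy', 0.7)]
print(filter_predictions(scores, threshold=0.1, max_predictions=2))
# → [('daisy', 0.7), ('rose', 0.2)]
```

Raising the threshold trims low-confidence classes (higher precision, fewer results); lowering it keeps more of them (higher recall).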
```
BATCH_MODEL = "flowers_batch-" + TIMESTAMP


def create_batch_prediction_job(
    display_name,
    model_name,
    gcs_source_uri,
    gcs_destination_output_uri_prefix,
    parameters=None,
):
    if DEPLOY_GPU:
        machine_spec = {
            "machine_type": DEPLOY_COMPUTE,
            "accelerator_type": DEPLOY_GPU,
            "accelerator_count": DEPLOY_NGPU,
        }
    else:
        machine_spec = {
            "machine_type": DEPLOY_COMPUTE,
            "accelerator_count": 0,
        }
    batch_prediction_job = {
        "display_name": display_name,
        # Format: 'projects/{project}/locations/{location}/models/{model_id}'
        "model": model_name,
        "model_parameters": json_format.ParseDict(parameters, Value()),
        "input_config": {
            "instances_format": IN_FORMAT,
            "gcs_source": {"uris": [gcs_source_uri]},
        },
        "output_config": {
            "predictions_format": OUT_FORMAT,
            "gcs_destination": {"output_uri_prefix": gcs_destination_output_uri_prefix},
        },
        "dedicated_resources": {
            "machine_spec": machine_spec,
            "starting_replica_count": MIN_NODES,
            "max_replica_count": MAX_NODES,
        },
    }
    response = clients["job"].create_batch_prediction_job(
        parent=PARENT, batch_prediction_job=batch_prediction_job
    )
    print("response")
    print(" name:", response.name)
    print(" display_name:", response.display_name)
    print(" model:", response.model)
    try:
        print(" generate_explanation:", response.generate_explanation)
    except:
        pass
    print(" state:", response.state)
    print(" create_time:", response.create_time)
    print(" start_time:", response.start_time)
    print(" end_time:", response.end_time)
    print(" update_time:", response.update_time)
    print(" labels:", response.labels)
    return response


IN_FORMAT = "jsonl"
OUT_FORMAT = "jsonl"  # [jsonl]

response = create_batch_prediction_job(
    BATCH_MODEL,
    model_to_deploy_id,
    gcs_input_uri,
    BUCKET_NAME,
    {"confidenceThreshold": 0.5, "maxPredictions": 2},
)
```

Now get the unique identifier for the batch prediction job you created.
```
# The full unique ID for the batch job
batch_job_id = response.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]

print(batch_job_id)
```

### Get information on a batch prediction job

Use this helper function `get_batch_prediction_job`, with the following parameter:

- `job_name`: The Vertex fully qualified identifier for the batch prediction job.

The helper function calls the job client service's `get_batch_prediction_job` method, with the following parameter:

- `name`: The Vertex fully qualified identifier for the batch prediction job.

In this tutorial, you will pass it the Vertex fully qualified identifier for your batch prediction job -- `batch_job_id`.

The helper function will return the Cloud Storage path to where the predictions are stored -- `gcs_destination`.

```
def get_batch_prediction_job(job_name, silent=False):
    response = clients["job"].get_batch_prediction_job(name=job_name)
    if silent:
        return response.output_config.gcs_destination.output_uri_prefix, response.state

    print("response")
    print(" name:", response.name)
    print(" display_name:", response.display_name)
    print(" model:", response.model)
    try:  # not all data types support explanations
        print(" generate_explanation:", response.generate_explanation)
    except:
        pass
    print(" state:", response.state)
    print(" error:", response.error)
    gcs_destination = response.output_config.gcs_destination
    print(" gcs_destination")
    print(" output_uri_prefix:", gcs_destination.output_uri_prefix)
    return gcs_destination.output_uri_prefix, response.state


predictions, state = get_batch_prediction_job(batch_job_id)
```

### Get the predictions

When the batch prediction is done processing, the job state will be `JOB_STATE_SUCCEEDED`.

Finally you view the predictions stored at the Cloud Storage path you set as output.
The predictions will be in a JSONL format, which you indicated at the time you made the batch prediction job, under a subfolder starting with the name `prediction`, and under that folder will be a file called `predictions*.jsonl`.

Now display (cat) the contents. You will see multiple JSON objects, one for each prediction.

The first field `ID` is the image file you did the prediction on, and the second field `annotations` is the prediction, which is further broken down into:

- `confidences`: The percent of confidence between 0 and 1.
- `display_name`: The corresponding class name.

```
def get_latest_predictions(gcs_out_dir):
    """ Get the latest prediction subfolder using the timestamp in the subfolder name"""
    folders = !gsutil ls $gcs_out_dir
    latest = ""
    for folder in folders:
        subfolder = folder.split("/")[-2]
        if subfolder.startswith("prediction-"):
            if subfolder > latest:
                latest = folder[:-1]
    return latest


while True:
    predictions, state = get_batch_prediction_job(batch_job_id, True)
    if state != aip.JobState.JOB_STATE_SUCCEEDED:
        print("The job has not completed:", state)
        if state == aip.JobState.JOB_STATE_FAILED:
            raise Exception("Batch Job Failed")
    else:
        folder = get_latest_predictions(predictions)
        ! gsutil ls $folder/prediction*.jsonl
        ! gsutil cat $folder/prediction*.jsonl
        break
    time.sleep(60)
```

# Cleaning up

To clean up all GCP resources used in this project, you can [delete the GCP project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:

- Dataset
- Pipeline
- Model
- Endpoint
- Batch Job
- Custom Job
- Hyperparameter Tuning Job
- Cloud Storage Bucket

```
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True

# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
    if delete_dataset and "dataset_id" in globals():
        clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
    print(e)

# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
    if delete_pipeline and "pipeline_id" in globals():
        clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
    print(e)

# Delete the model using the Vertex fully qualified identifier for the model
try:
    if delete_model and "model_to_deploy_id" in globals():
        clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
    print(e)

# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
    if delete_endpoint and "endpoint_id" in globals():
        clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
    print(e)

# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
    if delete_batchjob and "batch_job_id" in globals():
        clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
    print(e)

# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
    if delete_customjob and "job_id" in globals():
        clients["job"].delete_custom_job(name=job_id)
except Exception as e:
    print(e)

# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
    if delete_hptjob and "hpt_job_id" in globals():
        clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
    print(e)

if delete_bucket and "BUCKET_NAME" in globals():
    ! gsutil rm -r $BUCKET_NAME
```
```
from google.colab import drive
drive.mount('/content/drive')

import torch.nn as nn
import torch.nn.functional as F
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import torch
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
from matplotlib import pyplot as plt
import copy

# Ignore warnings
import warnings
warnings.filterwarnings("ignore")

a = torch.arange(2*3*1*1)
print(a.shape, a)
torch.reshape(a, (2, 3, 1, 1))
torch.reshape(a, (2*3, 1, 1))

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)

trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle=False)

classes = ('plane', 'car', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck')

foreground_classes = {'horse', 'ship', 'truck'}
background_classes = {'plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog'}
# print(type(foreground_classes))

dataiter = iter(trainloader)
background_data = []
background_label = []
foreground_data = []
foreground_label = []
batch_size = 10

for i in range(5000):
    images, labels = dataiter.next()
    for j in range(batch_size):
        if(classes[labels[j]] in background_classes):
            img = images[j].tolist()
            background_data.append(img)
            background_label.append(labels[j])
        else:
            img = images[j].tolist()
            foreground_data.append(img)
            foreground_label.append(labels[j])

foreground_data = torch.tensor(foreground_data)
foreground_label = torch.tensor(foreground_label)
background_data = torch.tensor(background_data)
background_label = torch.tensor(background_label)

# print(foreground_data.size())
# print(background_data.size())

# torch.save(foreground_data,'foreground_data.pt')
# torch.save(background_data,'background_data.pt')
# torch.save(foreground_label,'foreground_label.pt')
# torch.save(background_label,'background_label.pt')

# torch.load()
# torch.load('foreground_data.pt')
# # print(foreground_data.size())
# # print(background_data.size())

# foreground_data = torch.load('foreground_data.pt')
# background_data = torch.load('background_data.pt')
# foreground_label = torch.load('foreground_label.pt')
# background_label = torch.load('background_label.pt')

def create_mosaic_img(bg_idx, fg_idx, fg):
    """
    bg_idx : list of indexes of background_data[] to be used as background images in mosaic
    fg_idx : index of image to be used as foreground image from foreground data
    fg : at what position/index foreground image has to be stored out of 0-8
    """
    image_list = []
    j = 0
    for i in range(9):
        if i != fg:
            image_list.append(background_data[bg_idx[j]].type("torch.DoubleTensor"))
            j += 1
        else:
            image_list.append(foreground_data[fg_idx].type("torch.DoubleTensor"))
            label = foreground_label[fg_idx] - 7  # minus 7 because our foreground classes are 7,8,9 but we have to store it as 0,1,2
    # image_list = np.concatenate(image_list, axis=0)
    image_list = torch.stack(image_list)
    return image_list, label

desired_num = 30000
mosaic_list_of_images = []  # list of mosaic images, each mosaic image is saved as list of 9 images
fore_idx = []               # list of indexes at which foreground image is present in a mosaic image i.e from 0 to 9
mosaic_label = []           # label of mosaic image = foreground class present in that mosaic

for i in range(desired_num):
    bg_idx = np.random.randint(0, 35000, 8)
    fg_idx = np.random.randint(0, 15000)
    fg = np.random.randint(0, 9)
    fore_idx.append(fg)
    image_list, label = create_mosaic_img(bg_idx, fg_idx, fg)
    mosaic_list_of_images.append(image_list)
    mosaic_label.append(label)

class MosaicDataset(Dataset):
    """MosaicDataset dataset."""

    def __init__(self, mosaic_list_of_images, mosaic_label, fore_idx):
        """
        Args:
            csv_file (string): Path to the csv file with annotations.
            root_dir (string): Directory with all the images.
            transform (callable, optional): Optional transform to be applied on a sample.
        """
        self.mosaic = mosaic_list_of_images
        self.label = mosaic_label
        self.fore_idx = fore_idx

    def __len__(self):
        return len(self.label)

    def __getitem__(self, idx):
        return self.mosaic[idx], self.label[idx], self.fore_idx[idx]

batch = 250
msd = MosaicDataset(mosaic_list_of_images, mosaic_label, fore_idx)
train_loader = DataLoader(msd, batch_size=batch, shuffle=True)

import torch.nn as nn
import torch.nn.functional as F

class Conv_module(nn.Module):
    def __init__(self, inp_ch, f, s, k, pad):
        super(Conv_module, self).__init__()
        self.inp_ch = inp_ch
        self.f = f
        self.s = s
        self.k = k
        self.pad = pad
        self.conv = nn.Conv2d(self.inp_ch, self.f, k, stride=s, padding=self.pad)
        self.bn = nn.BatchNorm2d(self.f)
        self.act = nn.ReLU()

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.act(x)
        return x

class inception_module(nn.Module):
    def __init__(self, inp_ch, f0, f1):
        super(inception_module, self).__init__()
        self.inp_ch = inp_ch
        self.f0 = f0
        self.f1 = f1
        self.conv1 = Conv_module(self.inp_ch, self.f0, 1, 1, pad=0)
        self.conv3 = Conv_module(self.inp_ch, self.f1, 1, 3, pad=1)
        # self.conv1 = nn.Conv2d(3,self.f0,1)
        # self.conv3 = nn.Conv2d(3,self.f1,3,padding=1)

    def forward(self, x):
        x1 = self.conv1.forward(x)
        x3 = self.conv3.forward(x)
        # print(x1.shape, x3.shape)
        x = torch.cat((x1, x3), dim=1)
        return x

class downsample_module(nn.Module):
    def __init__(self, inp_ch, f):
        super(downsample_module, self).__init__()
        self.inp_ch = inp_ch
        self.f = f
        self.conv = Conv_module(self.inp_ch, self.f, 2, 3, pad=0)
        self.pool = nn.MaxPool2d(3, stride=2, padding=0)

    def forward(self, x):
        x1 = self.conv(x)
        # print(x1.shape)
        x2 = self.pool(x)
        # print(x2.shape)
        x = torch.cat((x1, x2), dim=1)
        return x, x1

class inception_net(nn.Module):
    def __init__(self):
        super(inception_net, self).__init__()
        self.conv1 = Conv_module(3*9, 96, 1, 3, 0)
        self.incept1 = inception_module(96, 32, 32)
        self.incept2 = inception_module(64, 32, 48)
        self.downsample1 = downsample_module(80, 80)
        self.incept3 = inception_module(160, 112, 48)
        self.incept4 = inception_module(160, 96, 64)
        self.incept5 = inception_module(160, 80, 80)
        self.incept6 = inception_module(160, 48, 96)
        self.downsample2 = downsample_module(144, 96)
        self.incept7 = inception_module(240, 176, 60)
        self.incept8 = inception_module(236, 176, 60)
        self.pool = nn.AvgPool2d(5)
        self.linear1 = nn.Linear(236, 10)
        self.linear2 = nn.Linear(10, 3)

    def forward(self, x):
        x = self.conv1.forward(x)
        # act1 = x
        x = self.incept1.forward(x)
        # act2 = x
        x = self.incept2.forward(x)
        # act3 = x
        x, act4 = self.downsample1.forward(x)
        x = self.incept3.forward(x)
        # act5 = x
        x = self.incept4.forward(x)
        # act6 = x
        x = self.incept5.forward(x)
        # act7 = x
        x = self.incept6.forward(x)
        # act8 = x
        x, act9 = self.downsample2.forward(x)
        x = self.incept7.forward(x)
        # act10 = x
        x = self.incept8.forward(x)
        # act11 = x
        # print(x.shape)
        x = self.pool(x)
        # print(x.shape)
        x = x.view(-1, 1*1*236)
        x = self.linear1(x)
        x = self.linear2(x)
        # print(x.shape)
        return x

block_net = inception_net()
block_net = block_net.to("cuda").double()
block_net

test_images = []    # list of mosaic images, each mosaic image is saved as list of 9 images
fore_idx_test = []  # list of indexes at which foreground image is present in a mosaic image
test_label = []     # label of mosaic image = foreground class present in that mosaic

for i in range(10000):
    bg_idx = np.random.randint(0, 35000, 8)
    fg_idx = np.random.randint(0, 15000)
    fg = np.random.randint(0, 9)
    fore_idx_test.append(fg)
    image_list, label = create_mosaic_img(bg_idx, fg_idx, fg)
    test_images.append(image_list)
    test_label.append(label)

test_data = MosaicDataset(test_images, test_label, fore_idx_test)
test_loader = DataLoader(test_data, batch_size=batch, shuffle=False)

import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(block_net.parameters(), lr=0.001, momentum=0.9)

# Training
def train(epoch):
    print('\nEpoch: %d' % epoch)
    block_net.train()
    train_loss = 0
    correct = 0
    total = 0
    for batch_idx, data in enumerate(train_loader):
        inputs, labels, fore_idx = data
        inputs = inputs.double()
        # zero the parameter gradients
        # print(inputs.shape)
        inputs = torch.reshape(inputs, (batch, 9*3, 32, 32))
        # print(inputs.shape)
        inputs, labels, fore_idx = inputs.to("cuda"), labels.to("cuda"), fore_idx.to("cuda")
        optimizer.zero_grad()
        outputs = block_net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()
        _, predicted = outputs.max(1)
        total += labels.size(0)
        correct += predicted.eq(labels).sum().item()
        print(batch_idx, len(trainloader), 'Loss: %.3f | Acc: %.3f%% (%d/%d)'
              % (train_loss/(batch_idx+1), 100.*correct/total, correct, total))
    return train_loss/(batch_idx+1)

# Commented out IPython magic to ensure Python compatibility.
def test(epoch):
    global best_acc
    block_net.eval()
    test_loss = 0
    correct = 0
    total = 0
    with torch.no_grad():
        for batch_idx, data in enumerate(test_loader):
            inputs, labels, fore_idx = data
            inputs = inputs.double()
            # print(inputs.shape)
            inputs = torch.reshape(inputs, (batch, 9*3, 32, 32))
            # print(inputs.shape)
            inputs, labels, fore_idx = inputs.to("cuda"), labels.to("cuda"), fore_idx.to("cuda")
            outputs = block_net(inputs)
            loss = criterion(outputs, labels)
            test_loss += loss.item()
            _, predicted = outputs.max(1)
            total += labels.size(0)
            correct += predicted.eq(labels).sum().item()
            print(batch_idx, len(testloader), 'Loss: %.3f | Acc: %.3f%% (%d/%d)'
                  % (test_loss/(batch_idx+1), 100.*correct/total, correct, total))
    return test_loss/(batch_idx+1)

best_acc = 0
start_epoch = 0
tr_loss = []
ts_loss = []
for epoch in range(start_epoch, start_epoch+150):
    tr_loss.append(train(epoch))
    ts_loss.append(test(epoch))
    if(tr_loss[-1] <= 0.001):
        break

plt.plot(tr_loss, label='training_loss')
plt.plot(ts_loss, label='test_loss')
plt.xlabel("epochs")
plt.ylabel("cross_entropy loss")
plt.legend()

torch.save(block_net.state_dict(),
           "/content/drive/My Drive/Research/block_net/block_net_mini_inception_epoch"+str(epoch)+".pt")

correct = 0
total = 0
with torch.no_grad():
    for data in train_loader:
        inputs, labels, fore_idx = data
        inputs = torch.reshape(inputs, (batch, 9*3, 32, 32))
        inputs, labels, fore_idx = inputs.to("cuda"), labels.to("cuda"), fore_idx.to("cuda")
        outputs = block_net(inputs)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the %d train images: %d %%' % (total, 100 * correct / total))
print(total, correct)

correct = 0
total = 0
out = []
pred = []
with torch.no_grad():
    for data in test_loader:
        inputs, labels, fore_idx = data
        inputs = torch.reshape(inputs, (batch, 9*3, 32, 32))
        inputs, labels, fore_idx = inputs.to("cuda"), labels.to("cuda"), fore_idx.to("cuda")
        outputs = block_net(inputs)
        out.append(labels.cpu().numpy())
        _, predicted = torch.max(outputs.data, 1)
        pred.append(predicted.cpu().numpy())
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))
print(total, correct)
```
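The training loop above flattens each mosaic of 9 RGB tiles into a single 27-channel input with `torch.reshape(inputs, (batch, 9*3, 32, 32))`. A small NumPy sketch of that flattening (shapes only; no GPU or torch required, and the random data is illustrative):

```python
import numpy as np

batch = 4
# One mosaic = 9 stacked 3x32x32 RGB tiles, as built by create_mosaic_img
mosaics = np.random.rand(batch, 9, 3, 32, 32)

# Merge the tile and channel axes so each mosaic becomes one 27-channel
# image, matching the network's first layer Conv_module(3*9, 96, ...)
flat = mosaics.reshape(batch, 9 * 3, 32, 32)
print(flat.shape)  # → (4, 27, 32, 32)

# The reshape preserves tile order: channels 0-2 hold tile 0, 3-5 tile 1, etc.
assert np.array_equal(flat[0, 0:3], mosaics[0, 0])
```

Because the arrays are contiguous, this is a view-level reinterpretation of the same data, not a copy or shuffle of pixels.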
# Operators

Operators are special symbols in Python that carry out arithmetic or logical computation. The value that the operator operates on is called the operand.

# Operator Types

1. Arithmetic operators
2. Comparison (Relational) operators
3. Logical (Boolean) operators
4. Bitwise operators
5. Assignment operators
6. Special operators

# Arithmetic Operators

Arithmetic operators are used to perform mathematical operations like addition, subtraction, multiplication etc.

+, -, *, /, %, //, ** are arithmetic operators

Example:

```
x, y = 2, 3

#addition
print(x ** y)

#subtraction(-)
#multiplication(*)
#division(/)
#modulo division (%)
#Floor Division (//)
#Exponent (**)
```

# Comparison Operators

Comparison operators are used to compare values. It either returns True or False according to the condition.

>, <, ==, !=, >=, <= are comparison operators

```
a, b = 10, 20

print(a < b)  #check a is less than b

#check a is greater than b
#check a is equal to b
#check a is not equal to b (!=)
#check a greater than or equal to b
#check a less than or equal to b
```

# Logical Operators

Logical operators are **and, or, not** operators.

```
a, b = True, False

#print a and b
print(a and b)

#print a or b
#print not b
```

# Bitwise operators

Bitwise operators act on operands as if they were strings of binary digits. They operate bit by bit.

&, |, ~, ^, >>, << are bitwise operators

```
a, b = 10, 4

#Bitwise AND
print(a & b)

#Bitwise OR
#Bitwise NOT
#Bitwise XOR
#Bitwise rightshift
#Bitwise Leftshift
```

# Assignment operators

Assignment operators are used in Python to assign values to variables.

a = 5 is a simple assignment operator that assigns the value 5 on the right to the variable a on the left.
=, +=, -=, *=, /=, %=, //=, **=, &=, |=, ^=, >>=, <<= are assignment operators

```
a = 10

a += 10  #add AND
print(a)

#subtract AND (-=)
#Multiply AND (*=)
#Divide AND (/=)
#Modulus AND (%=)
#Floor Division (//=)
#Exponent AND (**=)
```

# Special Operators

# Identity Operators

**is and is not** are the identity operators in Python. They are used to check if two values (or variables) are located in the same part of memory.

```
a = 5
b = 5
print(a is b)  #the 5 object is created once; both a and b point to the same object

#check is not
l1 = [1, 2, 3]
l2 = [1, 2, 3]
print(l1 is l2)

s1 = "Satish"
s2 = "Satish"
print(s1 is not s2)
```

# Membership Operators

**in and not in** are the membership operators in Python. They are used to test whether a value or variable is found in a sequence (string, list, tuple, set and dictionary).

```
lst = [1, 2, 3, 4]
print(1 in lst)  #check whether 1 is present in the given list

#check whether 5 is present in the given list

d = {1: "a", 2: "b"}
print(1 in d)
```
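One point worth reinforcing from the identity and membership examples above: `==` compares values, `is` compares object identity, and `in` tests with equality, not identity. (Whether small ints or identical string literals share one object, as in the `a is b` example, is a CPython interning detail, not a language guarantee.) A short illustration:

```python
l1 = [1, 2, 3]
l2 = [1, 2, 3]

print(l1 == l2)  # True  - equality: same contents
print(l1 is l2)  # False - identity: two separate list objects

l3 = l1          # assignment copies the reference, not the list
print(l1 is l3)  # True

# Membership (`in`) uses equality, so a distinct-but-equal list matches
print(l2 in [l1, "x"])  # True
```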
```
rdd = sc.parallelize([("to", 1), ("be", 1), ("or", 1), ("not", 1), ("to", 1), ("be", 1)])
rdd.take(1)
```

# reduceByKey

```
rdd3 = rdd.reduceByKey(lambda x, y: x + y)
rdd3.collect()
```

# distinct

```
sc.parallelize([1, 1, 2, 3]).distinct().collect()
```

# join

```
rdd1 = sc.parallelize([(1, 'a'), (1, 'b'), (5, 'c'), (2, 'd'), (3, 'e')])
rdd2 = sc.parallelize([(1, 'AA'), (5, 'BB'), (5, 'CC'), (6, 'DD')])
rdd1.join(rdd2).collect()
```

# leftOuterJoin

```
rdd1.leftOuterJoin(rdd2).collect()
```

# rightOuterJoin

```
rdd1.rightOuterJoin(rdd2).collect()
```

# fullOuterJoin

```
rdd1.fullOuterJoin(rdd2).collect()
```

# reduce

```
sc.parallelize([1, 2, 3, 4, 5]).reduce(lambda x, y: x + y)
```

# mapValues

```
rdd = sc.parallelize([("a", ["apple", "banana", "lemon"]), ("b", ["grapes"])])
rdd.mapValues(lambda x: len(x)).collect()
```

# map

```
rdd.map(lambda x: (x[0], len(x[1]))).collect()
```

# first

```
sc.parallelize([4, 2, 3]).first()
sc.parallelize([(4, 2), (1, 2), (3, 2)]).first()
```

# countByValue

```
sc.parallelize([1, 2, 1, 2, 2], 2).countByValue()
```

# coalesce

```
sc.parallelize([1, 2, 3, 4, 5], 3).coalesce(1).collect()
```

# glom

```
sc.parallelize([1, 2, 3, 4, 5], 3).coalesce(2).glom().collect()

rdd = sc.parallelize([1, 2, 3, 4, 5, 6, 7, 8], 4)
rdd.glom().collect()
```

# groupByKey

```
rdd = sc.parallelize([("a", 1), ("b", 1), ("a", 1)])
rdd.groupByKey().collect()

rdd = sc.parallelize([("c", 1), ("a", 1), ("b", 1)])
rdd.collect()
```

# sortByKey

```
rdd.sortByKey().collect()

rdd = sc.parallelize([("c1", "p1"), ("c2", "p1"), ("c1", "p1"), ("c2", "p2"), ("c2", "p3")])
rdd.collect()
```

# aggregateByKey

```
def mySequenceFunction(x, y):
    x.add(y)
    return x

def myCombinerFunction(x, y):
    x.update(y)
    return x

rdd.aggregateByKey(set([]), mySequenceFunction, myCombinerFunction).collect()
```

# createDataFrame

```
a = [('Chris', 'Berliner', 5)]
sqlContext.createDataFrame(a, ['drinker', 'beer', 'score']).collect()
spark.createDataFrame(a, ['drinker', 'beer', 'score']).collect()

likes = [('Chris', 'Bud'), ('Kia', 'Berliner'), ('Matt', 'ARJK')]
frequents = [('Chris', 'Bohene'), ('Kia', 'Little'), ('Oscar', 'Griff')]
likesName = ['Drinker', 'Beer']
frequentsName = ['Drinker', 'Bar']
likesDF = sqlContext.createDataFrame(likes, likesName)
frequentsDF = sqlContext.createDataFrame(frequents, frequentsName)
likesDF.show()
frequentsDF.show()
```

# join

```
likesDF.join(frequentsDF, likesDF.Drinker == frequentsDF.Drinker, 'right').show()
```

# full join

```
likesDF.join(frequentsDF, likesDF.Drinker == frequentsDF.Drinker, 'full').show()
```

# left_anti

```
likesDF.join(frequentsDF, likesDF.Drinker == frequentsDF.Drinker, 'left_anti').show()

likesDF.count()

df = spark.createDataFrame([('a', 1), ('b', 1), ('b', 1), ('a', 2)], ('id', 'c'))
df.show()
df.distinct().show()

df = spark.createDataFrame([('a', 1), ('b', 1), ('b', 1), ('a', 2)], ('id', 'c'))
df.show()
rdd = df.rdd
print(rdd.collect())
print(df.rdd.map(list).collect())
print(df.rdd.map(tuple).collect())
```

# withColumn

```
df = spark.createDataFrame([['a'], ['b'], ['b'], ['c']], (['word']))
df.show()

from pyspark.sql.functions import lit

new_df = df.withColumn("COUNT", lit(1))
new_df.show()
```

# groupBy

```
from pyspark.sql import functions as func

new_df.groupBy("word").agg(func.sum("COUNT")).show()

from pyspark.sql.types import StringType
from pyspark.sql.functions import udf

l = [('Alice', 25), ('Robert', 12), ('Chris', 45)]
df = sqlContext.createDataFrame(l, ['Name', 'Age'])
df.show()

maturity_udf = udf(lambda age: "Adult" if age >= 18 else "Child", StringType())
newdf = df.withColumn("Maturity", maturity_udf(df.Age))
newdf.show()

df.orderBy("Age", ascending=False).limit(1).show()

# generate some data to demonstrate
# mat = np.arange(100).reshape(10, -1)
import numpy as np

mat = np.random.rand(8, 1).reshape(4, -1)
rdd = sc.parallelize(mat)
print(rdd.collect())
rdd.reduce(lambda x, y: np.add(x, y))

from pyspark.ml.linalg import Vectors

size = 2
data = [(0, Vectors.dense(np.random.rand(size)),),
        (1, Vectors.dense(np.random.rand(size)),),
        (1, Vectors.dense(np.random.rand(size)),),
        (0, Vectors.dense(np.random.rand(size)),)]
df = spark.createDataFrame(data, ["label", "features"])
df.show()

# a = df.rdd.map(lambda x: x[1]).reduce(lambda x, y: x + y)
# print(a)

a = Vectors.dense(np.round(np.random.rand(size), 2))
b = Vectors.dense(np.round(np.random.rand(size), 2))
print(a)
print(b)
np.add(a, b)

np.random.sample([0, 1])
```
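The `aggregateByKey` call earlier in this notebook is easier to see outside Spark: the sequence function folds each value into a per-partition accumulator (here a set, starting from the zero value `set([])`), and the combiner merges accumulators across partitions. A plain-Python sketch of those semantics, reusing the same two functions (no Spark required; the partition split is made up for illustration):

```python
def mySequenceFunction(x, y):
    x.add(y)     # fold one value into the partition-local set
    return x

def myCombinerFunction(x, y):
    x.update(y)  # merge two partition-local sets
    return x

# Two simulated partitions of (key, value) pairs
partitions = [
    [("c1", "p1"), ("c2", "p1")],
    [("c1", "p1"), ("c2", "p2"), ("c2", "p3")],
]

# Phase 1: per-partition aggregation starting from the zero value set([])
partials = []
for part in partitions:
    acc = {}
    for key, value in part:
        acc[key] = mySequenceFunction(acc.get(key, set()), value)
    partials.append(acc)

# Phase 2: combine the per-partition accumulators
result = {}
for acc in partials:
    for key, s in acc.items():
        result[key] = myCombinerFunction(result[key], s) if key in result else s

print(result)  # {'c1': {'p1'}, 'c2': {'p1', 'p2', 'p3'}} (set order may vary)
```

This two-phase shape is why `aggregateByKey` needs both functions: the sequence function runs where the data lives, and only the small accumulators are shuffled and combined.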
# Donbot

The donbot module is a simple module w/ a class that makes it super easy to automate interactions with mafiascum.net. Create an instance of the Donbot class with your username and password (and potentially other parameters), and you'll be able to:

- Collect a range of posts from a thread
- Make posts in a specified thread with specified content
- Send pms to a user with a specified subject and body
- Collect the number of posts in a specified thread
- Collect the id matching a specified scummer's username
- And, eventually, more!

`donbot.py` is produced by converting the front-facing notebook `donbot.ipynb` using the jupyter command `jupyter nbconvert --to script donbot.ipynb`. Consult `donbotdemo.ipynb` for a tutorial on how to use the module.

**Please** don't use these functions haphazardly, especially those that make posts or send pms, as misuse thereof can be against Site Rules, get you banned, and most importantly cause trouble for a lot of decent people.

## Setup

### Dependencies

```
from datetime import datetime as dt  # to parse timestamps
from datetime import timedelta       # parsing hours/minutes
from math import floor               # to get page# from post
from lxml import html                # to help parse website content
import requests                      # for interacting with website
import time                          # need delays before post requests
```

### Urls donbot will construct requests with

```
# generic site url; will start other urls
siteurl = 'https://forum.mafiascum.net/'

# where bot logs into mafiascum.net
loginurl = siteurl + 'ucp.php?mode=login'

# format w/ username and get to obtain page w/ their userid on it
userurl = siteurl + 'search.php?keywords=&terms=all&author={}'

# make post request here w/ right format to make a post to thread
posturl = siteurl + 'posting.php?mode=reply&{}'

# post request here w/ form to send a pm
pmurl = siteurl + 'ucp.php?i=pm&mode=compose'
```

### Paths to elements donbot will grab info from

```
# number of posts in thread assoc'd w/ page
postcountpath = 
"//div[@class='pagination']/text()" # every post on current page postspath = '//div[@class="post bg2" or @class="post bg1"]' # post# of a post numberpath = ".//p[@class='author']/a/strong/text()" # username assoc'd w/ a post userpath = ".//dl[@class='postprofile']/dt/a/text()" # content of a post contentpath = ".//div[@class='content']" # timestamp of a post datetimepath = ".//p[@class='author']/text()" # path to value of all input elements on page with specified name postformpath = "//input[@name='{}']/@value" # at userurl, path to link that has their userid userlinkpath = "//dt[@class='author']/a/@href" # at activityoverview page, path to cells of page's main table activitypath = "//table//table//div" ``` ### Other static variables used across instances ``` postsperpage = 25 # number of posts per thread page poststamp = '%a %b %d, %Y %I:%M %p' # post timestamp structure ``` ## The Donbot Class ``` class Donbot(object): def __init__(self, username, password, thread=None, postdelay=1.5): self.postdelay = postdelay # seconds to wait before post requests self.thread = thread self.username = username self.session = requests.Session() self.session.post(loginurl, {'username': username, 'password': password, 'redirect': 'index.php', 'login': 'Login'}) def getUserID(self, username): # Search for posts by user; userID is in link in first result. 
        username = username.replace(' ', '+')
        page = self.session.get(userurl.format(username)).content
        userposts = html.fromstring(page)
        userlink = userposts.xpath(userlinkpath)[0]
        return userlink[userlink.rfind('=')+1:]

    def getNumberOfPosts(self, thread=None):
        thread = thread if thread else self.thread
        if len(thread) == 0:
            raise ValueError('No thread specified!')
        page = self.session.get(thread).content
        numberOfPosts = html.fromstring(page).xpath(postcountpath)[0]
        return int(numberOfPosts[:numberOfPosts.find(' ')].strip())

    def getActivityOverview(self, thread=None):
        thread = thread if thread else self.thread
        if len(thread) == 0:
            raise ValueError('No thread specified!')
        page = self.session.get(thread + '&activity_overview=1').content
        userinfo = []
        for row in html.fromstring(page).xpath(activitypath)[1:]:
            rowtext = row.xpath(".//text()")
            userinfo.append({'user': rowtext[5],
                             'firstpost': rowtext[8].strip(),
                             'lastpost': rowtext[10].strip(),
                             'sincelast': rowtext[12].strip(),
                             'totalposts': rowtext[15]})
        return userinfo

    def getPosts(self, thread=None, start=0, end=float('infinity'), loggedin=True):
        thread = self.thread if not thread else thread
        if len(thread) == 0:
            raise ValueError('No thread specified!')

        # check end or # of posts in thread to find pages we need to examine
        startpage = floor(start/postsperpage)
        endpage = (floor(end/postsperpage) if end != float('infinity')
                   else floor(self.getNumberOfPosts(thread)/postsperpage))

        # collect on each page key content from posts after currentpost
        newposts = []
        for i in range(startpage*postsperpage, (endpage+1)*postsperpage, postsperpage):
            if loggedin:
                page = self.session.get(thread + '&start=' + str(i)).content
            else:
                page = requests.get(thread + '&start=' + str(i)).content
            for post in html.fromstring(page).xpath(postspath):
                p = {}
                p['number'] = int(post.xpath(numberpath)[0][1:])
                if p['number'] >= start and p['number'] <= end:
                    p['user'] = post.xpath(userpath)[0]
                    p['content'] = html.tostring(post.xpath(contentpath)[0])
                    p['content'] = p['content'].decode('UTF-8').strip()[21:-6]
                    # requires some postprocessing to turn into a datetime
                    stamp = post.xpath(datetimepath)[-1]
                    p['datetime'] = stamp[stamp.find('» ')+2:].strip()
                    p['datetime'] = dt.strptime(p['datetime'], poststamp)
                    newposts.append(p)
        return newposts

    def makePost(self, content, thread=None, postdelay=None):
        postdelay = postdelay if postdelay else self.postdelay
        thread = thread if thread else self.thread
        if len(thread) == 0:
            raise ValueError('No thread specified!')

        # one request to get form info for post, and another to make it
        threadid = thread[thread.find('?')+1:]
        page = html.fromstring(self.session.get(posturl.format(threadid)).content)
        form = {'message': content, 'addbbcode20': 100,
                'post': 'Submit', 'disable_smilies': 'on',
                'attach_sig': 'on', 'icon': 0}
        for name in ['topic_cur_post_id', 'lastclick', 'creation_time', 'form_token']:
            form[name] = page.xpath(postformpath.format(name))[0]
        time.sleep(postdelay)
        self.session.post(posturl.format(threadid), form)

    def sendPM(self, subject, body, sendto, postdelay=None):
        # one request to get form info for pm, and another to send it;
        # getUserID makes a further request per recipient to get their userid
        sendto = [sendto] if isinstance(sendto, str) else sendto
        uids = [self.getUserID(user) for user in sendto]
        postdelay = postdelay if postdelay else self.postdelay
        compose = html.fromstring(self.session.get(pmurl).content)
        form = {'username_list': '', 'subject': subject, 'message': body,
                'addbbcode20': 100, 'status_switch': 0,
                'post': 'Submit', 'attach_sig': 'on', 'disable_smilies': 'on'}
        for uid in uids:
            form['address_list[u][{}]'.format(uid)] = 'to'
        for name in ['lastclick', 'creation_time', 'form_token']:
            form[name] = compose.xpath(postformpath.format(name))[0]
        time.sleep(postdelay)
        self.session.post(pmurl, form)
```
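The user ID returned by `getUserID` is just whatever follows the last `=` in the first search-result author link. That slicing can be sketched and checked in isolation; the link below is a made-up example of the phpBB profile-link shape, not a real result:

```python
def extract_userid(userlink):
    # Same slicing as getUserID: take everything after the last '='.
    return userlink[userlink.rfind('=') + 1:]

# Hypothetical example link for illustration only.
link = "./memberlist.php?mode=viewprofile&u=12345"
print(extract_userid(link))  # -> 12345
```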
# Source: https://www.kaggle.com/adithya44/anomaly-detection-with-time-series-forecasting

Here we will look at detecting anomalies with time series forecasting. A time series is any data associated with time (daily, hourly, monthly, etc.). For example: revenue at a store every day is time series data at a day level. Many use cases like demand estimation and sales forecasting are typical time series forecasting problems, which can be solved by algorithms like SARIMA, LSTM, and Holt-Winters. Time series forecasting helps us prepare for future needs by estimating them from current data. Once we have the forecast, we can use it to detect anomalies by comparing it with actuals. Let's implement it and look at its pros and cons.

```
import numpy as np
import pandas as pd
import os
print(os.listdir("../input"))
import warnings
warnings.filterwarnings('ignore')

from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import plotly.plotly as py
import matplotlib.pyplot as plt
from matplotlib import pyplot
import plotly.graph_objs as go
init_notebook_mode(connected=True)

time_series_df = pd.read_csv('../input/time-series-data/time_series_data.csv')
time_series_df.head()
```

The order of the data here is important and should be chronological, as we are going to forecast the next point. Convert the load_date column to datetime format and sort the data by date.

```
time_series_df.load_date = pd.to_datetime(time_series_df.load_date, format='%Y%m%d')
time_series_df = time_series_df.sort_values(by="load_date")
time_series_df = time_series_df.reset_index(drop=True)
time_series_df.head()
```

Extract the values and apply a log transform to stabilize the variance in the data, or to make it stationary, before feeding it to the model.

```
actual_vals = time_series_df.actuals.values
actual_log = np.log10(actual_vals)
```

First, let's try to apply the SARIMA algorithm for forecasting.
SARIMA stands for Seasonal Auto Regressive Integrated Moving Average. It has a seasonal parameter, which we initialize as 7 due to the weekly seasonality of our sales data. The other parameters are p, d, and q, which are identified from ACF and PACF plots; ideally we should use the parameters with minimal forecasting error. More details can be found here: https://people.duke.edu/~rnau/arimrule.htm

I'm not getting into the problem of choosing the right set of parameters here; we will solve that later using auto ARIMA, which finds the best set of parameters within a range with minimal error. Here I'm specifying the differencing factor (d) as 1, which helps remove trends and cycles in the data.

```
import math
import statsmodels.api as sm
import statsmodels.tsa.api as smt
from sklearn.metrics import mean_squared_error
from matplotlib import pyplot
import matplotlib.pyplot as plt
import plotly.plotly as py
import plotly.tools as tls

train, test = actual_vals[0:-70], actual_vals[-70:]
train_log, test_log = np.log10(train), np.log10(test)
my_order = (1, 1, 1)
my_seasonal_order = (0, 1, 1, 7)
```

We predict one data point at a time and loop through the test data: after each prediction, the observed point is appended to the history for further forecasting. This is like a moving window over daily-level data (for example: the previous 90 points are used to predict the next point at any given time). Convert the predicted data back to scale by a power-10 transform and plot the results.
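The log10 transform applied earlier and the power-10 back-transform mentioned here form an exact round trip for positive values, which is why forecasting in log space is safe; a quick sketch:

```python
import numpy as np

# Forecasting happens in log10 space; predictions are mapped back with 10**x.
actuals = np.array([120.0, 95.0, 310.0])
logged = np.log10(actuals)
restored = 10 ** logged
print(np.allclose(restored, actuals))  # -> True
```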
```
history = [x for x in train_log]
predictions = list()
predict_log = list()
for t in range(len(test_log)):
    model = sm.tsa.SARIMAX(history, order=my_order, seasonal_order=my_seasonal_order,
                           enforce_stationarity=False, enforce_invertibility=False)
    model_fit = model.fit(disp=0)
    output = model_fit.forecast()
    predict_log.append(output[0])
    yhat = 10**output[0]
    predictions.append(yhat)
    obs = test_log[t]
    history.append(obs)
    # print('predicted=%f, expected=%f' % (output[0], obs))
# error = math.sqrt(mean_squared_error(test_log, predict_log))
# print('Test rmse: %.3f' % error)

# plot
figsize = (12, 7)
plt.figure(figsize=figsize)
pyplot.plot(test, label='Actuals')
pyplot.plot(predictions, color='red', label='Predicted')
pyplot.legend(loc='upper right')
pyplot.show()
```

This is a good time series forecast. Trend and seasonality are two important factors in time series data; if your algorithm captures the trend of your data (upward/downward) and, when your data is seasonal, its pattern (weekly, daily, yearly), then it fits your case. Here we can observe that our SARIMA model captures the trend from the spikes (not by replicating them, but by just capturing the spike) and predicts well against the actuals on normal days.

The parameters we specified here seem to work well for this metric, but it would be an exhaustive task to plot, verify, and tune them. A solution to this is auto ARIMA, which returns the best set of parameters for the algorithm within our specified range. Install pyramid-arima for auto ARIMA.

Let's find the p and q parameters using auto_arima, specifying d as 1 for first-order differencing and seasonality as 7 for weekly seasonality.
```
from pyramid.arima import auto_arima
stepwise_model = auto_arima(train_log, start_p=1, start_q=1,
                            max_p=3, max_q=3, m=7,
                            start_P=0, seasonal=True, d=1, D=1, trace=True,
                            error_action='ignore', suppress_warnings=True,
                            stepwise=True)
print(stepwise_model)
```

Now the auto ARIMA model can be used for stepwise forecasting by the same process we performed above:

```
import math
import statsmodels.api as sm
import statsmodels.tsa.api as smt
from sklearn.metrics import mean_squared_error

train, test = actual_vals[0:-70], actual_vals[-70:]
train_log, test_log = np.log10(train), np.log10(test)

# split data into train and test-sets
history = [x for x in train_log]
predictions = list()
predict_log = list()
for t in range(len(test_log)):
    # model = sm.tsa.SARIMAX(history, order=my_order, seasonal_order=my_seasonal_order,
    #                        enforce_stationarity=False, enforce_invertibility=False)
    stepwise_model.fit(history)
    output = stepwise_model.predict(n_periods=1)
    predict_log.append(output[0])
    yhat = 10**output[0]
    predictions.append(yhat)
    obs = test_log[t]
    history.append(obs)
    # print('predicted=%f, expected=%f' % (output[0], obs))
# error = math.sqrt(mean_squared_error(test_log, predict_log))
# print('Test rmse: %.3f' % error)

# plot
figsize = (12, 7)
plt.figure(figsize=figsize)
pyplot.plot(test, label='Actuals')
pyplot.plot(predictions, color='red', label='Predicted')
pyplot.legend(loc='upper right')
pyplot.show()

predicted_df = pd.DataFrame()
predicted_df['load_date'] = time_series_df['load_date'][-70:]
predicted_df['actuals'] = test
predicted_df['predicted'] = predictions
predicted_df.reset_index(inplace=True)
del predicted_df['index']
predicted_df.head()
```

We now have forecast results and actuals. To detect anomalies from this information, I'm using a property of the distribution of the data. Note this works only if the data is normally (Gaussian) distributed. Steps I take to detect anomalies: 1. Compute the error term (actual - predicted). 2. Compute the rolling mean and rolling standard deviation (the window is a week). 3.
Classify data with an error of 1.5,1.75 and 2 standard deviations as limits for low,medium and high anomalies. (5% of data point would be identified anomalies based on this property) I have used lambda function for classifying anomalies based error and standard deviation rather than having separate loops and function for it. ``` import numpy as np def detect_classify_anomalies(df,window): df.replace([np.inf, -np.inf], np.NaN, inplace=True) df.fillna(0,inplace=True) df['error']=df['actuals']-df['predicted'] df['percentage_change'] = ((df['actuals'] - df['predicted']) / df['actuals']) * 100 df['meanval'] = df['error'].rolling(window=window).mean() df['deviation'] = df['error'].rolling(window=window).std() df['-3s'] = df['meanval'] - (2 * df['deviation']) df['3s'] = df['meanval'] + (2 * df['deviation']) df['-2s'] = df['meanval'] - (1.75 * df['deviation']) df['2s'] = df['meanval'] + (1.75 * df['deviation']) df['-1s'] = df['meanval'] - (1.5 * df['deviation']) df['1s'] = df['meanval'] + (1.5 * df['deviation']) cut_list = df[['error', '-3s', '-2s', '-1s', 'meanval', '1s', '2s', '3s']] cut_values = cut_list.values cut_sort = np.sort(cut_values) df['impact'] = [(lambda x: np.where(cut_sort == df['error'][x])[1][0])(x) for x in range(len(df['error']))] severity = {0: 3, 1: 2, 2: 1, 3: 0, 4: 0, 5: 1, 6: 2, 7: 3} region = {0: "NEGATIVE", 1: "NEGATIVE", 2: "NEGATIVE", 3: "NEGATIVE", 4: "POSITIVE", 5: "POSITIVE", 6: "POSITIVE", 7: "POSITIVE"} df['color'] = df['impact'].map(severity) df['region'] = df['impact'].map(region) df['anomaly_points'] = np.where(df['color'] == 3, df['error'], np.nan) df = df.sort_values(by='load_date', ascending=False) df.load_date = pd.to_datetime(df['load_date'].astype(str), format="%Y-%m-%d") return df def plot_anomaly(df,metric_name): #error = pd.DataFrame(Order_results.error.values) #df = df.sort_values(by='load_date', ascending=False) #df.load_date = pd.to_datetime(df['load_date'].astype(str), format="%Y%m%d") dates = df.load_date #meanval = 
error.rolling(window=window).mean() #deviation = error.rolling(window=window).std() #res = error #upper_bond=meanval + (2 * deviation) #lower_bond=meanval - (2 * deviation) #anomalies = pd.DataFrame(index=res.index, columns=res.columns) #anomalies[res < lower_bond] = res[res < lower_bond] #anomalies[res > upper_bond] = res[res > upper_bond] bool_array = (abs(df['anomaly_points']) > 0) #And a subplot of the Actual Values. actuals = df["actuals"][-len(bool_array):] anomaly_points = bool_array * actuals anomaly_points[anomaly_points == 0] = np.nan #Order_results['meanval']=meanval #Order_results['deviation']=deviation color_map= {0: "'rgba(228, 222, 249, 0.65)'", 1: "yellow", 2: "orange", 3: "red"} table = go.Table( domain=dict(x=[0, 1], y=[0, 0.3]), columnwidth=[1, 2 ], #columnorder=[0, 1, 2,], header = dict(height = 20, values = [['<b>Date</b>'],['<b>Actual Values </b>'], ['<b>Predicted</b>'], ['<b>% Difference</b>'],['<b>Severity (0-3)</b>']], font = dict(color=['rgb(45, 45, 45)'] * 5, size=14), fill = dict(color='#d562be')), cells = dict(values = [df.round(3)[k].tolist() for k in ['load_date', 'actuals', 'predicted', 'percentage_change','color']], line = dict(color='#506784'), align = ['center'] * 5, font = dict(color=['rgb(40, 40, 40)'] * 5, size=12), #format = [None] + [",.4f"] + [',.4f'], #suffix=[None] * 4, suffix=[None] + [''] + [''] + ['%'] + [''], height = 27, #fill = dict(color=['rgb(235, 193, 238)', 'rgba(228, 222, 249, 0.65)'])) fill=dict(color= # ['rgb(245,245,245)',#unique color for the first column [df['color'].map(color_map)], ) )) #df['ano'] = np.where(df['color']==3, df['error'], np.nan) anomalies = go.Scatter(name="Anomaly", x=dates, xaxis='x1', yaxis='y1', y=df['anomaly_points'], mode='markers', marker = dict(color ='red', size = 11,line = dict( color = "red", width = 2))) upper_bound = go.Scatter(hoverinfo="skip", x=dates, showlegend =False, xaxis='x1', yaxis='y1', y=df['3s'], marker=dict(color="#444"), line=dict( color=('rgb(23, 96, 167)'), 
width=2, dash='dash'), fillcolor='rgba(68, 68, 68, 0.3)', fill='tonexty') lower_bound = go.Scatter(name='Confidence Interval', x=dates, xaxis='x1', yaxis='y1', y=df['-3s'], marker=dict(color="#444"), line=dict( color=('rgb(23, 96, 167)'), width=2, dash='dash'), fillcolor='rgba(68, 68, 68, 0.3)', fill='tonexty') Actuals = go.Scatter(name= 'Actuals', x= dates, y= df['actuals'], xaxis='x2', yaxis='y2', mode='line', marker=dict(size=12, line=dict(width=1), color="blue")) Predicted = go.Scatter(name= 'Predicted', x= dates, y= df['predicted'], xaxis='x2', yaxis='y2', mode='line', marker=dict(size=12, line=dict(width=1), color="orange")) # create plot for error... Error = go.Scatter(name="Error", x=dates, y=df['error'], xaxis='x1', yaxis='y1', mode='line', marker=dict(size=12, line=dict(width=1), color="red"), text="Error") anomalies_map = go.Scatter(name = "anomaly actual", showlegend=False, x=dates, y=anomaly_points, mode='markers', xaxis='x2', yaxis='y2', marker = dict(color ="red", size = 11, line = dict( color = "red", width = 2))) Mvingavrg = go.Scatter(name="Moving Average", x=dates, y=df['meanval'], mode='line', xaxis='x1', yaxis='y1', marker=dict(size=12, line=dict(width=1), color="green"), text="Moving average") axis=dict( showline=True, zeroline=False, showgrid=True, mirror=True, ticklen=4, gridcolor='#ffffff', tickfont=dict(size=10)) layout = dict( width=1000, height=865, autosize=False, title= metric_name, margin = dict(t=75), showlegend=True, xaxis1=dict(axis, **dict(domain=[0, 1], anchor='y1', showticklabels=True)), xaxis2=dict(axis, **dict(domain=[0, 1], anchor='y2', showticklabels=True)), yaxis1=dict(axis, **dict(domain=[2 * 0.21 + 0.20 + 0.09, 1], anchor='x1', hoverformat='.2f')), yaxis2=dict(axis, **dict(domain=[0.21 + 0.12, 2 * 0.31 + 0.02], anchor='x2', hoverformat='.2f'))) fig = go.Figure(data = [table,anomalies,anomalies_map, upper_bound,lower_bound,Actuals,Predicted, Mvingavrg,Error], layout = layout) iplot(fig) pyplot.show() 
classify_df=detect_classify_anomalies(predicted_df,7) classify_df.reset_index(inplace=True) del classify_df['index'] classify_df.head() plot_anomaly(classify_df.iloc[:-6,:],"metric_name") ``` # LSTM ``` from pandas import DataFrame from pandas import Series from pandas import concat from pandas import read_csv from pandas import datetime from sklearn.metrics import mean_squared_error from sklearn.preprocessing import MinMaxScaler from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM from math import sqrt # frame a sequence as a supervised learning problem def timeseries_to_supervised(data, lag=1): df = DataFrame(data) columns = [df.shift(i) for i in range(1, lag+1)] columns.append(df) df = concat(columns, axis=1) df.fillna(0, inplace=True) return df # create a differenced series def difference(dataset, interval=1): diff = list() for i in range(interval, len(dataset)): value = dataset[i] - dataset[i - interval] diff.append(value) return Series(diff) # invert differenced value def inverse_difference(history, yhat, interval=1): return yhat + history[-interval] # scale train and test data to [-1, 1] def scale(train, test): # fit scaler scaler = MinMaxScaler(feature_range=(-1, 1)) scaler = scaler.fit(train) # transform train train = train.reshape(train.shape[0], train.shape[1]) train_scaled = scaler.transform(train) # transform test test = test.reshape(test.shape[0], test.shape[1]) test_scaled = scaler.transform(test) return scaler, train_scaled, test_scaled # inverse scaling for a forecasted value def invert_scale(scaler, X, value): new_row = [x for x in X] + [value] array = np.array(new_row) array = array.reshape(1, len(array)) inverted = scaler.inverse_transform(array) return inverted[0, -1] # fit an LSTM network to training data def fit_lstm(train, batch_size, nb_epoch, neurons): X, y = train[:, 0:-1], train[:, -1] X = X.reshape(X.shape[0], 1, X.shape[1]) model = Sequential() model.add(LSTM(neurons, 
batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True)) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') for i in range(nb_epoch): model.fit(X, y, epochs=1, batch_size=batch_size, verbose=0, shuffle=False) model.reset_states() return model # make a one-step forecast def forecast_lstm(model, batch_size, X): X = X.reshape(1, 1, len(X)) yhat = model.predict(X, batch_size=batch_size) return yhat[0,0] # LSTM supervised = timeseries_to_supervised(actual_log, 1) supervised_values = supervised.values # split data into train and test-sets train_lstm, test_lstm = supervised_values[0:-70], supervised_values[-70:] # transform the scale of the data scaler, train_scaled_lstm, test_scaled_lstm = scale(train_lstm, test_lstm) # fit the model batch,Epoch,Neurons lstm_model = fit_lstm(train_scaled_lstm, 1, 850 , 3) # forecast the entire training dataset to build up state for forecasting train_reshaped = train_scaled_lstm[:, 0].reshape(len(train_scaled_lstm), 1, 1) #lstm_model.predict(train_reshaped, batch_size=1) from matplotlib import pyplot import matplotlib.pyplot as plt import plotly.plotly as py import plotly.tools as tls # walk-forward validation on the test data predictions = list() for i in range(len(test_scaled_lstm)): #make one-step forecast X, y = test_scaled_lstm[i, 0:-1], test_scaled_lstm[i, -1] yhat = forecast_lstm(lstm_model, 1, X) # invert scaling yhat = invert_scale(scaler, X, yhat) # invert differencing #yhat = inverse_difference(raw_values, yhat, len(test_scaled)+1-i) # store forecast predictions.append(10**yhat) expected = actual_log[len(train_lstm) + i ] # line plot of observed vs predicted figsize=(12, 7) plt.figure(figsize=figsize) pyplot.plot(actual_vals[-70:],label='Actuals') pyplot.plot(predictions, color = "red",label='Predicted') pyplot.legend(loc='upper right') pyplot.show() ``` Now lets try this out in a different metric data. The data is for same time period. 
``` tf_df=pd.read_csv('../input/forecast-metric2/time_series_metric2.csv') tf_df.head() ``` With the same procedure followed above we use auto arima to get the best parameters and forecast stepwise. Plot the results of actuals and predictions made. ``` actual_vals = tf_df.actuals.values train, test = actual_vals[0:-70], actual_vals[-70:] train_log, test_log = np.log10(train), np.log10(test) from pyramid.arima import auto_arima stepwise_model = auto_arima(train_log, start_p=1, start_q=1, max_p=3, max_q=3, m=7, start_P=0, seasonal=True, d=1, D=1, trace=True, error_action='ignore', suppress_warnings=True, stepwise=True) history = [x for x in train_log] predictions = list() predict_log=list() for t in range(len(test_log)): #model = sm.tsa.SARIMAX(history, order=my_order, seasonal_order=my_seasonal_order,enforce_stationarity=False,enforce_invertibility=False) stepwise_model.fit(history,enforce_stationarity=False,enforce_invertibility=False) output = stepwise_model.predict(n_periods=1) predict_log.append(output[0]) yhat = 10**output[0] predictions.append(yhat) obs = test_log[t] history.append(obs) #print('predicted=%f, expected=%f' % (output[0], obs)) #error = math.sqrt(mean_squared_error(test_log, predict_log)) #print('Test rmse: %.3f' % error) # plot figsize=(12, 7) plt.figure(figsize=figsize) pyplot.plot(test,label='Actuals') pyplot.plot(predictions, color='red',label='Predicted') pyplot.legend(loc='upper right') pyplot.show() ``` Here the algorithm tries to chase down the actuals. Though this might be a good forecast where the error is low but the anomalous behaviour in the actuals cant be identified using this. This is a problem of using forecasting techniques for anomaly detection.We are trying to capture trends/seasonality in data along with not optimising too much on the error to get an exact replica of actuals(which makes us difficult to find anomalies). 
When using forecasting to detect anomalies, every metric needs to be validated and its parameters fine-tuned so that anomalies are actually caught. For metrics with a different distribution of data, a different approach to identifying anomalies needs to be followed. One more con: with Isolation Forest we detected anomalies for a use case comprising multiple metrics at a time, and then drilled down to the anomalies on the individual metrics within them, whereas with a forecasting mechanism we need separate correlation logic, since forecasting is done per metric. An algorithm like Isolation Forest separates out anomalous behavior from the data directly, which generalizes to multiple metrics.
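The three-step banding rule described earlier (error term, rolling mean/std over a week, limits at 1.5, 1.75, and 2 standard deviations) can be sketched on small synthetic data; the column names mirror the `predicted_df` frame above, and the injected spike is a made-up example:

```python
import numpy as np
import pandas as pd

# Sketch of the banding rule: error = actual - predicted, a 7-point
# rolling mean/std, and severity bands at 1.5, 1.75, and 2 std.
df = pd.DataFrame({'actuals': np.full(60, 100.0), 'predicted': 100.0})
df.loc[50, 'actuals'] += 40.0          # inject one obvious anomaly
df['error'] = df['actuals'] - df['predicted']
mean = df['error'].rolling(window=7).mean()
std = df['error'].rolling(window=7).std()
df['severity'] = 0
for level, k in enumerate([1.5, 1.75, 2.0], start=1):
    df.loc[(df['error'] - mean).abs() > k * std, 'severity'] = level
print(df.loc[50, 'severity'])  # -> 3: beyond 2 std, a "high" anomaly
```

On quiet days the error sits inside every band (severity 0); the spike at index 50 exceeds even the 2-std limit, so it lands in the highest band.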
``` import numpy as np # t_end = 4000 # input_spikes = np.zeros((t_end, 1)) # input_spikes[1020:1060:20, :] = 1 # input_spikes[1080:1180:30, :] = 1 # input_spikes[1240, :] = 1 # input_spikes[1900:2020:40, :] = 1 # input_spikes[3000, :] = 1 t_end = 300 input_spikes = np.zeros((t_end, 1)) input_spikes[20:160:15, :] = 1 input_spikes[250, :] = 1 def thr_trace(thr_trace, thr, beta): dt = 1. tau_adaptation = 700 b_decay = np.exp(-dt / tau_adaptation) b = 0 for t in range(input_spikes.shape[0]): # new_b = self.decay_b * state.b + (np.ones(self.n_rec) - self.decay_b) * state.z b = b_decay * b + (1. - b_decay) * input_spikes[t] thr_trace[t] = thr + beta * b return thr_trace alif_thr = np.zeros_like(input_spikes) alif_thr = thr_trace(alif_thr, thr=0.01, beta=1) elif_thr = np.zeros_like(input_spikes) elif_thr = thr_trace(elif_thr, thr=0.02, beta=-0.5) def w_trace(w_trace, tauD, tauF): U = 0.2 dt = 1. u_trc = np.ones_like(input_spikes)# * U x_trc = np.ones_like(input_spikes) u = 0. x = 0. for t in range(input_spikes.shape[0]): z = input_spikes[t] # old wrong: # u = U + (u - U * u * z - U * (1 - z)) * np.exp(-dt / tauF) # x = 1. + (x - u * x * z - 1.) * np.exp(-dt / tauD) # Guillaume mail # u_t+1 = exp(-dt/tau_f) u_t + U(1 - u_t) z_t # x'_t+1 = exp(-dt/tau_d) x'_t + u_t+1 (1 - x'_t) z_t # x_t = 1 - x'_t # Guillaume: Mongillo Tsodyks u = np.exp(-dt / tauF) * u + U * (1. - (u + U)) * z u_trc[t] = u + U x = np.exp(-dt / tauD) * x + u * (1. - x) * z x_trc[t] = 1. - x return u_trc * x_trc, u_trc, x_trc def v_trace(w_trace): dt = 1. 
v_trc = np.zeros_like(input_spikes) v = 0 for t in range(input_spikes.shape[0]): z = input_spikes[t] v = np.exp(-dt / 20) * v + z * w_trace[t-1] # no reset v_trc[t] = v return v_trc stpd_w = np.zeros_like(input_spikes) stpd_w, stpd_u, stpd_x = w_trace(stpd_w, tauD=700, tauF=20) stpd_v = v_trace(stpd_w) stpf_w = np.zeros_like(input_spikes) stpf_w, stpf_u, stpf_x = w_trace(stpf_w, tauD=200, tauF=500) stpf_v = v_trace(stpf_w) import matplotlib.pyplot as plt def raster_plot(ax,spikes,linewidth=0.8,**kwargs): n_t,n_n = spikes.shape event_times,event_ids = np.where(spikes) max_spike = 10000 event_times = event_times[:max_spike] event_ids = event_ids[:max_spike] for n,t in zip(event_ids,event_times): ax.vlines(t, n + 0., n + 1., linewidth=linewidth, **kwargs) ax.set_ylim([0 + .5, n_n + .5]) ax.set_xlim([0, n_t]) ax.set_yticks([0, n_n]) def strip_right_top_axis(ax): ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.get_xaxis().tick_bottom() ax.get_yaxis().tick_left() def hide_bottom_axis(ax): #ax.spines['bottom'].set_visible(False) ax.set_xticklabels([]) #ax.get_xaxis().set_visible(False) fig, ax_list = plt.subplots(nrows=5, figsize=(14, 10), gridspec_kw={'wspace': 0, 'hspace': 0.4}) lw = 2 ylabel_x = -0.05 ylabel_y = 0.5 la = 0.9 fs = 12 plt.rcParams.update({'font.size': fs}) # Clear the axis to print new plots for k in range(ax_list.shape[0]): ax = ax_list[k] ax.clear() strip_right_top_axis(ax) # INPUT SPIKES ax = ax_list[0] raster_plot(ax, input_spikes, linewidth=1.6) hide_bottom_axis(ax) ax.set_ylabel("input\nspikes", fontsize=fs) ax.get_yaxis().set_label_coords(ylabel_x, ylabel_y) ax.set_yticks([0, 1]) ax.set_yticklabels([]) ax.set_ylim([0, 1]) ax.set_xlim([0,t_end]) # ALIF threshold trace ax = ax_list[1] ax.plot(alif_thr, color='r', alpha=la, linewidth=lw) hide_bottom_axis(ax) ax.set_ylabel("ALIF\nthreshold", fontsize=fs) ax.get_yaxis().set_label_coords(ylabel_x, ylabel_y) ax.set_yticks([0.01, 0.02]) ax.set_xlim([0,t_end]) # ELIF threshold 
trace ax = ax_list[2] ax.plot(elif_thr, color='r', alpha=la, linewidth=lw) hide_bottom_axis(ax) ax.set_ylabel("ELIF\nthreshold", fontsize=fs) ax.get_yaxis().set_label_coords(ylabel_x, ylabel_y) ax.set_yticks([0.01, 0.02]) ax.set_xlim([0,t_end]) # STP-D effective psp trace ax = ax_list[3] ax.plot(stpd_v, color='r', alpha=la, linewidth=lw) ax.plot(np.ones_like(stpd_v) * 0.2, color='grey', alpha=la, linewidth=lw, linestyle='--', label='initial PSP spike amplitude') # ax.plot(stpd_u, color='b', alpha=la*0.5, linewidth=1, label='u') # ax.plot(stpd_x, color='g', alpha=la*0.5, linewidth=1, label='x') hide_bottom_axis(ax) ax.set_ylabel("STP-D\nPSP", fontsize=fs) ax.get_yaxis().set_label_coords(ylabel_x, ylabel_y) ax.set_yticks([0, 0.5]) ax.set_ylim([-0.02, 0.5]) ax.set_xlim([0,t_end]) ax.legend() # STP-F effective psp trace ax = ax_list[4] ax.plot(stpf_v, color='r', alpha=la, linewidth=lw, label='weight') ax.plot(np.ones_like(stpd_v) * 0.2, color='grey', alpha=la, linewidth=lw, linestyle='--') # ax.plot(stpf_u, color='b', alpha=la*0.5, linewidth=1, label='u') # ax.plot(stpf_x, color='g', alpha=la*0.5, linewidth=1, label='x') # hide_bottom_axis(ax) ax.set_ylabel("STP-F\nPSP", fontsize=fs) ax.get_yaxis().set_label_coords(ylabel_x, ylabel_y) ax.set_yticks([0, 0.5]) ax.set_ylim([-0.02, 0.5]) ax.set_xlim([0,t_end]) # # STP-D effective weight trace # ax = ax_list[5] # ax.plot(stpd_w, color='r', alpha=la, linewidth=1, label='weight') # # ax.plot(stpd_u, color='b', alpha=la*0.5, linewidth=1, label='u') # # ax.plot(stpd_x, color='g', alpha=la*0.5, linewidth=1, label='x') # hide_bottom_axis(ax) # ax.set_ylabel("STP-D\nweight", fontsize=fs) # ax.get_yaxis().set_label_coords(ylabel_x, ylabel_y) # ax.set_yticks([0, 0.4]) # ax.set_xlim([0,t_end]) # # STP-F effective weight trace # ax = ax_list[6] # ax.plot(stpf_w, color='r', alpha=la, linewidth=1, label='weight') # # ax.plot(stpf_u, color='b', alpha=la*0.5, linewidth=1, label='u') # # ax.plot(stpf_x, color='g', alpha=la*0.5, 
linewidth=1, label='x') # hide_bottom_axis(ax) # ax.set_yticks([0, 0.4]) # ax.set_xlim([0,t_end]) # ax.set_ylabel("STP-F\nweight", fontsize=fs) # ax.get_yaxis().set_label_coords(ylabel_x, ylabel_y) # # STP-D ux trace # ax = ax_list[7] # # ax.plot(stpd_w, color='r', alpha=la, linewidth=1, label='weight') # ax.plot(stpd_u, color='b', alpha=la, linewidth=3, label='u') # ax.plot(stpd_x, color='r', alpha=la, linewidth=3, label='x') # hide_bottom_axis(ax) # ax.set_ylabel("STP-D\nu, x", fontsize=fs) # ax.get_yaxis().set_label_coords(ylabel_x, ylabel_y) # ax.set_yticks([0, 1.]) # ax.set_xlim([0,t_end]) # # STP-F ux trace # ax = ax_list[8] # #ax.plot(stpf_w, color='r', alpha=la, linewidth=1, label='weight') # ax.plot(stpf_u, color='b', alpha=la, linewidth=3, label='u') # ax.plot(stpf_x, color='r', alpha=la, linewidth=3, label='x') # ax.set_yticks([0, 1.]) # ax.set_ylabel("STP-F\nu, x", fontsize=fs) # ax.get_yaxis().set_label_coords(ylabel_x, ylabel_y) #ax.set_xticks([t*1000 for t in range(5)]) #ax.set_xticklabels(['0', '', '', '', '4']) #ax.set_xlabel("seconds", fontsize=fs) ax.set_xlim([0,t_end]) plt.tight_layout() plt.show() fig.savefig('560_fig1.png', format='png', bbox_inches = 'tight', pad_inches = 0) fig.savefig('560_fig1.svg', format='svg', bbox_inches = 'tight', pad_inches = 0) ```
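The adaptive-threshold traces above are driven by a leaky integrator of the spike train (`b ← decay·b + (1 − decay)·z`, threshold = `thr + beta·b`). A minimal standalone check of that update, using the same τ = 700 as `thr_trace`:

```python
import numpy as np

# Leaky integrator used by thr_trace above: b decays with exp(-dt/tau)
# and jumps by (1 - decay) at each spike.
dt, tau = 1.0, 700.0
decay = np.exp(-dt / tau)
spikes = np.zeros(100)
spikes[10] = 1.0
b, trace = 0.0, []
for z in spikes:
    b = decay * b + (1.0 - decay) * z
    trace.append(b)
print(trace[10] > trace[50] > 0.0)  # -> True: jump at the spike, then decay
```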
## Passive tracers We can follow the time evolution of material in a flow field with passive tracers. Here we trace particles through a time-independent flow field and show how to march through time. The flow field could easily be changed at each step (exercise). **New concepts:** Passive tracer swarms, tracer advection (equation template), variables on particle swarms ``` import underworld as uw from underworld import function as fn import glucifer import numpy as np res = 32 boxHeight = 1.0 aspect_ratio = 2.0 # A mesh to solve velocity and pressure VPmesh = uw.mesh.FeMesh_Cartesian( elementType = ("Q2/dPc1"), elementRes = (int(res * aspect_ratio), res), minCoord = (0., 0.), maxCoord = (boxHeight*aspect_ratio, boxHeight)) velocityField = uw.mesh.MeshVariable( mesh=VPmesh, nodeDofCount=2 ) pressureField = uw.mesh.MeshVariable( mesh=VPmesh.subMesh, nodeDofCount=1 ) velocityField.data[:,:] = 0.0 pressureField.data[:] = 0.0 # Boundary conditions - specify: # Vx on side walls (0) and Vx on top (1) # Vy on top / bottom (0) vxWalls = VPmesh.specialSets["MinI_VertexSet"] + \ VPmesh.specialSets["MaxI_VertexSet"] + \ VPmesh.specialSets["MaxJ_VertexSet"] vyWalls = VPmesh.specialSets["MinJ_VertexSet"] + \ VPmesh.specialSets["MaxJ_VertexSet"] # We only need to specify the non-zero value of the driving terms because we zeroed everything previously for index in VPmesh.specialSets["MaxJ_VertexSet"]: velocityField.data[index,0] = 1.0 # Now register that information velocityBC = uw.conditions.DirichletCondition( variable = velocityField, indexSetsPerDof = (vxWalls, vyWalls) ) ``` ## Passive tracer "swarm" In parallel, the management of swarms of particles with a changing distribution across processors can be quite complicated. Underworld does most things behind the scenes but only when we use the built in functionality to handle and transport particles. 
Here is how it works: ``` pt_swarm = uw.swarm.Swarm( mesh=VPmesh, particleEscape=True) pt_swarmLayout = uw.swarm.layouts.PerCellRandomLayout( swarm=pt_swarm, particlesPerCell=5 ) pt_swarm.populate_using_layout( layout=pt_swarmLayout ) # The passive tracer can carry information as well # We can store the initial y coordinate, for example pt_data = pt_swarm.add_variable( dataType="float", count=1) # scalar value pt_data.data[:,0] = pt_swarm.particleCoordinates.data[:,1] fig1 = glucifer.Figure() fig1.append( glucifer.objects.Mesh(mesh=VPmesh, opacity=0.5)) fig1.append( glucifer.objects.Points(swarm=pt_swarm, pointSize=5.0, opacity=0.95, colourBar=False, fn_colour=pt_data) ) fig1.show() # The equations are templated already stokesPIC = uw.systems.Stokes( velocityField = velocityField, pressureField = pressureField, conditions = [velocityBC,], fn_viscosity = 1.0, fn_bodyforce = (0.0,0.0) ) # And a suitable solver package is already attached to it solver = uw.systems.Solver( stokesPIC ) solver.solve() fig2 = glucifer.Figure() fig2.append( glucifer.objects.VectorArrows( VPmesh, velocityField, arrowHead=0.2, scaling=0.1 ) ) fig2.append( glucifer.objects.Surface( VPmesh, pressureField ) ) fig2.show() ``` ## Timestep the particles in a steady flow ``` advector = uw.systems.SwarmAdvector( swarm=pt_swarm, velocityField=velocityField, order=2 ) advector.get_max_dt() # similar to CFL condition, not allowing particles to move too far. time=0.0 timeEnd = 25.0 while time<timeEnd: dt = advector.get_max_dt() advector.integrate(dt) time += dt # Note the lazy evaluation in fig1 ... it updates whenever fig1 = glucifer.Figure() fig1.append( glucifer.objects.Mesh(mesh=VPmesh, opacity=0.5)) fig1.append( glucifer.objects.Points(swarm=pt_swarm, pointSize=5.0, opacity=0.95, colourBar=False, fn_colour=pt_data) ) fig1.show() ``` ## Exercises You could reproduce the work of Kellogg and Turcotte (1990) ! 1. 
Either by using their prescribed, time-dependent flow equations and loading them into the velocity field, or 2. By developing a time-dependent surface boundary condition to create a similar chaotic flow.

Kellogg, L. H., and D. L. Turcotte (1990), Mixing and the distribution of heterogeneities in a chaotically convecting mantle, Journal of Geophysical Research: Solid Earth (1978–2012), 95(B1), 421–432, doi:10.1029/JB095iB01p00421.

You could also try making a streak plot by adding a number of new passive swarm particles launched from the same spot for the first N timesteps and then plotting all of them. [Solution](042-Exercise-StreakPlot-Solution.ipynb)
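The `SwarmAdvector` above advances each particle with a second-order (midpoint) scheme, with `get_max_dt()` bounding the step so particles cannot move too far per step. As a rough, Underworld-free sketch of that update rule — the rigid-rotation velocity field and step size here are invented for illustration, not taken from the notebook:

```python
import numpy as np

def velocity(p):
    # A steady rigid-rotation field about the origin: v = (-y, x)
    return np.stack([-p[:, 1], p[:, 0]], axis=1)

def advect_rk2(p, dt):
    # Midpoint (second-order Runge-Kutta) step, analogous to order=2 advection
    mid = p + 0.5 * dt * velocity(p)
    return p + dt * velocity(mid)

# One tracer starting at (1, 0); after one full period (2*pi) it should return home
p = np.array([[1.0, 0.0]])
dt = 1e-3
for _ in range(int(2 * np.pi / dt)):
    p = advect_rk2(p, dt)
print(p)
```

A first-order (Euler) step with the same `dt` spirals outward noticeably; the midpoint correction is what keeps long tracer integrations usable.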
```
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)

# Toggle cell visibility
from IPython.display import HTML
tag = HTML('''<script>
code_show=true;
function code_toggle() {
    if (code_show){
        $('div.input').hide()
    } else {
        $('div.input').show()
    }
    code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
Toggle cell visibility <a href="javascript:code_toggle()">here</a>.''')
display(tag)

# Hide the code completely
# from IPython.display import HTML
# tag = HTML('''<style>
# div.input {
#     display:none;
# }
# </style>''')
# display(tag)
```

## Diagonal matrices: only convergent modes

A diagonal matrix is a special matrix whose entries outside the main diagonal are all zero. Two examples of such matrices are:

$$D_{1}=\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \quad \text{and} \quad D_{2}=\begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}. $$

From the point of view of systems theory, matrices of this type represent systems in which the dynamics of each state variable is not influenced by the others. The dynamics of the state variables are said to be decoupled.

Below is an example of a diagonal matrix with all convergent modes; you can change any value and check whether the matrix is diagonal and has all convergent modes.
``` %matplotlib notebook import control import numpy from IPython.display import display, Markdown import ipywidgets as widgets import matplotlib.pyplot as plt from matplotlib import animation #print a matrix latex-like def bmatrix(a): """Returns a LaTeX bmatrix - by Damir Arbula (ICCT project) :a: numpy array :returns: LaTeX bmatrix as a string """ if len(a.shape) > 2: raise ValueError('bmatrix can at most display two dimensions') lines = str(a).replace('[', '').replace(']', '').splitlines() rv = [r'\begin{bmatrix}'] rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines] rv += [r'\end{bmatrix}'] return '\n'.join(rv) # Display formatted matrix: def vmatrix(a): if len(a.shape) > 2: raise ValueError('bmatrix can at most display two dimensions') lines = str(a).replace('[', '').replace(']', '').splitlines() rv = [r'\begin{vmatrix}'] rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines] rv += [r'\end{vmatrix}'] return '\n'.join(rv) #matrixWidget is a matrix looking widget built with a VBox of HBox(es) that returns a numPy array as value ! 
class matrixWidget(widgets.VBox): def updateM(self,change): for irow in range(0,self.n): for icol in range(0,self.m): self.M_[irow,icol] = self.children[irow].children[icol].value #print(self.M_[irow,icol]) self.value = self.M_ #def dummychangecallback(self,change): #pass def __init__(self,n,m): self.n = n self.m = m self.M_ = numpy.matrix(numpy.zeros((self.n,self.m))) self.value = self.M_ widgets.VBox.__init__(self, children = [ widgets.HBox(children = [widgets.FloatText(value=0.0, layout=widgets.Layout(width='90px')) for i in range(m)] ) for j in range(n) ]) #fill in widgets and tell interact to call updateM each time a children changes value for irow in range(0,self.n): for icol in range(0,self.m): self.children[irow].children[icol].value = self.M_[irow,icol] self.children[irow].children[icol].observe(self.updateM, names='value') #value = Unicode('example@example.com', help="The email value.").tag(sync=True) self.observe(self.updateM, names='value', type= 'All') def setM(self, newM): #disable callbacks, change values, and reenable self.unobserve(self.updateM, names='value', type= 'All') for irow in range(0,self.n): for icol in range(0,self.m): self.children[irow].children[icol].unobserve(self.updateM, names='value') self.M_ = newM self.value = self.M_ for irow in range(0,self.n): for icol in range(0,self.m): self.children[irow].children[icol].value = self.M_[irow,icol] for irow in range(0,self.n): for icol in range(0,self.m): self.children[irow].children[icol].observe(self.updateM, names='value') self.observe(self.updateM, names='value', type= 'All') #self.children[irow].children[icol].observe(self.updateM, names='value') #overlaod class for state space systems that DO NOT remove "useless" states (what "professor" of automatic control would do this?) 
class sss(control.StateSpace):
    def __init__(self, *args):
        # Call the base class constructor
        control.StateSpace.__init__(self, *args)
    # Disable the function below from the base class
    def _remove_useless_states(self):
        pass

A = matrixWidget(4, 4)
A.setM(numpy.matrix('-1,0,0,0;0,-2,0,0;0,0,-3,0;0,0,0,-4'))

def main_callback(matA, DW):
    (r, c) = numpy.shape(matA)
    print('The eigenvalues are: %s' % str(numpy.linalg.eig(matA)[0]))
    for i in range(0, r):
        for j in range(0, c):
            if i != j:
                if matA[i, j] != 0:
                    if all(numpy.real(numpy.linalg.eig(matA)[0]) <= 0):
                        if all(numpy.real(numpy.linalg.eig(matA)[0]) < 0):
                            print('The matrix is stable but it is not diagonal')
                            return
                        else:
                            print('The matrix has poles with zero real part and it is not diagonal')
                            return
                    if any(numpy.real(numpy.linalg.eig(matA)[0]) > 0):
                        print('The matrix is unstable and it is not diagonal')
                        return
            else:
                if matA[i, j] > 0:
                    print('The matrix is unstable')
                    return
                if matA[i, j] == 0:
                    print('The matrix has poles with zero real part')
                    return
    print('Ok!')

# Create dummy widget
DW = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px'))

# Create button widget
START = widgets.Button(
    description='Test',
    disabled=False,
    button_style='',  # 'success', 'info', 'warning', 'danger' or ''
    tooltip='Test',
    icon='check'
)

def on_start_button_clicked(b):
    # This is a workaround to have interactive_output call the callback:
    # force the value of the dummy widget to change
    if DW.value > 0:
        DW.value = -1
    else:
        DW.value = 1

START.on_click(on_start_button_clicked)

out = widgets.interactive_output(main_callback, {'matA': A, 'DW': DW})
display(A, START, out)
```
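Because the eigenvalues of a diagonal matrix are exactly its diagonal entries, the behaviour reported by the widget can be checked by hand. A small plain-NumPy sketch of the decoupled modes, using the default matrix loaded into the widget above, where each state evolves independently as $x_i(t) = e^{a_{ii}t}\,x_i(0)$:

```python
import numpy as np

A = np.diag([-1.0, -2.0, -3.0, -4.0])  # the default matrix set in the widget above
x0 = np.ones(4)
t = 2.0

# Decoupled dynamics: each state variable evolves independently of the others
x_t = np.exp(np.diag(A) * t) * x0
print(x_t)

# All eigenvalues (the diagonal entries) have negative real part,
# so every mode converges to zero
eigvals = np.linalg.eigvals(A)
print(np.all(eigvals.real < 0))
```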
``` from google.colab import drive drive.mount('/gdrive') %cd /gdrive %cd /gdrive/My Drive/chest_xray from __future__ import absolute_import, division, print_function, unicode_literals try: # %tensorflow_version only exists in colab %tensorflow_version 2.x except Exception: pass import tensorflow as tf from tensorflow.keras.applications import ResNet50 from tensorflow.keras.models import Model, Sequential from tensorflow.keras.layers import Input, Dense, Flatten, Dropout, BatchNormalization, GlobalAveragePooling2D from tensorflow.keras.layers import Conv2D, SeparableConv2D, MaxPool2D, LeakyReLU, Activation from tensorflow.keras.optimizers import Adam from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau, EarlyStopping import os import numpy as np import pandas as pd # import random import cv2 import matplotlib.pyplot as plt %matplotlib inline train_dir = os.path.join(os.getcwd(), 'train') dir_pneumonia_train = os.path.join(train_dir, 'PNEUMONIA_train') dir_normal_train = os.path.join(train_dir, 'NORMAL_train') val_dir = os.path.join(os.getcwd(), 'validation') dir_normal_val = os.path.join(val_dir, 'NORMAL_test') dir_pneumonia_val = os.path.join(val_dir, 'PNEUMONIA_test') #checking the number of images in each directory num_pneumonia_train = len(os.listdir(dir_pneumonia_train)) num_pneumonia_val = len(os.listdir(dir_pneumonia_val)) total_pneumonia_images = num_pneumonia_train + num_pneumonia_val num_normal_train = len(os.listdir(dir_normal_train)) num_normal_val = len(os.listdir(dir_normal_val)) total_normal_images = num_normal_train + num_normal_val total_train = num_pneumonia_train + num_normal_train total_val = num_pneumonia_val + num_normal_val print('total training pneumonia images: ', num_pneumonia_train) print('total validation pneumonia images: ', num_pneumonia_val) print('total pneumonia images: ', total_pneumonia_images) print('\ntotal training normal images: ', 
num_normal_train) print('total validation normal images: ', num_normal_val) print('total normal images: ', total_normal_images) print('\ntotal train images: ', total_train) print('\ntotal validation images: ', total_val) #setting variables to use while preprocessing data batch_size = 100 epochs = 15 IMG_HEIGHT = 100 IMG_WIDTH = 100 # This is useful to get the confusion matrix test_data = [] test_labels = [] for dir in [dir_pneumonia_val, dir_normal_val]: for img in (os.listdir(dir)): img = plt.imread(os.path.join(dir, img)) img = cv2.resize(img, (IMG_HEIGHT, IMG_WIDTH)) img = np.dstack([img, img, img]) img = img.astype('float32') / 255 if dir == dir_pneumonia_val: label = 1 elif dir == dir_normal_val: label = 0 test_data.append(img) test_labels.append(label) # Data generation train_image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.3, vertical_flip=True) val_image_gen = ImageDataGenerator(rescale=1./255) train_data_gen = train_image_gen.flow_from_directory(batch_size=batch_size, directory=train_dir, shuffle=True, target_size=(IMG_HEIGHT, IMG_WIDTH), class_mode='binary') val_data_gen = val_image_gen.flow_from_directory(batch_size=batch_size, directory=val_dir, target_size=(IMG_HEIGHT, IMG_WIDTH), class_mode='binary') # Creating Model img_dims = (IMG_HEIGHT, IMG_WIDTH, 3) model = Sequential([ Conv2D(16, 3, activation='relu', padding='same', input_shape=img_dims), MaxPool2D(), SeparableConv2D(32, 3, activation='relu', padding='same'), BatchNormalization(), MaxPool2D(), SeparableConv2D(64, 3, activation='relu', padding='same'), BatchNormalization(), MaxPool2D(), SeparableConv2D(128, 3, activation='relu', padding='same'), BatchNormalization(), MaxPool2D(), Dropout(rate=0.2), SeparableConv2D(256, 3, activation='relu', padding='same'), BatchNormalization(), MaxPool2D(), Dropout(rate=0.1), Flatten(), Dense(512, activation='relu'), Dropout(rate=0.3), Dense(units=128, activation='relu'), Dropout(rate=0.2), Dense(64, activation='relu'), # Dropout(rate=0.15), Dense(1, 
activation='sigmoid')
])

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Callbacks
checkpoint = ModelCheckpoint(filepath='best_weights.hdf5', save_best_only=True, save_weights_only=True)
# val_loss should be minimised, so mode must be 'min' (not 'max')
lr_reduce = ReduceLROnPlateau(monitor='val_loss', factor=0.3, patience=2, verbose=2, mode='min')
early_stop = EarlyStopping(monitor='val_loss', min_delta=0.1, patience=1, mode='min')

model.summary()

history = model.fit(
    train_data_gen,
    steps_per_epoch=total_train // batch_size,
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=total_val // batch_size,
    callbacks=[checkpoint, lr_reduce])

from sklearn.metrics import accuracy_score, confusion_matrix

test_data = np.array(test_data)
preds = model(test_data)
pred = np.array([int(x) for x in np.round(preds)]).reshape(-1, 1)

acc = accuracy_score(test_labels, pred)*100
cm = confusion_matrix(test_labels, pred)
tn, fp, fn, tp = cm.ravel()

print('CONFUSION MATRIX ------------------')
print(cm)

print('\nTEST METRICS ----------------------')
precision = tp/(tp+fp)*100
recall = tp/(tp+fn)*100
print('Accuracy: {}%'.format(acc))
print('Precision: {}%'.format(precision))
print('Recall: {}%'.format(recall))
print('F1-score: {}'.format(2*precision*recall/(precision+recall)))

print('\nTRAIN METRIC ----------------------')
print('Train acc: {}'.format(np.round((history.history['accuracy'][-1])*100, 2)))

# Visualize training results
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

# Transfer learning
img_dims = (IMG_HEIGHT, IMG_WIDTH, 3)

model_learn = Sequential()
model_learn.add(
    ResNet50(
        include_top=False,
        pooling='avg',
        weights='imagenet'
    )
)
# A single-unit binary output needs a sigmoid activation
# (softmax over one unit would always output 1)
model_learn.add(
    Dense(1, activation='sigmoid')
)

# Say not to train the first layer (ResNet) of the model: it is already trained
model_learn.layers[0].trainable = False

model_learn.compile(optimizer='adam',
                    loss='binary_crossentropy',
                    metrics=['accuracy'])

# Callbacks
checkpoint = ModelCheckpoint(filepath='best_weights_learn.hdf5', save_best_only=True, save_weights_only=True)
lr_reduce = ReduceLROnPlateau(monitor='val_loss', factor=0.3, patience=2, verbose=2, mode='min')
early_stop = EarlyStopping(monitor='val_loss', min_delta=0.1, patience=1, mode='min')

model_learn.summary()

history = model_learn.fit_generator(
    train_data_gen,
    steps_per_epoch=total_train // batch_size,
    epochs=10,
    validation_data=val_data_gen,
    validation_steps=total_val // batch_size,
    callbacks=[lr_reduce]
)
```
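The test metrics above come straight from the confusion-matrix counts. A self-contained sketch of the same formulas on a made-up prediction vector (the labels here are invented for the example):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    # Confusion-matrix counts for a binary problem (1 = pneumonia)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / len(y_true)
    return accuracy, precision, recall, f1

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 0])
acc, prec, rec, f1 = binary_metrics(y_true, y_pred)
print(acc, prec, rec, f1)
```

With an imbalanced test set like this one, precision and recall are far more informative than accuracy alone, which is why the notebook prints all of them.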
``` import imgaug.augmenters as iaa import mlflow.pytorch import numpy as np import torch from torch.utils.data import DataLoader from torchvision.transforms import Compose from tqdm import tqdm import sys sys.path.append('../../') from src import MODELS_DIR, MLFLOW_TRACKING_URI, DATA_PATH from src.data import TrainValTestSplitter, MURASubset from src.data.transforms import GrayScale, Resize, HistEqualisation, MinMaxNormalization, ToTensor from src.features.augmentation import Augmentation from src.models.alphagan import AlphaGan from src.models.sagan import SAGAN from src.models.autoencoders import BottleneckAutoencoder, BaselineAutoencoder, SkipConnection from src.models.gans import DCGAN from src.models.vaetorch import VAE from sklearn.metrics import roc_auc_score, average_precision_score import matplotlib.pyplot as plt %matplotlib inline run_params = { 'image_resolution': (512, 512), 'pipeline': { 'hist_equalisation': False, 'data_source': 'XR_HAND_PHOTOSHOP', } } augmentation_seq = iaa.Sequential([iaa.PadToFixedSize(*run_params['image_resolution'], position='center')]) composed_transforms = Compose([GrayScale(), HistEqualisation(active=run_params['pipeline']['hist_equalisation']), Resize(run_params['image_resolution'], keep_aspect_ratio=True), Augmentation(augmentation_seq), MinMaxNormalization(), ToTensor()]) data_path = f'{DATA_PATH}/{run_params["pipeline"]["data_source"]}' splitter = TrainValTestSplitter(path_to_data=data_path) composed_transforms_val = Compose([GrayScale(), HistEqualisation(active=run_params['pipeline']['hist_equalisation']), Resize(run_params['image_resolution'], keep_aspect_ratio=True), Augmentation(iaa.Sequential( [iaa.PadToFixedSize(*run_params['image_resolution'], position='center')])), # Padding(max_shape=run_params['image_resolution']), # max_shape - max size of image after augmentation MinMaxNormalization(), ToTensor()]) test = MURASubset(filenames=splitter.data_test.path, true_labels=splitter.data_test.label, 
patients=splitter.data_test.patient, transform=composed_transforms_val) test_loader = DataLoader(test, batch_size=64, shuffle=True, num_workers=5) ``` ## Baseline autoencoder ``` path_to_model = '/home/ubuntu/mlruns/1/5ca7f67c33674926a00590752c877fe5/artifacts/BaselineAutoencoder.pth' model = torch.load(path_to_model, map_location='cpu') model.eval().to('cpu') # Evaluation mode model.eval() with torch.no_grad(): scores = [] true_labels = [] for batch_data in tqdm(test_loader, total=len(test_loader)): # Format input batch inp = batch_data['image'].to('cpu') mask = batch_data['mask'].to('cpu') # Forward pass output = model(inp) loss = model.outer_loss(output, inp, mask) if model.masked_loss_on_val else model.outer_loss(output, inp) # Scores, based on MSE - higher MSE correspond to abnormal image if model.masked_loss_on_val: sum_loss = loss.to('cpu').numpy().sum(axis=(1, 2, 3)) sum_mask = mask.to('cpu').numpy().sum(axis=(1, 2, 3)) score = sum_loss / sum_mask else: score = loss.to('cpu').numpy().mean(axis=(1, 2, 3)) scores.extend(score) true_labels.extend(batch_data['label'].numpy()) scores = np.array(scores) true_labels = np.array(true_labels) # ROC-AUC and APS roc_auc = roc_auc_score(true_labels, scores) aps = average_precision_score(true_labels, scores) print(f'ROC-AUC on test: {roc_auc}') print(f'APS on test: {aps}') ``` ## Bottleneck autoencoder ``` path_to_model = '/home/ubuntu/mlruns/2/d4fc0453d67b4d5aaac6c353e9264716/artifacts/BottleneckAutoencoder/data/model.pth' model = torch.load(path_to_model, map_location='cpu') model.eval().to('cpu') # Evaluation mode model.eval() with torch.no_grad(): scores = [] true_labels = [] for batch_data in tqdm(test_loader, total=len(test_loader)): # Format input batch inp = batch_data['image'].to('cpu') mask = batch_data['mask'].to('cpu') # Forward pass output = model(inp) loss = model.outer_loss(output, inp, mask) if model.masked_loss_on_val else model.outer_loss(output, inp) # Scores, based on MSE - higher MSE correspond to 
abnormal image if model.masked_loss_on_val: sum_loss = loss.to('cpu').numpy().sum(axis=(1, 2, 3)) sum_mask = mask.to('cpu').numpy().sum(axis=(1, 2, 3)) score = sum_loss / sum_mask else: score = loss.to('cpu').numpy().mean(axis=(1, 2, 3)) scores.extend(score) true_labels.extend(batch_data['label'].numpy()) scores = np.array(scores) true_labels = np.array(true_labels) # ROC-AUC and APS roc_auc = roc_auc_score(true_labels, scores) aps = average_precision_score(true_labels, scores) print(f'ROC-AUC on test: {roc_auc}') print(f'APS on test: {aps}') ``` ## Variational autoencoder ``` path_to_model = '/home/diana/xray/models/VAE.pth' model = torch.load(path_to_model, map_location='cpu') model.eval().to('cpu') model.device = 'cpu' # Evaluation mode model.eval() with torch.no_grad(): losses = [] true_labels = [] for batch_data in tqdm(test_loader, total=len(test_loader)): # Format input batch inp = batch_data['image'].to('cpu') mask = batch_data['mask'].to('cpu') # forward pass output, mu, var = model(inp) loss = model.loss(output, inp, mu, var, reduction='none') losses.extend(loss.to('cpu').numpy().mean(axis=1)) true_labels.extend(batch_data['label'].numpy()) losses = np.array(losses) true_labels = np.array(true_labels) # ROC-AUC and APS roc_auc = roc_auc_score(true_labels, losses) aps = average_precision_score(true_labels, losses) print(f'ROC-AUC on test: {roc_auc}') print(f'APS on test: {aps}') ``` ## DCGAN ``` path_to_model = '/home/ubuntu/mlruns/4/bc66df523f424e978c68cd25f472a696/artifacts/DCGAN_good.pth' model = torch.load(path_to_model, map_location='cpu') model.eval().to('cpu') model.device = 'cpu' with torch.no_grad(): scores = [] true_labels = [] for batch_data in tqdm(test_loader, total=len(test_loader)): # Format input batch inp = batch_data['image'].to(model.device) # Forward pass output = model.discriminator(inp).to('cpu').numpy().reshape(-1) # Scores, based on output of discriminator - Higher score must correspond to positive labeled images score = output if 
bool(model.real_label) else 1 - output
        scores.extend(score)
        true_labels.extend(batch_data['label'].numpy())

scores = np.array(scores)
true_labels = np.array(true_labels)

# ROC-AUC and APS
roc_auc = roc_auc_score(true_labels, -scores)
aps = average_precision_score(true_labels, -scores)
print(f'ROC-AUC on test: {roc_auc}')
print(f'APS on test: {aps}')
```

## Bi-GAN

```
run_params = {
    'image_resolution': (128, 128),
    'pipeline': {
        'hist_equalisation': False,
        'data_source': 'XR_HAND_PHOTOSHOP',
    }
}

augmentation_seq = iaa.Sequential([iaa.PadToFixedSize(*run_params['image_resolution'], position='center')])
composed_transforms = Compose([GrayScale(),
                               HistEqualisation(active=run_params['pipeline']['hist_equalisation']),
                               Resize(run_params['image_resolution'], keep_aspect_ratio=True),
                               Augmentation(augmentation_seq),
                               MinMaxNormalization(),
                               ToTensor()])

test = MURASubset(filenames=splitter.data_test.path, true_labels=splitter.data_test.label,
                  patients=splitter.data_test.patient, transform=composed_transforms_val)
test_loader = DataLoader(test, batch_size=1, shuffle=True, num_workers=5)

path_to_model = '/home/ubuntu/xray/models/SAGAN200.pth'
model = torch.load(path_to_model, map_location='cpu')
model.eval().to('cpu')
model.device = 'cpu'

with torch.no_grad():
    scores_mse = []
    scores_proba = []
    true_labels = []
    for batch_data in tqdm(test_loader, total=len(test_loader)):
        # Format input batch
        inp = batch_data['image'].to(model.device)
        mask = batch_data['mask'].to(model.device)

        # Forward pass
        real_z, _, _ = model.encoder(inp)
        if len(real_z.size()) == 1:
            real_z = real_z.view(1, real_z.size(0))
        reconstructed_img, _, _ = model.generator(real_z)
        loss = model.outer_loss(reconstructed_img, inp, mask) if model.masked_loss_on_val \
            else model.outer_loss(reconstructed_img, inp)

        # Scores, based on output of discriminator - a higher score must correspond to positively labelled images
        # ('self' is undefined at module level; the loaded model's discriminator is used)
        proba = model.discriminator(inp, real_z)[0].to('cpu').numpy().reshape(-1)

        # Scores, based on MSE -
        # higher MSE corresponds to an abnormal image
        if model.masked_loss_on_val:
            sum_loss = loss.to('cpu').numpy().sum(axis=(1, 2, 3))
            sum_mask = mask.to('cpu').numpy().sum(axis=(1, 2, 3))
            score = sum_loss / sum_mask
        else:
            score = loss.to('cpu').numpy().mean(axis=(1, 2, 3))

        scores_mse.extend(score)
        scores_proba.extend(proba)
        true_labels.extend(batch_data['label'].numpy())

scores_mse = np.array(scores_mse)
scores_proba = np.array(scores_proba)
true_labels = np.array(true_labels)

# ROC-AUC and APS ('scores' was undefined here; the MSE-based scores are used,
# without negation, since higher MSE indicates an abnormal image)
roc_auc = roc_auc_score(true_labels, scores_mse)
aps = average_precision_score(true_labels, scores_mse)
print(f'ROC-AUC on test: {roc_auc}')
print(f'APS on test: {aps}')
```

## Alpha-GAN

```
path_to_model = '/home/ubuntu/xray/models/AlphaGan300_best.pth'
model = torch.load(path_to_model, map_location='cpu')
model.eval().to('cpu')
model.device = 'cpu'

with torch.no_grad():
    scores_mse = []
    scores_proba = []
    true_labels = []
    for batch_data in tqdm(test_loader, total=len(test_loader)):
        # Format input batch
        inp = batch_data['image'].to(model.device)
        mask = batch_data['mask'].to(model.device)

        # Forward pass
        z_mean, _, _, _ = model.encoder(inp)
        if len(z_mean.size()) == 1:
            z_mean = z_mean.view(1, z_mean.size(0))
        reconstructed_img, _, _ = model.generator(z_mean)
        loss = model.outer_loss(reconstructed_img, inp, mask) if model.masked_loss_on_val \
            else model.outer_loss(reconstructed_img, inp)

        # Scores, based on output of discriminator - a higher score must correspond to positively labelled images
        # ('self' and 'real_z' are undefined here; the loaded model's discriminator and z_mean are used)
        proba = model.discriminator(inp, z_mean)[0].to('cpu').numpy().reshape(-1)

        # Scores, based on MSE - higher MSE corresponds to an abnormal image
        if model.masked_loss_on_val:
            sum_loss = loss.to('cpu').numpy().sum(axis=(1, 2, 3))
            sum_mask = mask.to('cpu').numpy().sum(axis=(1, 2, 3))
            score = sum_loss / sum_mask
        else:
            score = loss.to('cpu').numpy().mean(axis=(1, 2, 3))

        scores_mse.extend(score)
        scores_proba.extend(proba)
        true_labels.extend(batch_data['label'].numpy())

scores_mse = np.array(scores_mse)
scores_proba = np.array(scores_proba)
true_labels = np.array(true_labels)
```
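Every evaluation above follows the same pattern: compute a per-image anomaly score (reconstruction MSE or a discriminator output) and feed it to `roc_auc_score`. As a sanity check of what that number means, here is a small sketch using the rank (Mann-Whitney) formulation of ROC-AUC, on invented scores where abnormal images reconstruct worse:

```python
import numpy as np

def rank_auc(labels, scores):
    # Probability that a randomly chosen abnormal case scores higher than
    # a randomly chosen normal one (the Mann-Whitney U view of ROC-AUC)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Per-image reconstruction MSE as the anomaly score: a model trained only on
# normal images should reconstruct abnormal ones worse
labels = np.array([0, 0, 0, 1, 1])
scores = np.array([0.10, 0.12, 0.30, 0.25, 0.50])
print(rank_auc(labels, scores))
```

This is also why the sign convention matters: negating a score vector maps an AUC of $a$ to $1-a$, so a score that is accidentally negated shows up as a suspiciously below-0.5 AUC.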
# Exporting KITTI Raw Data

### Download

The developed SLAM method requires point clouds, a start position, and a ground truth trajectory for comparison. This script exports datasets from raw data of the [KITTI Vision Benchmark Suite](http://www.cvlibs.net/). Individual recording drives can be downloaded [here](http://www.cvlibs.net/datasets/kitti/raw_data.php). Required are the synchronized *synced+rectified data* as well as the sensor *calibration*.

The data and calibrations are unpacked, for example, into this path: *C:/KITTI/2011_09_26/*

So that the following folders and files are present:

*2011_09_26_drive_0001_sync*
*calib_cam_to_cam.txt*
*calib_imu_to_velo.txt*
*calib_velo_to_cam.txt*

The data required for the SLAM is likewise stored under this path, in this folder: *2011_09_26_drive_0001_export*

### Selecting the Drive and Loading the Data

Selection of the recording drive:

```
basedir = 'C:/KITTI'
date = '2011_09_26'
drive = '0013'
```

The [pykitti](https://github.com/utiasSTARS/pykitti) library is used to read the KITTI data.

```
import numpy as np
import pykitti
import utm
import os

pathSave = basedir+'/'+date+'/'+date+'_drive_'+drive+'_export/'
if not os.path.exists(pathSave):
    os.makedirs(pathSave)

dataset = pykitti.raw(basedir,date,drive)
```

### Exporting the Ground Truth Trajectory

As shown in the figure, the origin of the positioning system's coordinate system, and thus of the trajectory, lies at a different position than that of the laser scanner. The origin of the laser scanner's coordinate system was chosen as the position of the vehicle, so the trajectory must be transformed so that it lies at the origin of the laser scanner's coordinate system.
<img src="fig/KITTI_sensor.png" alt="Measurement vehicle setup" style="width: 500px;"/>

For this, the transformation matrix is loaded from the calibration file *calib_imu_to_velo* and its translation is used:

```
T = np.matrix(dataset.calib.T_velo_imu[0:2,3]).transpose()
```

For each position of the positioning system's trajectory, this vector is rotated by the yaw angle of the vehicle. With the rotated translation vector, the trajectory point of the positioning system can be transformed to the origin of the laser scanner.

```
# save ground truth trajectory here
groundTruth = []

PosSystem = dataset.oxts

for _ in dataset.velo:
    # get next pose of position system
    pose = next(PosSystem)

    # get latitude and longitude from position system
    latitude = pose.packet.lat
    longitude = pose.packet.lon

    # calculate UTM coordinates from latitude and longitude
    posUtm = utm.from_latlon(latitude,longitude)
    posUtm = np.matrix([posUtm[0],posUtm[1]])

    # get yaw of position system and create 2D rotation matrix
    yaw = pose.packet.yaw
    R = np.matrix([[np.cos(yaw), -np.sin(yaw)],[np.sin(yaw),np.cos(yaw)]])

    # rotate translation vector
    T_rot = R*T

    # calculate ground truth at the position of the Velodyne LIDAR
    posUtmTrans = posUtm-np.transpose(T_rot)

    # add position x y yaw to list
    groundTruth.append(np.matrix([posUtmTrans[0,0], posUtmTrans[0,1], pose.packet.yaw]))

# save ground truth trajectory
groundTruth = np.vstack(groundTruth)
np.savetxt(pathSave+'groundTruth.txt',groundTruth,delimiter=',',fmt='%1.3f')

print('Finished')
```

### Exporting the Initial Position

The initial position is equal to the first position of the ground truth trajectory. To retain the data structure when no ground truth trajectory is available, the initial position is stored in a separate file.
```
np.savetxt(pathSave+'firstPose.txt',groundTruth[0],delimiter=',',fmt='%1.3f')
```

### Exporting the Point Clouds

The point clouds are stored in the coordinate system of the Velodyne laser scanner, just as they are in the KITTI data structure. For the SLAM, the point clouds should be stored in the binary NumPy format; this significantly reduces loading times compared to storing them as .txt. The point clouds can additionally be written out as text files, for example to view them with [CloudCompare](http://www.danielgm.net/cc/).

```
printTxt = False

ii = 0
for scan in dataset.velo:
    # get pointcloud (x y z intensity) and delete intensity
    pointcloud = np.asarray(scan)
    pointcloud = np.delete(pointcloud,3,1)

    # save pointcloud as binary
    np.save(pathSave+'pointcloudNP_'+str(ii),pointcloud)

    # save pointcloud as comma separated txt
    if printTxt:
        np.savetxt(pathSave+'pointcloud_'+str(ii)+'.txt',pointcloud,delimiter=',',fmt='%1.3f')

    # show update
    if ii%100 == 0:
        print('Process measurement: '+str(ii))
    ii = ii + 1

print('Finished: '+str(ii)+' measurements')
```
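The ground-truth export above hinges on one small transformation: rotate the IMU-to-Velodyne translation by the vehicle yaw, then subtract it from the UTM position. A minimal sketch of that step — the translation and positions here are made-up example values, not the real KITTI calibration:

```python
import numpy as np

def shift_to_lidar(pos_utm, yaw, t_velo_imu):
    # Rotate the IMU->Velodyne translation by the vehicle yaw,
    # then move the GPS/IMU position to the LIDAR origin
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    return pos_utm - R @ t_velo_imu

# With yaw = 90 degrees, a translation of (1, 0) is rotated to (0, 1),
# so the position shifts by one unit in -y
pos = np.array([100.0, 200.0])
shifted = shift_to_lidar(pos, np.pi / 2, np.array([1.0, 0.0]))
print(shifted)
```

Without the rotation, the offset between GPS antenna and scanner would be applied in a fixed world direction rather than in the vehicle frame, and the exported trajectory would wobble around the true scanner path as the car turns.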
<h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><span><a href="#Load-saved-models-for-both-people" data-toc-modified-id="Load-saved-models-for-both-people-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Load saved models for both people</a></span></li><li><span><a href="#Load-Spark-Dataframe" data-toc-modified-id="Load-Spark-Dataframe-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Load Spark Dataframe</a></span></li><li><span><a href="#Plot-Total-Screen-Time-for-Multiple-People-by-Show" data-toc-modified-id="Plot-Total-Screen-Time-for-Multiple-People-by-Show-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Plot Total Screen Time for Multiple People by Show</a></span><ul class="toc-item"><li><span><a href="#Plot-Difference-in-Screen-Time-Between-Pairs-of-People-By-Show" data-toc-modified-id="Plot-Difference-in-Screen-Time-Between-Pairs-of-People-By-Show-3.1"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>Plot Difference in Screen Time Between Pairs of People By Show</a></span></li></ul></li><li><span><a href="#Compare-Screen-Time-Over-Time-For-Multiple-People-on-a-Single-Show" data-toc-modified-id="Compare-Screen-Time-Over-Time-For-Multiple-People-on-a-Single-Show-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Compare Screen Time Over Time For Multiple People on a Single Show</a></span></li><li><span><a href="#Co-occurence-on-Screen" data-toc-modified-id="Co-occurence-on-Screen-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>Co-occurence on Screen</a></span></li></ul></div> ``` from esper.stdlib import * from esper.prelude import * from esper.identity import * from esper.spark_identity import * from esper.spark_util import * from esper.validation import * people = ['Donald Trump', 'Hillary Clinton', 'Bernie Sanders'] ``` # Load saved models for both people ``` def load_model(name): print('Loading model for {}.'.format(name)) model = FaceIdentityModel.load_from_gcs(name=name) 
imshow(tile_imgs([ cv2.resize(x[1][0], (200, 200)) for x in model.model_params['images']], cols=10 )) plt.show() plot_precision_and_cdf(model) return model face_models = [load_model(x) for x in people] ``` # Load Spark Dataframe ``` face_identities = get_face_identities() print('Schema:', face_identities) ``` # Plot Total Screen Time for Multiple People by Show ``` date_range = ['2016-01-01', '2016-11-09'] screen_time_by_canonical_show = [ get_screen_time_by_canonical_show_spark( name.lower(), face_identities.where(face_identities.in_commercial == False), date_range=date_range ) for name in people ] plot_screen_time_by_show(people, screen_time_by_canonical_show) ``` ## Plot Difference in Screen Time Between Pairs of People By Show ``` from itertools import combinations for i, j in combinations(range(len(people)), 2): plot_difference_in_screen_time_by_show( [x.lower() for x in [people[i], people[j]]], [screen_time_by_canonical_show[i], screen_time_by_canonical_show[j]], plot_proportion=False ) ``` # Compare Screen Time Over Time For Multiple People on a Single Show ``` canonical_show_name = 'MSNBC Live' face_identities_filtered = face_identities.where( face_identities.canonical_show_id == CanonicalShow.objects.get(name=canonical_show_name).id ) screen_times_by_video = [ { vid : st for vid, (st, var) in get_screen_time_by_video_spark( name.lower(), face_identities_filtered, date_range=date_range ).items() } for name in people ] plot_screentime_over_time(people, canonical_show_name, screen_times_by_video) ``` # Co-occurence on Screen ``` get_person_in_shot_similarity_spark( [x.lower() for x in people], face_identities, date_range=date_range ) ```
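The pairwise plots above go through esper's Spark helpers, but the underlying bookkeeping is just a per-show subtraction for each unordered pair of people. A toy sketch of that arithmetic, with invented show names and numbers (illustrative only, not real screen-time data):

```python
from itertools import combinations

# Toy per-show screen time in seconds; all values here are invented.
screen_time = {
    'Donald Trump':    {'MSNBC Live': 3600, 'CNN Newsroom': 2400},
    'Hillary Clinton': {'MSNBC Live': 3000, 'CNN Newsroom': 2700},
    'Bernie Sanders':  {'MSNBC Live': 1500},
}

def difference_by_show(a, b, data):
    """Per-show screen-time difference: person a minus person b."""
    shows = set(data[a]) | set(data[b])
    return {s: data[a].get(s, 0) - data[b].get(s, 0) for s in shows}

# One difference table per unordered pair, as in the combinations loop above.
pair_diffs = {
    (a, b): difference_by_show(a, b, screen_time)
    for a, b in combinations(screen_time, 2)
}
```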
``` import numpy as np vocab = {'<PAD>': 0, 'is': 1, 'it': 2, 'too': 3, 'late': 4, 'now': 5, 'say': 6, 'sorry': 7, 'ooh': 8, 'yeah': 9} X = [[0, 1, 2, 3, 4, 5, 6], [7, 7], [6, 8]] # get the length of each sentence X_lengths = [len(sentence) for sentence in X] # create an empty matrix with padding tokens pad_token = vocab['<PAD>'] longest_sent = max(X_lengths) batch_size = len(X) padded_X = np.ones((batch_size, longest_sent)) * pad_token # copy over the actual sequences for i, x_len in enumerate(X_lengths): sequence = X[i] padded_X[i, 0:x_len] = sequence[:x_len] padded_X XX = [x+1 for sublist in X for x in sublist] X[0] + 1 from pycocotools.coco import COCO coco = COCO('captions_train2014.json') ids = coco.anns.keys() for key, item in coco.anns.items(): print (key, item) break coco.anns[48] len(ids) from generate_vocab_dict import Vocabulary import pickle with open('./vocab.pkl', 'rb') as f: vocab = pickle.load(f) import nltk from collections import Counter from pycocotools.coco import COCO import logging import numpy as np coco = COCO("./captions_train2014.json") ids = coco.anns.keys() counter = Counter() for i, id in enumerate(ids): caption = str(coco.anns[id]['caption']) tokens = nltk.tokenize.word_tokenize(caption.lower()) counter.update(tokens) if (i+1) % 5000 == 0: print("Tokenization Process: {0:.2f}%.".format((i+1)*100/len(ids))) #logger.info("Tokenization Process: {0:.2f}%.".format((i+1)*100/len(ids))) # Keep the most frequently appeared words counts = [] for _, count in counter.items(): counts.append(count) counts.sort(reverse=True) len(counts) portion = 0.993 cum_ratio = np.cumsum(counts) / np.sum(counts) threshold = min(4,counts[np.argmax(cum_ratio > portion)]) threshold words = [] for word, count in counter.items(): if count >= threshold: words.append(word) words.sort() len(words) import pickle from generate_vocab_dict import Vocabulary vocab_path = './vocab.pkl' with open(vocab_path, 'rb') as f: vocab = pickle.load(f) ``` ## beam search ``` import 
torch k = 3; vocab_size=6 k_prev_words = torch.LongTensor([[0]] * k) complete_seqs = list() complete_seqs_scores = list() seqs = k_prev_words seqs top_k_scores = torch.zeros(k, 1) top_k_scores ``` ### image inputs ``` # repeat k times scores = torch.FloatTensor([[1,2,3,4,5,1], [1,2,3,4,5,1],[1,2,3,4,5,1]]) scores = top_k_scores.expand_as(scores) + scores scores top_k_scores, top_k_words = scores[0].topk(k, 0, True, True) top_k_scores, top_k_words prev_word_inds = top_k_words / vocab_size next_word_inds = top_k_words % vocab_size prev_word_inds, next_word_inds seqs = torch.cat([seqs[prev_word_inds], next_word_inds.unsqueeze(1)], dim=1) seqs incomplete_inds = [ind for ind, next_word in enumerate(next_word_inds) if next_word != 5] complete_inds = list(set(range(len(next_word_inds))) - set(incomplete_inds)) incomplete_inds,complete_inds if len(complete_inds) > 0: complete_seqs.extend(seqs[complete_inds].tolist()) complete_seqs_scores.extend(top_k_scores[complete_inds]) k -= len(complete_inds) # reduce beam length accordingly seqs = seqs[incomplete_inds] seqs top_k_scores = top_k_scores[incomplete_inds].unsqueeze(1) k_prev_words = next_word_inds[incomplete_inds].unsqueeze(1) top_k_scores, k_prev_words ``` ## text input ``` scores = torch.FloatTensor([[1,2,5,4,3,1], [1,5,3,4,2,1],[1,2,3,5,4,1]]) scores = top_k_scores.expand_as(scores) + scores scores ``` **row: previous**, **column: next** ``` top_k_scores, top_k_words = scores.view(-1).topk(k, 0, True, True) top_k_scores, top_k_words prev_word_inds = top_k_words / vocab_size next_word_inds = top_k_words % vocab_size prev_word_inds, next_word_inds incomplete_inds = [ind for ind, next_word in enumerate(next_word_inds) if next_word != 5] complete_inds = list(set(range(len(next_word_inds))) - set(incomplete_inds)) incomplete_inds,complete_inds if len(complete_inds) > 0: complete_seqs.extend(seqs[complete_inds].tolist()) complete_seqs_alpha.extend(seqs_alpha[complete_inds].tolist()) 
complete_seqs_scores.extend(top_k_scores[complete_inds]) k -= len(complete_inds) # reduce beam length accordingly seqs = torch.cat([seqs[prev_word_inds], next_word_inds.unsqueeze(1)], dim=1) seqs top_k_scores = top_k_scores[incomplete_inds].unsqueeze(1) k_prev_words = next_word_inds[incomplete_inds].unsqueeze(1) top_k_scores, k_prev_words ``` ## Meet the `<<end>>` ``` scores = torch.FloatTensor([[1,2,5,4,3,100], [1,5,3,4,2,1],[1,2,3,5,4,1]]) scores = top_k_scores.expand_as(scores) + scores scores top_k_scores, top_k_words = scores.view(-1).topk(k, 0, True, True) top_k_scores, top_k_words prev_word_inds = top_k_words / vocab_size next_word_inds = top_k_words % vocab_size prev_word_inds, next_word_inds incomplete_inds = [ind for ind, next_word in enumerate(next_word_inds) if next_word != 5] complete_inds = list(set(range(len(next_word_inds))) - set(incomplete_inds)) incomplete_inds,complete_inds if len(complete_inds) > 0: complete_seqs.extend(seqs[complete_inds].tolist()) complete_seqs_scores.extend(top_k_scores[complete_inds]) k -= len(complete_inds) # reduce beam length accordingly k complete_seqs, complete_seqs_scores seqs = torch.cat([seqs[prev_word_inds], next_word_inds.unsqueeze(1)], dim=1) seqs def beam_search(img_feature_embedding, decoder_path, beam_size=3, vocab=vocab): k = beam_size vocab_size=len(vocab) decoder = torch.load(decoder_path) k_prev_words = torch.LongTensor([[vocab.word2idx["<<start>>"]]] * k) complete_seqs = list() complete_seqs_scores = list() seqs = k_prev_words seqs vocab.word2idx["<<start>>"] len(vocab) ```
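The index arithmetic driving the cells above (flatten the `(k, vocab_size)` score table, take a global top-k, then recover the source beam with `// vocab_size` and the new token with `% vocab_size`) can be collected into one self-contained step. This is a plain-Python sketch of that bookkeeping only, not the notebook's full decoder:

```python
def beam_step(seqs, beam_scores, step_scores, vocab_size, end_token):
    """One beam expansion over a flattened (k x vocab_size) score table.

    seqs: list of k token lists; beam_scores: list of k running scores;
    step_scores: k rows of vocab_size per-token scores for this step.
    """
    k = len(seqs)
    # Add each beam's running score to its candidates, then flatten row-major
    # so that index i * vocab_size + j encodes (beam i, token j).
    flat = [(beam_scores[i] + s, i * vocab_size + j)
            for i, row in enumerate(step_scores)
            for j, s in enumerate(row)]
    new_seqs, new_scores, done = [], [], []
    for score, idx in sorted(flat, reverse=True)[:k]:
        prev = idx // vocab_size  # row: which beam the candidate extends
        nxt = idx % vocab_size    # column: which token gets appended
        new_seqs.append(seqs[prev] + [nxt])
        new_scores.append(score)
        done.append(nxt == end_token)  # beams that emitted the end token
    return new_seqs, new_scores, done
```

Completed beams (`done[i]` true) would then be moved to `complete_seqs` and `k` reduced, exactly as in the cells above.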
# ObJAX CIFAR10 example

This example is based on [cifar10_simple.py](https://github.com/google/objax/blob/master/examples/classify/img/cifar10_simple.py) with a few minor changes:

* it demonstrates how to do weight decay,
* it uses a Momentum optimizer with a learning rate schedule,
* it uses `tensorflow_datasets` instead of the `Keras` dataset.

It's recommended to run this notebook on a GPU. In Google Colab this can be set through the `Runtime -> Change runtime type` menu.

# Installation and Imports

```
%pip --quiet install objax

import math
import random

import jax
import jax.numpy as jn
from jax import lax
import numpy as np
import tensorflow_datasets as tfds

import objax
from objax.zoo.wide_resnet import WideResNet
```

## Parameters

```
base_learning_rate = 0.1  # Learning rate
lr_decay_epochs = 30      # How often to decay learning rate
lr_decay_factor = 0.2     # By how much to decay learning rate
weight_decay = 0.0005     # Weight decay
batch_size = 128          # Batch size
num_train_epochs = 100    # Number of training epochs
wrn_width = 2             # Width of WideResNet
wrn_depth = 28            # Depth of WideResNet
```

# Setup dataset and model

```
# Augmentation function for input data
def augment(x):  # x is NCHW
    """Random flip and random shift augmentation of image batch."""
    if random.random() < .5:
        x = x[:, :, :, ::-1]  # Flip the batch images left-right
    # Pixel-shift all images in the batch by up to 4 pixels in any direction:
    # offsets 0..8 into the 4-pixel reflect padding give shifts of -4..+4.
    x_pad = np.pad(x, [[0, 0], [0, 0], [4, 4], [4, 4]], 'reflect')
    rx, ry = np.random.randint(0, 9), np.random.randint(0, 9)
    x = x_pad[:, :, rx:rx + 32, ry:ry + 32]
    return x


# Data
data = tfds.as_numpy(tfds.load(name='cifar10', batch_size=-1))
x_train = data['train']['image'].transpose(0, 3, 1, 2) / 255.0
y_train = data['train']['label']
x_test = data['test']['image'].transpose(0, 3, 1, 2) / 255.0
y_test = data['test']['label']
del data

# Model
model = WideResNet(nin=3, nclass=10, depth=wrn_depth, width=wrn_width)
model_vars = model.vars()
weight_decay_vars = [v for k, v in model_vars.items() if k.endswith('.w')]

# Optimizer
opt = objax.optimizer.Momentum(model_vars, nesterov=True)

# Prediction operation
predict_op = lambda x: objax.functional.softmax(model(x, training=False))
predict_op = objax.Jit(predict_op, model_vars)

# Loss and training op
def loss_fn(x, label):
    logit = model(x, training=True)
    xe_loss = objax.functional.loss.cross_entropy_logits_sparse(logit, label).mean()
    wd_loss = sum((v.value ** 2).sum() for v in weight_decay_vars)
    return xe_loss + weight_decay * wd_loss

loss_gv = objax.GradValues(loss_fn, model.vars())

def train_op(x, y, learning_rate):
    grads, loss = loss_gv(x, y)
    opt(learning_rate, grads)
    return loss

all_vars = model_vars + opt.vars()
train_op = objax.Jit(train_op, all_vars)
```

**Model parameters**

```
print(model_vars)
```

# Training loop

```
def lr_schedule(epoch):
    return base_learning_rate * math.pow(lr_decay_factor, epoch // lr_decay_epochs)

num_train_examples = x_train.shape[0]
num_test_examples = x_test.shape[0]

for epoch in range(num_train_epochs):
    # Training
    example_indices = np.arange(num_train_examples)
    np.random.shuffle(example_indices)
    for idx in range(0, num_train_examples, batch_size):
        x = x_train[example_indices[idx:idx + batch_size]]
        y = y_train[example_indices[idx:idx + batch_size]]
        loss = train_op(augment(x), y, lr_schedule(epoch))[0]

    # Eval
    accuracy = 0
    for idx in range(0, num_test_examples, batch_size):
        x = x_test[idx:idx + batch_size]
        y = y_test[idx:idx + batch_size]
        p = predict_op(x)
        accuracy += (np.argmax(p, axis=1) == y).sum()
    accuracy /= num_test_examples
    print(f'Epoch {epoch+1:3} -- train loss {loss:.3f} test accuracy {accuracy*100:.1f}', flush=True)
```
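The schedule multiplies the rate by `lr_decay_factor` once every `lr_decay_epochs` epochs, so with the defaults the learning rate steps through 0.1, then 0.02, then 0.004. Restating the formula on its own (same constants as the Parameters cell) makes those step values easy to sanity-check:

```python
import math

base_learning_rate = 0.1  # same defaults as the Parameters cell
lr_decay_epochs = 30
lr_decay_factor = 0.2

def lr_schedule(epoch):
    # Piecewise-constant decay: the integer exponent only changes
    # once every lr_decay_epochs epochs.
    return base_learning_rate * math.pow(lr_decay_factor, epoch // lr_decay_epochs)
```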
*****

*****

# Exercises (with solutions)

### Read the data

To start with, read in the two data files representing the master source list and observations source list. The fields for the two tables are respectively documented in:

- [master_sources](http://cxc.harvard.edu/csc/columns/master.html)
- [obs_sources](http://cxc.harvard.edu/csc/columns/persrc.html)

```
from astropy.table import Table, join
import numpy as np
import matplotlib.pyplot as plt

master_sources = Table.read('data/cdfs_master_sources.fits')
obs_sources = Table.read('data/cdfs_obs_sources.fits')
```

**`master_sources`**

Each distinct X-ray source identified on the sky is represented in the catalog by a single "master source" entry and one or more "source observation" entries, one for each observation in which the source has been detected. The master source entry records the best estimates of the properties of a source, based on the data extracted from the set of observations in which the source has been detected. The subset of fields in our exercise table file are:

Name | Description
------ | ------------
msid | Master source ID
name | Source name in the Chandra catalog
ra | Source RA (deg)
dec | Source Dec (deg)

**`obs_sources`**

The individual source entries record all of the properties about a detection extracted from a single observation, as well as associated file-based data products, which are observation-specific. The subset of fields in our exercise table file are:

Name | Description
------ | ------------
obsid | Observation ID
obi | Observation interval
targname | Target name
gti_obs | Observation date
flux_aper_b | Broad band (0.5 - 7 keV) flux (erg/cm2/sec)
src_cnts_aper_b | Broad band source counts
ra_b | Source RA (deg)
dec_b | Source Dec (deg)
livetime | Observation duration (sec)
posid | Position ID
theta | Off-axis angle (arcmin)
msid | Master source ID

### Exploring the data

Do the following to explore the two tables:

- Display the data for each table in the IPython notebook using the normal way of showing the value of a variable.
- Get a list of the column names for each table. *Hint*: use `<TAB>` completion to easily discover all the attributes and methods, e.g. type `master_sources.` and then hit the `<TAB>` key. - Find the length of each table. - Find the column datatypes for each table. Normally one displays a table in IPython notebook by entering the variable name in a cell and pressing `shift-Enter`. In a terminal session the default method is using something like `print(my_table)`. In both cases the `Table` object prefers to display only a screenful of data to prevent having a zillion lines of output if the table is huge. If you really want to see all the data you can use the [Table.pprint](http://astropy.readthedocs.org/en/stable/api/astropy.table.Table.html#astropy.table.Table.pprint) method. If you are using a Jupyter notebook interface, try the `show_in_notebook()` method. - Display all the rows of the `master_sources` table using its `pprint()` method. - If you are working in a regular terminal window (not IPython notebook), try the `more()` method as well. ``` master_sources.pprint() obs_sources.show_in_notebook() ``` ### Modifying tables For our analysis we don't actually need the `obi` (observation interval) column in the `obs_sources` table. - Remove the `obi` column from the `obs_sources` table. The `gti_obs` column name is a bit obscure (GTI is a good time interval, FWIW). - Rename the `gti_obs` column to `obs_date`. It would be nice to have a count rate in addition to the source counts. - Add a new column `src_rate_aper_b` which is the source counts divided by observation duration in sec. 
Some of the sources have a negative net flux in the broad band ``` obs_sources.remove_column('obi') obs_sources.rename_column("gti_obs", "obs_date") obs_sources['src_rate_aper_b'] = obs_sources['src_cnts_aper_b'] / obs_sources['livetime'] ``` ### Looking at the observation source data For each source detected in an individual observation (in the `obs_sources` table), let's look at the source flux values. - Use the matplotlib [`hist()`]( http://matplotlib.org/api/pyplot_api.html?highlight=pyplot.hist#matplotlib.pyplot.hist) function to make a histogram of the source fluxes. Since the fluxes vary by orders of magnitude, use the `numpy.log10` to put the fluxes in log space. - Also make the same plot but using only sources within 4 arcmin of the center. *HINT*: use a boolean mask to select values of `theta` that are less than 4.0. ``` plt.figure() plt.hist(np.log10(obs_sources['flux_aper_b'])) plt.show() mask = obs_sources['theta'] < 4.0 plt.figure() plt.hist(np.log10(obs_sources[mask]['flux_aper_b'])) plt.show() ``` ### Join the master_sources and obs_sources tables The `master_sources` and `obs_sources` tables share a common `msid` column. What we now want is to join the master RA and Dec positions and master source names with the individual observations table. - Use the [table.join()](http://astropy.readthedocs.org/en/stable/table/operations.html#join) function to make a single table called `sources` that has the master RA, Dec, and name included for each observation source. *HINT*: the defaults for `keys` and `join_type='inner'` are correct in this case, so the simplest possible call to `join()` will work! - *Intermediate*: Is the length of the new `sources` the same as `obs_sources`? What happened? - *Advanced*: Make a scatter plot of the RA (x-axis) and Dec (y-axis) difference between the master source position and the observation source position. You'll need to use `coordinates`! 
```
sources = join(master_sources, obs_sources, join_type='inner')
len(sources), len(master_sources), len(obs_sources)
sources.colnames

from astropy.coordinates import SkyCoord
import astropy.units as u

# Both the master and observation RA/Dec columns are in degrees.
src_coord = SkyCoord(ra=sources['ra'], dec=sources['dec'], unit=(u.deg, u.deg))
obs_coord = SkyCoord(ra=sources['ra_b'], dec=sources['dec_b'], unit=(u.deg, u.deg))
d_ra = src_coord.ra - obs_coord.ra
d_dec = src_coord.dec - obs_coord.dec

plt.figure()
# convert degrees to arcsec
plt.scatter(d_ra.arcsec, d_dec.arcsec)
plt.show()
```

### Grouped properties of `sources`

Finally, we can look at the variability properties of sources in the CDFS using the [`group_by()`](http://astropy.readthedocs.org/en/stable/table/operations.html#id2) functionality. This method makes a new table in which all the sources with identical master ID are next to each other.

- Make a new table `g_sources` which is the `sources` table grouped by the `msid` key using the `group_by()` method.

The `g_sources` table is just a regular table with all the `sources` in a particular order. The attribute `g_sources.groups` is an object that provides access to the `msid` sub-groups. You can access the $i^{th}$ group with `g_sources.groups[i]`. In addition the `g_sources.groups.indices` attribute is an array with the indices of the group boundaries.

- Using `np.diff()` find the number of repeat observations of each master source. *HINT*: use the indices, Luke.
- Print the 50th group and note which columns are the same for all group members and which are different. Does this make sense? In these few observations how many different target names were provided by observers?

```
g_sources = sources.group_by('msid')
np.diff(g_sources.groups.indices)
g_sources.groups[50]
```

### Aggregation

The real power of grouping comes in the ability to create aggregate values for each of the groups, for instance the mean flux for each unique source.
This is done with the [`aggregate()`](http://astropy.readthedocs.org/en/stable/table/operations.html#aggregation) method, which takes a function reference as its input. This function must take as input an array of values and return a single value. Aggregate returns a new table that has a length equal to the number of groups. - Compute the mean of all columns for each unique source (i.e. each group) using `aggregate` and the `np.mean` function. Call this table `g_sources_mean`. - Notice that aggregation cannot form a mean for certain columns and these are dropped from the output. Use the `join()` function to restore the `master_sources` information to `g_sources_mean`. ``` g_sources_mean = join(g_sources.groups.aggregate(np.mean), master_sources, keys=['msid'], join_type='inner') g_sources_mean ``` [Back to top](#Tables-introduction)
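The `groups.indices` array used above marks the group boundaries in the sorted table, which is why `np.diff` over it yields the number of rows per master source. The same trick works on any sorted key array, astropy aside; a toy sketch:

```python
import numpy as np

# Sorted group keys, as group_by('msid') would order the table rows.
msid = np.array([101, 101, 101, 205, 205, 300])

# Group boundaries: positions where the key changes, plus both ends --
# the same information astropy exposes as g_sources.groups.indices.
change = np.flatnonzero(msid[1:] != msid[:-1]) + 1
indices = np.concatenate(([0], change, [len(msid)]))

# np.diff over the boundaries gives the number of rows in each group.
group_sizes = np.diff(indices)
```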
# Pretty-printing DNA and protein sequences with `monoseq` [`monoseq`](https://github.com/martijnvermaat/monoseq/) is a Python library for pretty-printing DNA and protein sequences using a monospace font. It also provides a simple command line interface. Sequences are pretty-printed in the traditional way using blocks of letters where each line is prefixed with the sequence position. User-specified regions are highlighted and the output format can be HTML or plaintext with optional styling using ANSI escape codes for use in a terminal. Here we show how `monoseq` can be used in the IPython Notebook environment. See the `monoseq` [documentation](https://monoseq.readthedocs.org/) for more. **Note:** Some applications (e.g., GitHub) will not show the annotation styling in this notebook. [View this notebook on nbviewer](http://nbviewer.ipython.org/github/martijnvermaat/monoseq/blob/master/doc/monoseq.ipynb) to see all styling. ## Use in the IPython Notebook If you haven't already done so, install `monoseq` using `pip`. pip install monoseq The `monoseq.ipynb` module provides `Seq`, a convenience wrapper around `monoseq.pprint_sequence` providing easy printing of sequence strings in an IPython Notebook. ``` from monoseq.ipynb import Seq s = ('cgcactcaaaacaaaggaagaccgtcctcgactgcagaggaagcaggaagctgtc' 'ggcccagctctgagcccagctgctggagccccgagcagcggcatggagtccgtgg' 'ccctgtacagctttcaggctacagagagcgacgagctggccttcaacaagggaga' 'cacactcaagatcctgaacatggaggatgaccagaactggtacaaggccgagctc' 'cggggtgtcgagggatttattcccaagaactacatccgcgtcaag') Seq(s) ``` ### Block and line lengths We can change the number of characters per block and the number of blocks per line. ``` Seq(s, block_length=8, blocks_per_line=8) ``` ### Annotations Let's say we want to highlight two subsequences because they are conserved between species. We define each region as a tuple *start,stop* (zero-based, stop not included) and include this in the *annotation* argument. 
```
conserved = [(11, 37), (222, 247)]

Seq(s, annotations=[conserved])
```

As a contrived example to show several levels of annotation, let's also annotate every 12th character and the middle third of the sequence.

```
twelves = [(p, p + 1) for p in range(11, len(s), 12)]
# Use integer division so the annotation positions are ints under Python 3.
middle = [(len(s) // 3, len(s) // 3 * 2)]

Seq(s, annotations=[conserved, twelves, middle])
```

### Custom styling

The default CSS that is applied can be overridden with the *style* argument.

```
style = """
{selector} {{ background: beige; color: gray }}
{selector} .monoseq-margin {{ font-style: italic; color: green }}
{selector} .monoseq-annotation-0 {{ color: blue; font-weight: bold }}
"""

Seq(s, style=style, annotations=[conserved])
```

See the string in `monoseq.ipynb.DEFAULT_STYLE` for a longer example.
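The block layout itself is essentially position-prefixed string slicing. A minimal sketch of that idea (a simplification for illustration, not monoseq's actual implementation, which also handles annotations and styling):

```python
def format_blocks(seq, block_length=10, blocks_per_line=6):
    """Render seq as position-prefixed lines of space-separated blocks."""
    per_line = block_length * blocks_per_line
    lines = []
    for start in range(0, len(seq), per_line):
        chunk = seq[start:start + per_line]
        blocks = [chunk[i:i + block_length]
                  for i in range(0, len(chunk), block_length)]
        # One-based sequence position in the left margin.
        lines.append('{:>6}  {}'.format(start + 1, ' '.join(blocks)))
    return '\n'.join(lines)
```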
### fast local attention + hidden states #### learning rate = 0.001, windows size 5, batch_size = 128, num_gpus = 6 ``` import tensorflow as tf tf.logging.set_verbosity(tf.logging.WARN) import pickle import numpy as np import os from sklearn.model_selection import train_test_split from sklearn.metrics import f1_score from sklearn.metrics import accuracy_score import os from tensorflow.python.client import device_lib from collections import Counter import time VERY_BIG_NUMBER = 1e30 f = open('../../Glove/word_embedding_glove', 'rb') word_embedding = pickle.load(f) f.close() word_embedding = word_embedding[: len(word_embedding)-1] f = open('../../Glove/vocab_glove', 'rb') vocab = pickle.load(f) f.close() word2id = dict((w, i) for i,w in enumerate(vocab)) id2word = dict((i, w) for i,w in enumerate(vocab)) unknown_token = "UNKNOWN_TOKEN" # Model Description model_name = 'model-aw-lex-local-att-fast-v2-6' model_dir = '../output/all-word/' + model_name save_dir = os.path.join(model_dir, "save/") log_dir = os.path.join(model_dir, "log") if not os.path.exists(model_dir): os.mkdir(model_dir) if not os.path.exists(save_dir): os.mkdir(save_dir) if not os.path.exists(log_dir): os.mkdir(log_dir) with open('../../../dataset/train_val_data_fine/all_word_lex','rb') as f: train_data, val_data = pickle.load(f) # Parameters mode = 'train' num_senses = 45 num_pos = 12 batch_size = 128 vocab_size = len(vocab) unk_vocab_size = 1 word_emb_size = len(word_embedding[0]) max_sent_size = 200 hidden_size = 256 num_filter = 256 window_size = 5 kernel_size = 5 keep_prob = 0.3 l2_lambda = 0.001 init_lr = 0.001 decay_steps = 500 decay_rate = 0.99 clip_norm = 1 clipping = True moving_avg_deacy = 0.999 num_gpus = 6 width = int(window_size/2) def average_gradients(tower_grads): average_grads = [] for grad_and_vars in zip(*tower_grads): # Note that each grad_and_vars looks like the following: # ((grad0_gpu0, var0_gpu0), ... 
, (grad0_gpuN, var0_gpuN)) grads = [] for g, _ in grad_and_vars: # Add 0 dimension to the gradients to represent the tower. expanded_g = tf.expand_dims(g, 0) # Append on a 'tower' dimension which we will average over below. grads.append(expanded_g) # Average over the 'tower' dimension. grad = tf.concat(grads, 0) grad = tf.reduce_mean(grad, axis=0) # Keep in mind that the Variables are redundant because they are shared # across towers. So .. we will just return the first tower's pointer to # the Variable. v = grad_and_vars[0][1] grad_and_var = (grad, v) average_grads.append(grad_and_var) return average_grads # MODEL device_num = 0 tower_grads = [] losses = [] predictions = [] predictions_pos = [] x = tf.placeholder('int32', [num_gpus, batch_size, max_sent_size], name="x") y = tf.placeholder('int32', [num_gpus, batch_size, max_sent_size], name="y") y_pos = tf.placeholder('int32', [num_gpus, batch_size, max_sent_size], name="y") x_mask = tf.placeholder('bool', [num_gpus, batch_size, max_sent_size], name='x_mask') sense_mask = tf.placeholder('bool', [num_gpus, batch_size, max_sent_size], name='sense_mask') is_train = tf.placeholder('bool', [], name='is_train') word_emb_mat = tf.placeholder('float', [None, word_emb_size], name='emb_mat') input_keep_prob = tf.cond(is_train,lambda:keep_prob, lambda:tf.constant(1.0)) pretrain = tf.placeholder('bool', [], name="pretrain") global_step = tf.Variable(0, trainable=False, name="global_step") learning_rate = tf.train.exponential_decay(init_lr, global_step, decay_steps, decay_rate, staircase=True) summaries = [] def global_attention(input_x, input_mask, W_att): h_masked = tf.boolean_mask(input_x, input_mask) h_tanh = tf.tanh(h_masked) u = tf.matmul(h_tanh, W_att) a = tf.nn.softmax(u) c = tf.reduce_sum(tf.multiply(h_masked, a), axis=0) return c with tf.variable_scope("word_embedding"): unk_word_emb_mat = tf.get_variable("word_emb_mat", dtype='float', shape=[unk_vocab_size, word_emb_size], 
initializer=tf.contrib.layers.xavier_initializer(uniform=True, seed=0, dtype=tf.float32)) final_word_emb_mat = tf.concat([word_emb_mat, unk_word_emb_mat], 0) with tf.variable_scope(tf.get_variable_scope()): for gpu_idx in range(num_gpus): if gpu_idx>=3: device_num = 1 with tf.name_scope("model_{}".format(gpu_idx)) as scope, tf.device('/gpu:%d' % device_num): if gpu_idx > 0: tf.get_variable_scope().reuse_variables() with tf.name_scope("word"): Wx = tf.nn.embedding_lookup(final_word_emb_mat, x[gpu_idx]) float_x_mask = tf.cast(x_mask[gpu_idx], 'float') x_len = tf.reduce_sum(tf.cast(x_mask[gpu_idx], 'int32'), axis=1) with tf.variable_scope("lstm1"): cell_fw1 = tf.contrib.rnn.BasicLSTMCell(hidden_size,state_is_tuple=True) cell_bw1 = tf.contrib.rnn.BasicLSTMCell(hidden_size,state_is_tuple=True) d_cell_fw1 = tf.contrib.rnn.DropoutWrapper(cell_fw1, input_keep_prob=input_keep_prob) d_cell_bw1 = tf.contrib.rnn.DropoutWrapper(cell_bw1, input_keep_prob=input_keep_prob) (fw_h1, bw_h1), _ = tf.nn.bidirectional_dynamic_rnn(d_cell_fw1, d_cell_bw1, Wx, sequence_length=x_len, dtype='float', scope='lstm1') h1 = tf.concat([fw_h1, bw_h1], 2) with tf.variable_scope("lstm2"): cell_fw2 = tf.contrib.rnn.BasicLSTMCell(hidden_size,state_is_tuple=True) cell_bw2 = tf.contrib.rnn.BasicLSTMCell(hidden_size,state_is_tuple=True) d_cell_fw2 = tf.contrib.rnn.DropoutWrapper(cell_fw2, input_keep_prob=input_keep_prob) d_cell_bw2 = tf.contrib.rnn.DropoutWrapper(cell_bw2, input_keep_prob=input_keep_prob) (fw_h2, bw_h2), _ = tf.nn.bidirectional_dynamic_rnn(d_cell_fw2, d_cell_bw2, h1, sequence_length=x_len, dtype='float', scope='lstm2') h = tf.concat([fw_h2, bw_h2], 2) with tf.variable_scope("local_attention"): W_att_local = tf.get_variable("W_att_local", shape=[2*hidden_size, 1], initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1, seed=gpu_idx*10)) flat_h = tf.reshape(h, [batch_size*max_sent_size, tf.shape(h)[2]]) h_tanh = tf.tanh(flat_h) u_flat = tf.matmul(h_tanh, W_att_local) u_local = 
tf.reshape(u_flat, [batch_size, max_sent_size]) final_u = (tf.cast(x_mask[gpu_idx], 'float') -1)*VERY_BIG_NUMBER + u_local c_local = tf.map_fn(lambda i:tf.reduce_sum(tf.multiply(h[:, tf.maximum(0, i-width-1):tf.minimum(1+width+i, max_sent_size)], tf.expand_dims(tf.nn.softmax(final_u[:, tf.maximum(0, i-width-1):tf.minimum(1+width+i, max_sent_size)], 1), 2)), axis=1), tf.range(max_sent_size), dtype=tf.float32) c_local = tf.transpose(c_local, perm=[1,0,2]) h_final = tf.concat([tf.multiply(c_local, tf.expand_dims(float_x_mask, 2)), h], 2) flat_h_final = tf.reshape(h_final, [-1, tf.shape(h_final)[2]]) with tf.variable_scope("hidden_layer"): W = tf.get_variable("W", shape=[4*hidden_size, 2*hidden_size], initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1, seed=gpu_idx*20)) b = tf.get_variable("b", shape=[2*hidden_size], initializer=tf.zeros_initializer()) drop_flat_h_final = tf.nn.dropout(flat_h_final, input_keep_prob) flat_hl = tf.matmul(drop_flat_h_final, W) + b with tf.variable_scope("softmax_layer"): W = tf.get_variable("W", shape=[2*hidden_size, num_senses], initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1, seed=gpu_idx*20)) b = tf.get_variable("b", shape=[num_senses], initializer=tf.zeros_initializer()) drop_flat_hl = tf.nn.dropout(flat_hl, input_keep_prob) flat_logits_sense = tf.matmul(drop_flat_hl, W) + b logits = tf.reshape(flat_logits_sense, [batch_size, max_sent_size, num_senses]) predictions.append(tf.argmax(logits, 2)) with tf.variable_scope("softmax_layer_pos"): W = tf.get_variable("W", shape=[2*hidden_size, num_pos], initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1, seed=gpu_idx*30)) b = tf.get_variable("b", shape=[num_pos], initializer=tf.zeros_initializer()) flat_h1 = tf.reshape(h1, [-1, tf.shape(h1)[2]]) drop_flat_hl = tf.nn.dropout(flat_hl, input_keep_prob) flat_logits_pos = tf.matmul(drop_flat_hl, W) + b logits_pos = tf.reshape(flat_logits_pos, [batch_size, max_sent_size, num_pos]) 
predictions_pos.append(tf.argmax(logits_pos, 2)) float_sense_mask = tf.cast(sense_mask[gpu_idx], 'float') loss = tf.contrib.seq2seq.sequence_loss(logits, y[gpu_idx], float_sense_mask, name="loss") loss_pos = tf.contrib.seq2seq.sequence_loss(logits_pos, y_pos[gpu_idx], float_x_mask, name="loss_") l2_loss = l2_lambda * tf.losses.get_regularization_loss() total_loss = tf.cond(pretrain, lambda:loss_pos, lambda:loss + loss_pos + l2_loss) summaries.append(tf.summary.scalar("loss_{}".format(gpu_idx), loss)) summaries.append(tf.summary.scalar("loss_pos_{}".format(gpu_idx), loss_pos)) summaries.append(tf.summary.scalar("total_loss_{}".format(gpu_idx), total_loss)) optimizer = tf.train.AdamOptimizer(learning_rate) grads_vars = optimizer.compute_gradients(total_loss) clipped_grads = grads_vars if(clipping == True): clipped_grads = [(tf.clip_by_norm(grad, clip_norm), var) for grad, var in clipped_grads] tower_grads.append(clipped_grads) losses.append(total_loss) with tf.device('/gpu:0'): tower_grads = average_gradients(tower_grads) losses = tf.add_n(losses)/len(losses) apply_grad_op = optimizer.apply_gradients(tower_grads, global_step=global_step) summaries.append(tf.summary.scalar('total_loss', losses)) summaries.append(tf.summary.scalar('learning_rate', learning_rate)) variable_averages = tf.train.ExponentialMovingAverage(moving_avg_deacy, global_step) variables_averages_op = variable_averages.apply(tf.trainable_variables()) train_op = tf.group(apply_grad_op, variables_averages_op) saver = tf.train.Saver(tf.global_variables()) summary = tf.summary.merge(summaries) os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152 os.environ["CUDA_VISIBLE_DEVICES"]="1,2" # print (device_lib.list_local_devices()) config = tf.ConfigProto() config.gpu_options.allow_growth = True config.allow_soft_placement = True sess = tf.Session(config=config) sess.run(tf.global_variables_initializer()) # For initializing all the variables summary_writer = tf.summary.FileWriter(log_dir, sess.graph) 
# For writing Summaries save_period = 100 log_period = 100 def model(xx, yy, yy_pos, mask, smask, train_cond=True, pretrain_cond=False): num_batches = int(len(xx)/(batch_size*num_gpus)) _losses = 0 temp_loss = 0 preds_sense = [] true_sense = [] preds_pos = [] true_pos = [] for j in range(num_batches): s = j * batch_size * num_gpus e = (j+1) * batch_size * num_gpus xx_re = xx[s:e].reshape([num_gpus, batch_size, -1]) yy_re = yy[s:e].reshape([num_gpus, batch_size, -1]) yy_pos_re = yy_pos[s:e].reshape([num_gpus, batch_size, -1]) mask_re = mask[s:e].reshape([num_gpus, batch_size, -1]) smask_re = smask[s:e].reshape([num_gpus, batch_size, -1]) feed_dict = {x:xx_re, y:yy_re, y_pos:yy_pos_re, x_mask:mask_re, sense_mask:smask_re, pretrain:pretrain_cond, is_train:train_cond, input_keep_prob:keep_prob, word_emb_mat:word_embedding} if(train_cond==True): _, _loss, step, _summary = sess.run([train_op, losses, global_step, summary], feed_dict) summary_writer.add_summary(_summary, step) temp_loss += _loss if((j+1)%log_period==0): print("Steps: {}".format(step), "Loss:{0:.4f}".format(temp_loss/log_period), ", Current Loss: {0:.4f}".format(_loss)) temp_loss = 0 if((j+1)%save_period==0): saver.save(sess, save_path=save_dir) else: _loss, pred, pred_pos = sess.run([total_loss, predictions, predictions_pos], feed_dict) for i in range(num_gpus): preds_sense.append(pred[i][smask_re[i]]) true_sense.append(yy_re[i][smask_re[i]]) preds_pos.append(pred_pos[i][mask_re[i]]) true_pos.append(yy_pos_re[i][mask_re[i]]) _losses +=_loss if(train_cond==False): sense_preds = [] sense_true = [] pos_preds = [] pos_true = [] for preds in preds_sense: for ps in preds: sense_preds.append(ps) for trues in true_sense: for ts in trues: sense_true.append(ts) for preds in preds_pos: for ps in preds: pos_preds.append(ps) for trues in true_pos: for ts in trues: pos_true.append(ts) return _losses/num_batches, sense_preds, sense_true, pos_preds, pos_true return _losses/num_batches, step def eval_score(yy, pred, 
yy_pos, pred_pos): f1 = f1_score(yy, pred, average='macro') accu = accuracy_score(yy, pred) f1_pos = f1_score(yy_pos, pred_pos, average='macro') accu_pos = accuracy_score(yy_pos, pred_pos) return f1*100, accu*100, f1_pos*100, accu_pos*100 x_id_train = train_data['x'] mask_train = train_data['x_mask'] sense_mask_train = train_data['sense_mask'] y_train = train_data['y'] y_pos_train = train_data['pos'] x_id_val = val_data['x'] mask_val = val_data['x_mask'] sense_mask_val = val_data['sense_mask'] y_val = val_data['y'] y_pos_val = val_data['pos'] def testing(): start_time = time.time() val_loss, val_pred, val_true, val_pred_pos, val_true_pos = model(x_id_val, y_val, y_pos_val, mask_val, sense_mask_val, train_cond=False) f1_, accu_, f1_pos_, accu_pos_ = eval_score(val_true, val_pred, val_true_pos, val_pred_pos) time_taken = time.time() - start_time print("Val: F1 Score:{0:.2f}".format(f1_), "Accuracy:{0:.2f}".format(accu_), " POS: F1 Score:{0:.2f}".format(f1_pos_), "Accuracy:{0:.2f}".format(accu_pos_), "Loss:{0:.4f}".format(val_loss), ", Time: {0:.1f}".format(time_taken)) return f1_, accu_, f1_pos_, accu_pos_ def training(current_epoch, pre_train_cond): random = np.random.choice(len(y_train), size=(len(y_train)), replace=False) x_id_train_tmp = x_id_train[random] y_train_tmp = y_train[random] mask_train_tmp = mask_train[random] sense_mask_train_tmp = sense_mask_train[random] y_pos_train_tmp = y_pos_train[random] start_time = time.time() train_loss, step = model(x_id_train_tmp, y_train_tmp, y_pos_train_tmp, mask_train_tmp, sense_mask_train_tmp, pretrain_cond=pre_train_cond) time_taken = time.time() - start_time print("Epoch: {}".format(current_epoch+1),", Step: {}".format(step), ", loss: {0:.4f}".format(train_loss), ", Time: {0:.1f}".format(time_taken)) saver.save(sess, save_path=save_dir) print("Model Saved") return [step, train_loss] loss_collection = [] val_collection = [] num_epochs = 20 val_period = 2 # Pretraining POS Tags training(0, True) training(1, True) 
testing()

for i in range(num_epochs):
    loss_collection.append(training(i, False))
    if (i+1) % val_period == 0:
        val_collection.append(testing())

start_time = time.time()
train_loss, train_pred, train_true, train_pred_pos, train_true_pos = model(x_id_train, y_train, y_pos_train, mask_train, sense_mask_train, train_cond=False)
f1_, accu_, f1_pos_, accu_pos_ = eval_score(train_true, train_pred, train_true_pos, train_pred_pos)
time_taken = time.time() - start_time
print("Train: F1 Score:{0:.2f}".format(f1_), "Accuracy:{0:.2f}".format(accu_),
      " POS: F1 Score:{0:.2f}".format(f1_pos_), "Accuracy:{0:.2f}".format(accu_pos_),
      "Loss:{0:.4f}".format(train_loss), ", Time: {0:.1f}".format(time_taken))
saver.restore(sess, save_dir)
```
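The graph-construction code above calls `average_gradients(tower_grads)`, which is defined earlier in the notebook (outside this excerpt). For reference, here is a plain-NumPy sketch of what such a function computes — `average_tower_grads` is a hypothetical stand-in for illustration, not the notebook's TensorFlow implementation:

```
import numpy as np

def average_tower_grads(tower_grads):
    """Average per-variable gradients across towers (GPUs).

    `tower_grads` has one entry per tower; each entry is a list of
    (grad, var_name) pairs in the same variable order -- a plain-NumPy
    stand-in for the (gradient, variable) lists that TF1's
    `optimizer.compute_gradients` returns on each device.
    """
    averaged = []
    for per_var in zip(*tower_grads):  # group the towers' entries by variable
        grads = np.stack([g for g, _ in per_var])
        averaged.append((grads.mean(axis=0), per_var[0][1]))
    return averaged

# Two towers, two variables ('w' and 'b')
tower0 = [(np.array([1., 2.]), 'w'), (np.array([0.]), 'b')]
tower1 = [(np.array([3., 4.]), 'w'), (np.array([2.]), 'b')]
avg = average_tower_grads([tower0, tower1])
```

In the notebook the same grouping is done on TF tensors, and the averaged (gradient, variable) list is fed to `optimizer.apply_gradients` on `/gpu:0`.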
```
#default_exp foundation
#export
from fastcore.imports import *
from fastcore.test import *
from nbdev.showdoc import *
```

# Core

> Basic functions used in the fastai library

```
# export
defaults = SimpleNamespace()
```

## Metaclasses

See this [blog post](https://realpython.com/python-metaclasses/) for more information about metaclasses.

- `PrePostInitMeta` ensures that classes defined with it run `__pre_init__` and `__post_init__` (without having to write `self.__pre_init__()` and `self.__post_init__()` in the actual `__init__`)
- `NewChkMeta` gives the `PrePostInitMeta` functionality and ensures classes defined with it don't re-create an object of their type when one is passed to the constructor
- `BypassNewMeta` ensures classes defined with it can easily be cast from objects they subclass.

```
#export
def _rm_self(sig):
    sigd = dict(sig.parameters)
    sigd.pop('self')
    return sig.replace(parameters=sigd.values())

#export
class FixSigMeta(type):
    "A metaclass that fixes the signature on classes that override __new__"
    def __new__(cls, name, bases, dict):
        res = super().__new__(cls, name, bases, dict)
        if res.__init__ is not object.__init__: res.__signature__ = _rm_self(inspect.signature(res.__init__))
        return res
```

When you inherit from a class that defines `__new__`, or from a metaclass that defines `__call__`, the signature of your `__init__` method is obfuscated so that tab completion no longer works. `FixSigMeta` fixes this issue and restores signatures.
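`FixSigMeta` works by rebuilding the class signature from `__init__` (minus `self`) and attaching it as `__signature__`, which `inspect.signature` consults before anything else. A minimal standalone sketch of that mechanism — `fix_sig` here is a hypothetical helper for illustration, not part of fastcore:

```
import inspect

class Obscured:
    "A class whose reported signature we want to control."
    def __init__(self, d, e, f): pass

def fix_sig(cls):
    # Mirror what FixSigMeta does: take __init__'s signature, drop
    # `self`, and attach the result as `__signature__` on the class.
    params = dict(inspect.signature(cls.__init__).parameters)
    params.pop('self')
    cls.__signature__ = inspect.Signature(list(params.values()))
    return cls

fix_sig(Obscured)
sig = str(inspect.signature(Obscured))  # '(d, e, f)'
```

`FixSigMeta` simply runs this logic automatically in `__new__` for every class created with it.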
#### Example

You can inspect the signature of an object with `inspect.signature`:

```
class T:
    def __init__(self, a, b, c): pass

inspect.signature(T)
```

This corresponds to tab completion working in the normal way:

![screenshot](attachment:Screen%20Shot%202020-08-22%20at%208.14.20%20PM.png)

However, when you inherit from a class that defines `__new__`, or a metaclass that defines `__call__`, this obfuscates the signature by overriding your class with the signature of `__new__`, which prevents tab completion from displaying useful information:

```
class Foo:
    def __new__(self, **args): pass

class Bar(Foo):
    def __init__(self, d, e, f): pass

inspect.signature(Bar)
```

![screenshot](attachment:Screen%20Shot%202020-08-22%20at%208.18.01%20PM.png)

Finally, the signature and tab completion can be restored by inheriting from the metaclass `FixSigMeta` as shown below:

```
class Bar(Foo, metaclass=FixSigMeta):
    def __init__(self, d, e, f): pass

test_sig(Bar, '(d, e, f)')
inspect.signature(Bar)
```

![screenshot](attachment:Screen%20Shot%202020-08-22%20at%208.23.32%20PM.png)

If you need to define a metaclass that overrides `__call__` (as done in `PrePostInitMeta`), you need to inherit from `FixSigMeta` instead of `type` when constructing the metaclass to preserve the signature in `__init__`. Be careful not to override `__new__` when doing this:

```
class TestMeta(FixSigMeta):
    # __new__ comes from FixSigMeta
    def __call__(cls, *args, **kwargs): pass

class T(metaclass=TestMeta):
    def __init__(self, a, b): pass

test_sig(T, '(a, b)')
```

On the other hand, if you fail to inherit from `FixSigMeta` when inheriting from a metaclass that overrides `__call__`, your signature will reflect that of `__call__` instead (which is often undesirable):

```
class GenericMeta(type):
    "A boilerplate metaclass that doesn't do anything for testing."
def __new__(cls, name, bases, dict): return super().__new__(cls, name, bases, dict) def __call__(cls, *args, **kwargs): pass class T2(metaclass=GenericMeta): def __init__(self, a, b): pass # We can avoid this by inheriting from the metaclass `FixSigMeta` test_sig(T2, '(*args, **kwargs)') #export class PrePostInitMeta(FixSigMeta): "A metaclass that calls optional `__pre_init__` and `__post_init__` methods" def __call__(cls, *args, **kwargs): res = cls.__new__(cls) if type(res)==cls: if hasattr(res,'__pre_init__'): res.__pre_init__(*args,**kwargs) res.__init__(*args,**kwargs) if hasattr(res,'__post_init__'): res.__post_init__(*args,**kwargs) return res show_doc(PrePostInitMeta, title_level=3) class _T(metaclass=PrePostInitMeta): def __pre_init__(self): self.a = 0; assert self.a==0 def __init__(self,b=0): self.a += 1; assert self.a==1 def __post_init__(self): self.a += 1; assert self.a==2 t = _T() test_eq(t.a, 2) #export class NewChkMeta(FixSigMeta): "Metaclass to avoid recreating object passed to constructor" def __call__(cls, x=None, *args, **kwargs): if not args and not kwargs and x is not None and isinstance(x,cls): x._newchk = 1 return x res = super().__call__(*((x,) + args), **kwargs) res._newchk = 0 return res class _T(metaclass=NewChkMeta): "Testing" def __init__(self, o=None, b=1): self.foo = getattr(o,'foo',0) + 1 self.b = b class _T2(): def __init__(self, o): self.foo = getattr(o,'foo',0) + 1 t = _T(1) test_eq(t.foo,1) t2 = _T(t) test_eq(t2.foo,1) test_is(t,t2) t3 = _T(t, b=2) test_eq(t3.b, 2) assert not t3 is t t = _T2(1) test_eq(t.foo,1) t2 = _T2(t) test_eq(t2.foo,2) test_eq(_T.__doc__, "Testing") test_eq(str(inspect.signature(_T)), '(o=None, b=1)') #export class BypassNewMeta(FixSigMeta): "Metaclass: casts `x` to this class if it's of type `cls._bypass_type`, initializing with `_new_meta` if available" def __call__(cls, x=None, *args, **kwargs): if hasattr(cls, '_new_meta'): x = cls._new_meta(x, *args, **kwargs) elif not 
isinstance(x,getattr(cls,'_bypass_type',object)) or len(args) or len(kwargs): x = super().__call__(*((x,)+args), **kwargs) if cls!=x.__class__: x.__class__ = cls return x class T0: pass class _T(T0, metaclass=BypassNewMeta): _bypass_type=T0 def __init__(self,x): self.x=x t = T0() t.a = 1 t2 = _T(t) test_eq(type(t2), _T) test_eq(t2.a,1) test_is(t2,t) t = _T(2) t.x = 2 ``` ## Foundational functions ``` #export def copy_func(f): "Copy a non-builtin function (NB `copy.copy` does not work for this)" if not isinstance(f,FunctionType): return copy(f) fn = FunctionType(f.__code__, f.__globals__, f.__name__, f.__defaults__, f.__closure__) fn.__dict__.update(f.__dict__) return fn #export def patch_to(cls, as_prop=False, cls_method=False): "Decorator: add `f` to `cls`" if not isinstance(cls, (tuple,list)): cls=(cls,) def _inner(f): for c_ in cls: nf = copy_func(f) # `functools.update_wrapper` when passing patched function to `Pipeline`, so we do it manually for o in functools.WRAPPER_ASSIGNMENTS: setattr(nf, o, getattr(f,o)) nf.__qualname__ = f"{c_.__name__}.{f.__name__}" if cls_method: setattr(c_, f.__name__, MethodType(nf, c_)) else: setattr(c_, f.__name__, property(nf) if as_prop else nf) return f return _inner class _T3(int): pass @patch_to(_T3) def func1(x, a): return x+a @patch_to(_T3, cls_method=True) def from_func2(cls, a): return cls(a) t = _T3(1) test_eq(t.func1(2), 3) t2 = _T3(2) test_eq(t2, 2) ``` If `cls` is a tuple, `f` is added to all types in the tuple. ``` class _T4(int): pass @patch_to((_T3,_T4)) def func2(x, a): return x+2*a t = _T3(1) test_eq(t.func2(1), 3) t = _T4(1) test_eq(t.func2(1), 3) #export def patch(f): "Decorator: add `f` to the first parameter's class (based on f's type annotations)" cls = next(iter(f.__annotations__.values())) return patch_to(cls)(f) @patch def func(x:_T3, a): "test" return x+a t = _T3(1) test_eq(t.func(3), 4) test_eq(t.func.__qualname__, '_T3.func') ``` If annotation is a tuple, the function is added to all types in the tuple. 
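As an aside, the `copy_func` helper defined at the top of this section never gets its own demo. Here is a standalone check of the behaviour its docstring claims, with the function body restated from above so the snippet runs on its own:

```
from copy import copy
from types import FunctionType

def copy_func(f):
    "Copy a non-builtin function (NB `copy.copy` does not work for this)"
    if not isinstance(f, FunctionType): return copy(f)
    fn = FunctionType(f.__code__, f.__globals__, f.__name__, f.__defaults__, f.__closure__)
    fn.__dict__.update(f.__dict__)
    return fn

def f(x): return x
f.tag = 'original'

# `copy.copy` treats functions as atomic and hands back the same object...
assert copy(f) is f

# ...while `copy_func` builds a new function sharing the same code
# but with an independent __dict__
g = copy_func(f)
g.tag = 'patched'
```

This is why `patch_to` above copies `f` once per class: each class must get its own function object so that setting `__qualname__` on one copy does not affect the others.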
``` @patch def func3(x:(_T3,_T4), a): "test" return x+2*a t = _T3(1) test_eq(t.func3(2), 5) test_eq(t.func3.__qualname__, '_T3.func3') t = _T4(1) test_eq(t.func3(2), 5) test_eq(t.func3.__qualname__, '_T4.func3') #export def patch_property(f): "Decorator: add `f` as a property to the first parameter's class (based on f's type annotations)" cls = next(iter(f.__annotations__.values())) return patch_to(cls, as_prop=True)(f) @patch_property def prop(x:_T3): return x+1 t = _T3(1) test_eq(t.prop, 2) #export def _mk_param(n,d=None): return inspect.Parameter(n, inspect.Parameter.KEYWORD_ONLY, default=d) def test_sig(f, b): test_eq(str(inspect.signature(f)), b) #export def use_kwargs_dict(keep=False, **kwargs): "Decorator: replace `**kwargs` in signature with `names` params" def _f(f): sig = inspect.signature(f) sigd = dict(sig.parameters) k = sigd.pop('kwargs') s2 = {n:_mk_param(n,d) for n,d in kwargs.items() if n not in sigd} sigd.update(s2) if keep: sigd['kwargs'] = k f.__signature__ = sig.replace(parameters=sigd.values()) return f return _f @use_kwargs_dict(y=1,z=None) def foo(a, b=1, **kwargs): pass test_sig(foo, '(a, b=1, *, y=1, z=None)') #export def use_kwargs(names, keep=False): "Decorator: replace `**kwargs` in signature with `names` params" def _f(f): sig = inspect.signature(f) sigd = dict(sig.parameters) k = sigd.pop('kwargs') s2 = {n:_mk_param(n) for n in names if n not in sigd} sigd.update(s2) if keep: sigd['kwargs'] = k f.__signature__ = sig.replace(parameters=sigd.values()) return f return _f @use_kwargs(['y', 'z']) def foo(a, b=1, **kwargs): pass test_sig(foo, '(a, b=1, *, y=None, z=None)') @use_kwargs(['y', 'z'], keep=True) def foo(a, *args, b=1, **kwargs): pass test_sig(foo, '(a, *args, b=1, y=None, z=None, **kwargs)') # export def delegates(to=None, keep=False, but=None): "Decorator: replace `**kwargs` in signature with params from `to`" if but is None: but = [] def _f(f): if to is None: to_f,from_f = f.__base__.__init__,f.__init__ else: to_f,from_f = 
to,f from_f = getattr(from_f,'__func__',from_f) to_f = getattr(to_f,'__func__',to_f) if hasattr(from_f,'__delwrap__'): return f sig = inspect.signature(from_f) sigd = dict(sig.parameters) k = sigd.pop('kwargs') s2 = {k:v for k,v in inspect.signature(to_f).parameters.items() if v.default != inspect.Parameter.empty and k not in sigd and k not in but} sigd.update(s2) if keep: sigd['kwargs'] = k else: from_f.__delwrap__ = to_f from_f.__signature__ = sig.replace(parameters=sigd.values()) return f return _f def basefoo(e, c=2): pass @delegates(basefoo) def foo(a, b=1, **kwargs): pass test_sig(foo, '(a, b=1, c=2)') @delegates(basefoo, keep=True) def foo(a, b=1, **kwargs): pass test_sig(foo, '(a, b=1, c=2, **kwargs)') @delegates(basefoo, but= ['c']) def foo(a, b=1, **kwargs): pass test_sig(foo, '(a, b=1)') class _T(): @classmethod def foo(cls, a=1, b=2): pass @classmethod @delegates(foo) def bar(cls, c=3, **kwargs): pass test_sig(_T.bar, '(c=3, a=1, b=2)') class BaseFoo: def __init__(self, e, c=2): pass @delegates() class Foo(BaseFoo): def __init__(self, a, b=1, **kwargs): super().__init__(**kwargs) test_sig(Foo, '(a, b=1, c=2)') #export def method(f): "Mark `f` as a method" # `1` is a dummy instance since Py3 doesn't allow `None` any more return MethodType(f, 1) #export def _funcs_kwargs(cls, as_method): old_init = cls.__init__ def _init(self, *args, **kwargs): for k in cls._methods: arg = kwargs.pop(k,None) if arg is not None: if as_method: arg = method(arg) if isinstance(arg,MethodType): arg = MethodType(arg.__func__, self) setattr(self, k, arg) old_init(self, *args, **kwargs) functools.update_wrapper(_init, old_init) cls.__init__ = use_kwargs(cls._methods)(_init) if hasattr(cls, '__signature__'): cls.__signature__ = _rm_self(inspect.signature(cls.__init__)) return cls #export def funcs_kwargs(as_method=False): "Replace methods in `cls._methods` with those from `kwargs`" if callable(as_method): return _funcs_kwargs(as_method, False) return partial(_funcs_kwargs, 
as_method=as_method) @funcs_kwargs class T: _methods=['b'] def __init__(self, f=1, **kwargs): assert not kwargs def a(self): return 1 def b(self): return 2 t = T() test_eq(t.a(), 1) test_eq(t.b(), 2) t = T(b = lambda:3) test_eq(t.b(), 3) test_sig(T, '(f=1, *, b=None)') test_fail(lambda: T(a = lambda:3)) def _f(self,a=1): return a+1 @funcs_kwargs(True) class T: _methods=['b'] t = T(b = _f) test_eq(t.b(2), 3) class T2(T): def __init__(self,a): super().__init__(b = lambda self:3) self.a=a t = T2(a=1) test_eq(t.b(), 3) test_sig(T2, '(a)') def _g(a=1): return a+1 class T3(T): b = staticmethod(_g) t = T3() test_eq(t.b(2), 3) #hide #test it works with PrePostInitMeta class A(metaclass=PrePostInitMeta): pass @funcs_kwargs class B(A): _methods = ['m1'] def __init__(self, **kwargs): pass test_sig(B, '(*, m1=None)') ``` Runtime type checking is handy, so let's make it easy! ``` @contextmanager def working_directory(path): "Change working directory to `path` and return to previous on exit." prev_cwd = Path.cwd() os.chdir(path) try: yield finally: os.chdir(prev_cwd) #export def add_docs(cls, cls_doc=None, **docs): "Copy values from `docs` to `cls` docstrings, and confirm all public methods are documented" if cls_doc is not None: cls.__doc__ = cls_doc for k,v in docs.items(): f = getattr(cls,k) if hasattr(f,'__func__'): f = f.__func__ # required for class methods f.__doc__ = v # List of public callables without docstring nodoc = [c for n,c in vars(cls).items() if callable(c) and not n.startswith('_') and c.__doc__ is None] assert not nodoc, f"Missing docs: {nodoc}" assert cls.__doc__ is not None, f"Missing class docs: {cls}" #export def docs(cls): "Decorator version of `add_docs`, using `_docs` dict" add_docs(cls, **cls._docs) return cls class _T: def f(self): pass @classmethod def g(cls): pass add_docs(_T, "a", f="f", g="g") test_eq(_T.__doc__, "a") test_eq(_T.f.__doc__, "f") test_eq(_T.g.__doc__, "g") #export def custom_dir(c, add:list): "Implement custom `__dir__`, adding 
`add` to `cls`" return dir(type(c)) + list(c.__dict__.keys()) + add show_doc(is_iter) assert is_iter([1]) assert not is_iter(array(1)) assert is_iter(array([1,2])) assert (o for o in range(3)) #export class _Arg: def __init__(self,i): self.i = i arg0 = _Arg(0) arg1 = _Arg(1) arg2 = _Arg(2) arg3 = _Arg(3) arg4 = _Arg(4) #export class bind: "Same as `partial`, except you can use `arg0` `arg1` etc param placeholders" def __init__(self, fn, *pargs, **pkwargs): self.fn,self.pargs,self.pkwargs = fn,pargs,pkwargs self.maxi = max((x.i for x in pargs if isinstance(x, _Arg)), default=-1) def __call__(self, *args, **kwargs): args = list(args) kwargs = {**self.pkwargs,**kwargs} for k,v in kwargs.items(): if isinstance(v,_Arg): kwargs[k] = args.pop(v.i) fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:] return self.fn(*fargs, **kwargs) def myfn(a,b,c,d=1,e=2): return(a,b,c,d,e) test_eq(bind(myfn, arg1, 17, arg0, e=3)(19,14), (14,17,19,1,3)) test_eq(bind(myfn, 17, arg0, e=3)(19,14), (17,19,14,1,3)) test_eq(bind(myfn, 17, e=3)(19,14), (17,19,14,1,3)) test_eq(bind(myfn)(17,19,14), (17,19,14,1,2)) test_eq(bind(myfn, 17,19,14,e=arg0)(3), (17,19,14,1,3)) ``` ## GetAttr - ``` #export class GetAttr: "Inherit from this to have all attr accesses in `self._xtra` passed down to `self.default`" _default='default' def _component_attr_filter(self,k): if k.startswith('__') or k in ('_xtra',self._default): return False xtra = getattr(self,'_xtra',None) return xtra is None or k in xtra def _dir(self): return [k for k in dir(getattr(self,self._default)) if self._component_attr_filter(k)] def __getattr__(self,k): if self._component_attr_filter(k): attr = getattr(self,self._default,None) if attr is not None: return getattr(attr,k) raise AttributeError(k) def __dir__(self): return custom_dir(self,self._dir()) # def __getstate__(self): return self.__dict__ def __setstate__(self,data): self.__dict__.update(data) ``` Inherit from `GetAttr` to have attr access 
passed down to an instance attribute. This makes it easy to create composites that don't require callers to know about their components. You can customise the behaviour of `GetAttr` in subclasses via:

- `_default`
    - By default, this is set to `'default'`, so attr access is passed down to `self.default`
    - `_default` can be set to the name of any instance attribute that does not start with dunder `__`
- `_xtra`
    - By default, this is `None`, so all attr access is passed down
    - You can limit which attrs get passed down by setting `_xtra` to a list of attribute names

```
class _C(GetAttr):
    # allow all attributes to get passed to `self.default` (by leaving _xtra=None)
    def __init__(self,a): self.default = a
    def foo(self): noop

t = _C('Hi')
test_eq(t.lower(), 'hi')
test_eq(t.upper(), 'HI')
assert 'lower' in dir(t)
assert 'upper' in dir(t)

class _C(GetAttr):
    _xtra = ['lower'] # specify which attributes get passed to `self.default`
    def __init__(self,a): self.default = a
    def foo(self): noop

t = _C('Hi')
test_eq(t.default, 'Hi')
test_eq(t.lower(), 'hi')
test_fail(lambda: t.upper())
assert 'lower' in dir(t)
assert 'upper' not in dir(t)

class _C(GetAttr):
    _default = '_data' # use different component name; `self._data` rather than `self.default`
    def __init__(self,a): self._data = a
    def foo(self): noop

t = _C('Hi')
test_eq(t._data, 'Hi')
test_eq(t.lower(), 'hi')
test_eq(t.upper(), 'HI')
assert 'lower' in dir(t)
assert 'upper' in dir(t)

class _C(GetAttr):
    _default = 'data' # use a bad component name; i.e. self.data does not exist
    def __init__(self,a): self.default = a
    def foo(self): noop

# TODO: should we raise an error when we create a new instance ...
t = _C('Hi')
test_eq(t.default, 'Hi')
# ...
or is it enough for all GetAttr features to raise errors test_fail(lambda: t.data) test_fail(lambda: t.lower()) test_fail(lambda: t.upper()) test_fail(lambda: dir(t)) #hide # I don't think this test is essential to the docs but it probably makes sense to # check that everything works when we set both _xtra and _default to non-default values class _C(GetAttr): _xtra = ['lower', 'upper'] _default = 'data' def __init__(self,a): self.data = a def foo(self): noop t = _C('Hi') test_eq(t.data, 'Hi') test_eq(t.lower(), 'hi') test_eq(t.upper(), 'HI') assert 'lower' in dir(t) assert 'upper' in dir(t) #hide # when consolidating the filter logic, I choose the previous logic from # __getattr__ k.startswith('__') rather than # _dir k.startswith('_'). class _C(GetAttr): def __init__(self): self.default = type('_D', (), {'_under': 1, '__dunder': 2})() t = _C() test_eq(t.default._under, 1) test_eq(t._under, 1) # _ prefix attr access is allowed on component assert '_under' in dir(t) test_eq(t.default.__dunder, 2) test_fail(lambda: t.__dunder) # __ prefix attr access is not allowed on component assert '__dunder' not in dir(t) assert t.__dir__ is not None # __ prefix attr access is allowed on composite assert '__dir__' in dir(t) class B: def __init__(self): self.a = A() @funcs_kwargs class A(GetAttr): wif=after_iter= noops _methods = 'wif after_iter'.split() _default = 'dataset' def __init__(self, **kwargs): pass a = A() b = A(wif=a.wif) #Failing test. 
TODO Jeremy, not sure what you were testing here #a = A() #b = A(wif=a.wif) #tst = pickle.dumps(b) #c = pickle.loads(tst) #export def delegate_attr(self, k, to): "Use in `__getattr__` to delegate to attr `to` without inheriting from `GetAttr`" if k.startswith('_') or k==to: raise AttributeError(k) try: return getattr(getattr(self,to), k) except AttributeError: raise AttributeError(k) from None class _C: f = 'Hi' def __getattr__(self, k): return delegate_attr(self, k, 'f') t = _C() test_eq(t.lower(), 'hi') ``` ## L - ``` #export def _is_array(x): return hasattr(x,'__array__') or hasattr(x,'iloc') def _listify(o): if o is None: return [] if isinstance(o, list): return o if isinstance(o, str) or _is_array(o): return [o] if is_iter(o): return list(o) return [o] # export def coll_repr(c, max_n=10): "String repr of up to `max_n` items of (possibly lazy) collection `c`" return f'(#{len(c)}) [' + ','.join(itertools.islice(map(repr,c), max_n)) + ( '...' if len(c)>10 else '') + ']' test_eq(coll_repr(range(1000), 5), '(#1000) [0,1,2,3,4...]') # export def mask2idxs(mask): "Convert bool mask or index list to index `L`" if isinstance(mask,slice): return mask mask = list(mask) if len(mask)==0: return [] it = mask[0] if hasattr(it,'item'): it = it.item() if isinstance(it,(bool,NoneType,np.bool_)): return [i for i,m in enumerate(mask) if m] return [int(i) for i in mask] # just for tests import torch test_eq(mask2idxs([False,True,False,True]), [1,3]) test_eq(mask2idxs(array([False,True,False,True])), [1,3]) test_eq(mask2idxs(torch.tensor([False,True,False,True])), [1,3]) test_eq(mask2idxs(array([1,2,3])), [1,2,3]) #export listable_types = typing.Collection,Generator,map,filter,zip #export class CollBase: "Base class for composing a list of `items`" def __init__(self, items): self.items = items def __len__(self): return len(self.items) def __getitem__(self, k): return self.items[list(k) if isinstance(k,CollBase) else k] def __setitem__(self, k, v): self.items[list(k) if 
isinstance(k,CollBase) else k] = v def __delitem__(self, i): del(self.items[i]) def __repr__(self): return self.items.__repr__() def __iter__(self): return self.items.__iter__() #export def cycle(o): "Like `itertools.cycle` except creates list of `None`s if `o` is empty" o = _listify(o) return itertools.cycle(o) if o is not None and len(o) > 0 else itertools.cycle([None]) test_eq(itertools.islice(cycle([1,2,3]),5), [1,2,3,1,2]) test_eq(itertools.islice(cycle([]),3), [None]*3) test_eq(itertools.islice(cycle(None),3), [None]*3) test_eq(itertools.islice(cycle(1),3), [1,1,1]) #export def zip_cycle(x, *args): "Like `itertools.zip_longest` but `cycle`s through elements of all but first argument" return zip(x, *map(cycle,args)) test_eq(zip_cycle([1,2,3,4],list('abc')), [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'a')]) #export def is_indexer(idx): "Test whether `idx` will index a single item in a list" return isinstance(idx,int) or not getattr(idx,'ndim',1) #export def negate_func(f): "Create new function that negates result of `f`" def _f(*args, **kwargs): return not f(*args, **kwargs) return _f def f(a): return a>0 test_eq(f(1),True) test_eq(negate_func(f)(1),False) test_eq(negate_func(f)(a=-1),True) #export class L(CollBase, metaclass=NewChkMeta): "Behaves like a list of `items` but can also index with list of indices or masks" _default='items' def __init__(self, items=None, *rest, use_list=False, match=None): if rest: items = (items,)+rest if items is None: items = [] if (use_list is not None) or not _is_array(items): items = list(items) if use_list else _listify(items) if match is not None: if is_coll(match): match = len(match) if len(items)==1: items = items*match else: assert len(items)==match, 'Match length mismatch' super().__init__(items) @property def _xtra(self): return None def _new(self, items, *args, **kwargs): return type(self)(items, *args, use_list=None, **kwargs) def __getitem__(self, idx): return self._get(idx) if is_indexer(idx) else L(self._get(idx), 
use_list=None) def copy(self): return self._new(self.items.copy()) def _get(self, i): if is_indexer(i) or isinstance(i,slice): return getattr(self.items,'iloc',self.items)[i] i = mask2idxs(i) return (self.items.iloc[list(i)] if hasattr(self.items,'iloc') else self.items.__array__()[(i,)] if hasattr(self.items,'__array__') else [self.items[i_] for i_ in i]) def __setitem__(self, idx, o): "Set `idx` (can be list of indices, or mask, or int) items to `o` (which is broadcast if not iterable)" if isinstance(idx, int): self.items[idx] = o else: idx = idx if isinstance(idx,L) else _listify(idx) if not is_iter(o): o = [o]*len(idx) for i,o_ in zip(idx,o): self.items[i] = o_ def __iter__(self): return iter(self.items.itertuples() if hasattr(self.items,'iloc') else self.items) def __contains__(self,b): return b in self.items def __reversed__(self): return self._new(reversed(self.items)) def __invert__(self): return self._new(not i for i in self) def __eq__(self,b): return False if isinstance(b, (str,dict,set)) else all_equal(b,self) def __repr__(self): return repr(self.items) if _is_array(self.items) else coll_repr(self) def __mul__ (a,b): return a._new(a.items*b) def __add__ (a,b): return a._new(a.items+_listify(b)) def __radd__(a,b): return a._new(b)+a def __iadd__(a,b): a.items += list(b) return a def sorted(self, key=None, reverse=False): if isinstance(key,str): k=lambda o:getattr(o,key,0) elif isinstance(key,int): k=itemgetter(key) else: k=key return self._new(sorted(self.items, key=k, reverse=reverse)) @classmethod def split(cls, s, sep=None, maxsplit=-1): return cls(s.split(sep,maxsplit)) @classmethod def range(cls, a, b=None, step=None): if is_coll(a): a = len(a) return cls(range(a,b,step) if step is not None else range(a,b) if b is not None else range(a)) def map(self, f, *args, **kwargs): g = (bind(f,*args,**kwargs) if callable(f) else f.format if isinstance(f,str) else f.__getitem__) return self._new(map(g, self)) def filter(self, f, negate=False, **kwargs): if
kwargs: f = partial(f,**kwargs) if negate: f = negate_func(f) return self._new(filter(f, self)) def argwhere(self, f, negate=False, **kwargs): if kwargs: f = partial(f,**kwargs) if negate: f = negate_func(f) return self._new(i for i,o in enumerate(self) if f(o)) def unique(self): return L(dict.fromkeys(self).keys()) def enumerate(self): return L(enumerate(self)) def val2idx(self): return {v:k for k,v in self.enumerate()} def itemgot(self, *idxs): x = self for idx in idxs: x = x.map(itemgetter(idx)) return x def attrgot(self, k, default=None): return self.map(lambda o: o.get(k,default) if isinstance(o, dict) else getattr(o,k,default)) def cycle(self): return cycle(self) def map_dict(self, f=noop, *args, **kwargs): return {k:f(k, *args,**kwargs) for k in self} def starmap(self, f, *args, **kwargs): return self._new(itertools.starmap(partial(f,*args,**kwargs), self)) def zip(self, cycled=False): return self._new((zip_cycle if cycled else zip)(*self)) def zipwith(self, *rest, cycled=False): return self._new([self, *rest]).zip(cycled=cycled) def map_zip(self, f, *args, cycled=False, **kwargs): return self.zip(cycled=cycled).starmap(f, *args, **kwargs) def map_zipwith(self, f, *rest, cycled=False, **kwargs): return self.zipwith(*rest, cycled=cycled).starmap(f, **kwargs) def concat(self): return self._new(itertools.chain.from_iterable(self.map(L))) def shuffle(self): it = copy(self.items) random.shuffle(it) return self._new(it) def append(self,o): return self.items.append(o) def remove(self,o): return self.items.remove(o) def count (self,o): return self.items.count(o) def reverse(self ): return self.items.reverse() def pop(self,o=-1): return self.items.pop(o) def clear(self ): return self.items.clear() def index(self, value, start=0, stop=sys.maxsize): return self.items.index(value, start, stop) def sort(self, key=None, reverse=False): return self.items.sort(key=key, reverse=reverse) def reduce(self, f, initial=None): return reduce(f, self) if initial is None else 
reduce(f, self, initial) def sum(self): return self.reduce(operator.add) def product(self): return self.reduce(operator.mul) #export _docs = {o:"Passthru to `list` method" for o in 'append count remove reverse sort pop clear index'.split()} add_docs(L, __getitem__="Retrieve `idx` (can be list of indices, or mask, or int) items", range="Same as `range`, but returns an `L`. Can pass a collection for `a`, to use `len(a)`", split="Same as `str.split`, but returns an `L`", copy="Same as `list.copy`, but returns an `L`", sorted="New `L` sorted by `key`. If key is str then use `attrgetter`. If key is int then use `itemgetter`", unique="Unique items, in stable order", val2idx="Dict from value to index", filter="Create new `L` filtered by predicate `f`, passing `args` and `kwargs` to `f`", argwhere="Like `filter`, but return indices for matching items", map="Create new `L` with `f` applied to all `items`, passing `args` and `kwargs` to `f`", map_dict="Like `map`, but creates a dict from `items` to function results", starmap="Like `map`, but use `itertools.starmap`", itemgot="Create new `L` with item `idx` of all `items`", attrgot="Create new `L` with attr `k` of all `items`, if `items` contains dicts, then `L` will contain corresponding values for key `k` for each dict.", cycle="Same as `itertools.cycle`", enumerate="Same as `enumerate`", zip="Create new `L` with `zip(*items)`", zipwith="Create new `L` with `self` zip with each of `*rest`", map_zip="Combine `zip` and `starmap`", map_zipwith="Combine `zipwith` and `starmap`", concat="Concatenate all elements of list", shuffle="Same as `random.shuffle`, but not inplace", reduce="Wrapper for `functools.reduce`", sum="Sum of the items", product="Product of the items", **_docs) #export Sequence.register(L); ``` You can create an `L` from an existing iterable (e.g. a list, range, etc) and access or modify it with an int list/tuple index, mask, int, or slice. All `list` methods can also be used with `L`. 
``` t = L(range(12)) test_eq(t, list(range(12))) test_ne(t, list(range(11))) t.reverse() test_eq(t[0], 11) t[3] = "h" test_eq(t[3], "h") t[3,5] = ("j","k") test_eq(t[3,5], ["j","k"]) test_eq(t, L(t)) test_eq(L(L(1,2),[3,4]), ([1,2],[3,4])) t ``` Any `L` is a `Sequence` so you can use it with methods like `random.sample`: ``` assert isinstance(t, Sequence) import random random.sample(t, 3) #hide # test set items with L of collections x = L([[1,2,3], [4,5], [6,7]]) x[0] = [1,2] test_eq(x, L([[1,2], [4,5], [6,7]])) ``` There are optimized indexers for arrays, tensors, and DataFrames. ``` #hide import pandas as pd arr = np.arange(9).reshape(3,3) t = L(arr, use_list=None) test_eq(t[1,2], arr[[1,2]]) arr = np.arange(9).reshape(3,3) t = L(arr, use_list=None) test_eq(t[1,2], arr[[1,2]]) df = pd.DataFrame({'a':[1,2,3]}) t = L(df, use_list=None) test_eq(t[1,2], L(pd.DataFrame({'a':[2,3]}, index=[1,2]), use_list=None)) ``` You can also modify an `L` with `append`, `+`, and `*`. ``` t = L() test_eq(t, []) t.append(1) test_eq(t, [1]) t += [3,2] test_eq(t, [1,3,2]) t = t + [4] test_eq(t, [1,3,2,4]) t = 5 + t test_eq(t, [5,1,3,2,4]) test_eq(L(1,2,3), [1,2,3]) test_eq(L(1,2,3), L(1,2,3)) t = L(1)*5 t = t.map(operator.neg) test_eq(t,[-1]*5) test_eq(~L([True,False,False]), L([False,True,True])) t = L(range(4)) test_eq(zip(t, L(1).cycle()), zip(range(4),(1,1,1,1))) t = L.range(100) test_shuffled(t,t.shuffle()) def _f(x,a=0): return x+a t = L(1)*5 test_eq(t.map(_f), t) test_eq(t.map(_f,1), [2]*5) test_eq(t.map(_f,a=2), [3]*5) ``` An `L` can be constructed from anything iterable, although tensors and arrays will not be iterated over on construction, unless you pass `use_list` to the constructor. 
```
test_eq(L([1,2,3]),[1,2,3])
test_eq(L(L([1,2,3])),[1,2,3])
test_ne(L([1,2,3]),[1,2,])
test_eq(L('abc'),['abc'])
test_eq(L(range(0,3)),[0,1,2])
test_eq(L(o for o in range(0,3)),[0,1,2])
test_eq(L(array(0)),[array(0)])
test_eq(L([array(0),array(1)]),[array(0),array(1)])
test_eq(L(array([0.,1.1]))[0],array([0.,1.1]))
test_eq(L(array([0.,1.1]), use_list=True), [array(0.),array(1.1)]) # `use_list=True` to unwrap arrays
```

If `match` is not `None` then the created list is the same length as `match`, either by:

- If `len(items)==1` then `items` is replicated,
- Otherwise an error is raised if `match` and `items` are not already the same size.

```
test_eq(L(1,match=[1,2,3]),[1,1,1])
test_eq(L([1,2],match=[2,3]),[1,2])
test_fail(lambda: L([1,2],match=[1,2,3]))
```

If you create an `L` from an existing `L` then you'll get back the original object (since `L` uses the `NewChkMeta` metaclass).

```
test_is(L(t), t)
```

An `L` is considered equal to a list if they have the same elements. It's never considered equal to a `str`, a `set`, or a `dict`, even if they have the same elements/keys.
``` test_eq(L(['a', 'b']), ['a', 'b']) test_ne(L(['a', 'b']), 'ab') test_ne(L(['a', 'b']), {'a', 'b'}) test_ne(L(['a', 'b']), {'a':1, 'b':2}) ``` ### Methods ``` show_doc(L.__getitem__) t = L(range(12)) test_eq(t[1,2], [1,2]) # implicit tuple test_eq(t[[1,2]], [1,2]) # list test_eq(t[:3], [0,1,2]) # slice test_eq(t[[False]*11 + [True]], [11]) # mask test_eq(t[array(3)], 3) show_doc(L.__setitem__) t[4,6] = 0 test_eq(t[4,6], [0,0]) t[4,6] = [1,2] test_eq(t[4,6], [1,2]) show_doc(L.unique) test_eq(L(1,2,3,4,4).unique(), [1,2,3,4]) show_doc(L.val2idx) test_eq(L(1,2,3).val2idx(), {3:2,1:0,2:1}) show_doc(L.filter) list(t) test_eq(t.filter(lambda o:o<5), [0,1,2,3,1,2]) test_eq(t.filter(lambda o:o<5, negate=True), [5,7,8,9,10,11]) show_doc(L.argwhere) test_eq(t.argwhere(lambda o:o<5), [0,1,2,3,4,6]) show_doc(L.map) test_eq(L.range(4).map(operator.neg), [0,-1,-2,-3]) ``` If `f` is a string then it is treated as a format string to create the mapping: ``` test_eq(L.range(4).map('#{}#'), ['#0#','#1#','#2#','#3#']) ``` If `f` is a dictionary (or anything supporting `__getitem__`) then it is indexed to create the mapping: ``` test_eq(L.range(4).map(list('abcd')), list('abcd')) ``` If the special argument `_arg` is passed, then that is the kwarg used in the map. ``` #What is this? 
TODO Jeremy: fix #L.range(4).map(f, b=arg0) def f(a=None,b=None): return b test_eq(L.range(4).map(f, b=arg0), range(4)) show_doc(L.map_dict) test_eq(L(range(1,5)).map_dict(), {1:1, 2:2, 3:3, 4:4}) test_eq(L(range(1,5)).map_dict(operator.neg), {1:-1, 2:-2, 3:-3, 4:-4}) show_doc(L.zip) t = L([[1,2,3],'abc']) test_eq(t.zip(), [(1, 'a'),(2, 'b'),(3, 'c')]) t = L([[1,2,3,4],['a','b','c']]) test_eq(t.zip(cycled=True ), [(1, 'a'),(2, 'b'),(3, 'c'),(4, 'a')]) test_eq(t.zip(cycled=False), [(1, 'a'),(2, 'b'),(3, 'c')]) show_doc(L.map_zip) t = L([1,2,3],[2,3,4]) test_eq(t.map_zip(operator.mul), [2,6,12]) show_doc(L.zipwith) b = [[0],[1],[2,2]] t = L([1,2,3]).zipwith(b) test_eq(t, [(1,[0]), (2,[1]), (3,[2,2])]) show_doc(L.map_zipwith) test_eq(L(1,2,3).map_zipwith(operator.mul, [2,3,4]), [2,6,12]) show_doc(L.itemgot) test_eq(t.itemgot(1), b) show_doc(L.attrgot) # Example when items are not a dict a = [SimpleNamespace(a=3,b=4),SimpleNamespace(a=1,b=2)] test_eq(L(a).attrgot('b'), [4,2]) #Example of when items are a dict b =[{'id': 15, 'name': 'nbdev'}, {'id': 17, 'name': 'fastcore'}] test_eq(L(b).attrgot('id'), [15, 17]) show_doc(L.sorted) test_eq(L(a).sorted('a').attrgot('b'), [2,4]) show_doc(L.split) test_eq(L.split('a b c'), list('abc')) show_doc(L.range) test_eq_type(L.range([1,1,1]), L(range(3))) test_eq_type(L.range(5,2,2), L(range(5,2,2))) show_doc(L.concat) test_eq(L([0,1,2,3],4,L(5,6)).concat(), range(7)) show_doc(L.copy) t = L([0,1,2,3],4,L(5,6)).copy() test_eq(t.concat(), range(7)) ``` # Export - ``` #hide from nbdev.export import notebook2script notebook2script() ```
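The flexible `map` dispatch described above (a callable, a format string, or anything supporting `__getitem__`) can be sketched in plain Python. This is a hypothetical stand-in to illustrate the dispatch rule, not fastcore's actual implementation:

```python
def flexible_map(f, items):
    # Mimic L.map's argument handling: callables are applied directly,
    # strings are treated as format strings, and anything else with
    # __getitem__ (dict, list, ...) is indexed by each item.
    if callable(f):
        g = f
    elif isinstance(f, str):
        g = lambda o: f.format(o)
    elif hasattr(f, '__getitem__'):
        g = lambda o: f[o]
    else:
        raise TypeError("unsupported mapper")
    return [g(o) for o in items]

print(flexible_map(lambda x: -x, range(4)))  # [0, -1, -2, -3]
print(flexible_map('#{}#', range(4)))        # ['#0#', '#1#', '#2#', '#3#']
print(flexible_map(list('abcd'), range(4)))  # ['a', 'b', 'c', 'd']
```

Note that the `str` check must come before the `__getitem__` check, since strings also support indexing.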
```
import numpy as np
import pandas as pd
import joblib  # sklearn.externals.joblib is deprecated; use the standalone joblib package

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import recall_score, precision_score, confusion_matrix
from tabulate import tabulate

pd.set_option('display.max_columns', 100)
pd.set_option('display.width', 500)

# import sys
# sys.path.append('michael/deeplearn.det/deeplearn')
# from model import evaluation_print
# from michael//deeplearn.det//deeplearn//model import evaluation_print


def string_preproc(a):
    """
    Strip the irrelevant parts from a process name string.

    example:
    input "mailservice_nimol_server_qmgr.nimol.hist"
    output "mailservice\_server\_qmgr"
    """
    spl = a.split('_')
    return '\_'.join([spl[0], spl[2], spl[3].split('.')[0]])


def top_n_predicted(probs, true_labels, top_n_value):
    """
    input: probabilities per point, the true labels, and a top-n value
    output: predicted labels. If the true label is within the top-n
    predicted values it returns the true label in its output,
    otherwise it returns the argmax.
    """
    predicted_labels_top = []
    for line, true_label in zip(probs, true_labels):
        sorted_probs = np.argsort(line)[::-1][:top_n_value]
        if true_label in sorted_probs:
            predicted_labels_top.append(true_label)
        else:
            predicted_labels_top.append(sorted_probs[0])
    return predicted_labels_top


def top_n(vals, names, top_n_val):
    top_sorted_indexes = np.argsort(vals)[::-1][:top_n_val]
    return [names[i] for i in top_sorted_indexes]


def evaluation_print(true_labels, prediction, process_names, top_n_value=1):
    # if top_n > 1:
    #     # print prediction
    #     predicted_labels = top_n_predicted(prediction, true_labels, top_n_value=top_n_value)
    #     # predicted_labels
    # elif top_n_value == 1:
    #     predicted_labels = prediction
    predicted_labels = prediction

    np.set_printoptions(precision=6, suppress=True)
    np.set_printoptions(threshold=10000, linewidth=1000)

    conf_matrix = confusion_matrix(true_labels, predicted_labels)
    process_names = [string_preproc(nam) for nam in process_names]

    print(process_names)
    print('top_n =', top_n_value)
    # print(conf_matrix)
    n = conf_matrix.shape[0]
    print(pd.DataFrame(conf_matrix, index=range(n), columns=range(n)))
    print('macro average recall', '%.2f' % recall_score(true_labels, predicted_labels, average='macro'))
    print('macro average precision', '%.2f' % precision_score(true_labels, predicted_labels, average='macro'))

    print('\\begin{tabular} {rrr}')
    print('\\hline')
    for i, (name, recall, precision) in enumerate(zip(
            process_names,
            recall_score(true_labels, predicted_labels, average=None),
            precision_score(true_labels, predicted_labels, average=None)
    )):
        print(i, name, '&', '%.4f' % recall, '&', '%.4f' % precision, '\\\\')
        # print(name, '%.4f' % recall, '%.4f' % precision)
    print('\\hline')
    print('\\end{tabular}')
    print('\n' * 2)

    conf_matrix = np.hstack([np.array(range(n), ndmin=2).T, conf_matrix])
    conf_matrix = np.vstack([[0] + list(range(n)), conf_matrix])  # list() needed on Python 3
    print(tabulate(conf_matrix, tablefmt="latex"))

%ls data/

%time data, proc_names = joblib.load('data/data_docker.pkl')

data_blocks = [d.T for _,
d in data]
labels = [np.argmax(l) for l, _ in data]
labels = [[l] * 10 for l in labels]
labels = [item for sublist in labels for item in sublist]

data_sum = [d.sum(axis=1) for d in data_blocks]
data_concat = [d.ravel() for d in data_blocks]
data_lines = np.vstack(data_blocks)
data_lines.shape

# train_data, test_data, train_label, test_label = train_test_split(data_sum, labels, test_size=.2, random_state=100)
train_data, test_data, train_label, test_label = train_test_split(data_lines, labels, test_size=.2, random_state=100)
len(test_label)

r, p = [], []
for i in range(10):
    # clf = LogisticRegression(n_jobs=8)
    clf = LinearSVC(C=1)
    # clf = RandomForestClassifier()
    clf.fit(train_data, train_label)
    prediction = clf.predict(test_data)
    # print(confusion_matrix(prediction, test_label))
    r.append(recall_score(test_label, prediction, average='macro'))
    p.append(precision_score(test_label, prediction, average='macro'))
    print(i)

np.set_printoptions(suppress=True, precision=3)
print('%.3f' % np.mean(p), np.std(p), '%.3f' % np.mean(r), np.std(r))
```
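The top-n scoring idea used in `top_n_predicted` above can be illustrated without numpy. This is a plain-Python sketch of the same logic, not the exact function from this notebook:

```python
def top_n_predicted(probs, true_labels, top_n_value):
    # For each row, keep the true label if it ranks within the top-n
    # probabilities; otherwise fall back to the argmax.
    predicted = []
    for line, true_label in zip(probs, true_labels):
        ranked = sorted(range(len(line)), key=lambda i: line[i], reverse=True)
        top = ranked[:top_n_value]
        predicted.append(true_label if true_label in top else top[0])
    return predicted

probs = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]
print(top_n_predicted(probs, [2, 1], top_n_value=2))  # [2, 1]
print(top_n_predicted(probs, [2, 1], top_n_value=1))  # [1, 0]
```

With `top_n_value=2`, both true labels fall inside the top two predictions, so the "prediction" is counted as correct; with `top_n_value=1` the function degenerates to plain argmax.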
``` from __future__ import print_function import tensorflow as tf from tensorflow.contrib import rnn # Import MNIST data from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("../data/", one_hot=True) # Training Parameters learning_rate = 0.001 training_steps = 10000 batch_size = 128 display_step = 200 # Network Parameters num_input = 28 # MNIST data input (img shape: 28*28) timesteps = 28 # timesteps num_hidden = 128 # hidden layer num of features num_classes = 10 # MNIST total classes (0-9 digits) # tf Graph input X = tf.placeholder("float", [None, timesteps, num_input]) Y = tf.placeholder("float", [None, num_classes]) # Define weights weights = { 'out': tf.Variable(tf.random_normal([num_hidden, num_classes])) } biases = { 'out': tf.Variable(tf.random_normal([num_classes])) } def RNN(x, weights, biases): # Prepare data shape to match `rnn` function requirements # Current data input shape: (batch_size, timesteps, n_input) # Required shape: 'timesteps' tensors list of shape (batch_size, n_input) # Unstack to get a list of 'timesteps' tensors of shape (batch_size, n_input) x = tf.unstack(x, timesteps, 1) # Define a lstm cell with tensorflow lstm_cell = rnn.BasicLSTMCell(num_hidden, forget_bias=1.0) # Get lstm cell output outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32) # Linear activation, using rnn inner loop last output return tf.matmul(outputs[-1], weights['out']) + biases['out'] logits = RNN(X, weights, biases) prediction = tf.nn.softmax(logits) # Define loss and optimizer loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits( logits=logits, labels=Y)) optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) train_op = optimizer.minimize(loss_op) # Evaluate model (with test logits, for dropout to be disabled) correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) # Initialize the variables (i.e. 
assign their default value) init = tf.global_variables_initializer() # Start training with tf.Session() as sess: # Run the initializer sess.run(init) for step in range(1, training_steps+1): batch_x, batch_y = mnist.train.next_batch(batch_size) # Reshape data to get 28 seq of 28 elements batch_x = batch_x.reshape((batch_size, timesteps, num_input)) # Run optimization op (backprop) sess.run(train_op, feed_dict={X: batch_x, Y: batch_y}) if step % display_step == 0 or step == 1: # Calculate batch loss and accuracy loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x, Y: batch_y}) print("Step " + str(step) + ", Minibatch Loss= " + \ "{:.4f}".format(loss) + ", Training Accuracy= " + \ "{:.3f}".format(acc)) print("Optimization Finished!") # Calculate accuracy for 128 mnist test images test_len = 128 test_data = mnist.test.images[:test_len].reshape((-1, timesteps, num_input)) test_label = mnist.test.labels[:test_len] print("Testing Accuracy:", \ sess.run(accuracy, feed_dict={X: test_data, Y: test_label})) ```
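The `tf.unstack` step above turns a `(batch_size, timesteps, num_input)` batch into a list of `timesteps` tensors of shape `(batch_size, num_input)`, which is what `rnn.static_rnn` expects. A minimal pure-Python illustration of that reshaping (a hypothetical helper, not the TensorFlow internals):

```python
def unstack_time(batch):
    # batch: nested lists of shape (B, T, N) -> list of T items of shape (B, N)
    timesteps = len(batch[0])
    return [[sample[t] for sample in batch] for t in range(timesteps)]

batch = [[[1, 2], [3, 4], [5, 6]],     # sample 0: 3 timesteps of 2 features
         [[7, 8], [9, 10], [11, 12]]]  # sample 1
steps = unstack_time(batch)
print(len(steps))  # 3
print(steps[0])    # [[1, 2], [7, 8]] -- all samples at timestep 0
```

For MNIST, each 28x28 image becomes a sequence of 28 timesteps, each a row of 28 pixels.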
```
import torch
print(torch.__version__)
```

# 2.3 Automatic Gradient Computation

## 2.3.1 Concepts

The `Tensor` introduced in the previous section is the core class of this package. If you set its attribute `.requires_grad` to `True`, it starts tracking all operations performed on it. Once the computation is finished, you can call `.backward()` to compute all the gradients automatically. The gradient of this `Tensor` is accumulated into its `.grad` attribute.

> Note: when calling `.backward()`, if the `Tensor` is a scalar you do not need to pass any argument to `backward()`; otherwise you need to specify a tensor to differentiate against.

If you do not want a tensor to be tracked any further, you can call `.detach()` to detach it from the recorded history, which prevents future computations from being tracked. You can also wrap code blocks in `with torch.no_grad()` to exclude them from tracking. This is very common when evaluating a model, because at evaluation time we do not need the gradients of the trainable parameters (`requires_grad=True`).

`Function` is another important class. `Tensor` and `Function` together build an acyclic graph that records the entire computation history. Each `Tensor` has a `.grad_fn` attribute referencing the `Function` that created it (it is `None` for `Tensor`s created directly by the user).

Let's work through some examples to understand these concepts.

## 2.3.2 `Tensor`

```
x = torch.ones(2, 2, requires_grad=True)
print(x)
print(x.grad_fn)

import numpy as np
y = x + 2
print(y)
print(y.grad_fn)
```

Note that `x` was created directly, so it has no `grad_fn`, while `y` was created by an addition, so it has a `grad_fn` of type `<AddBackward>`.

```
print(x.is_leaf, y.is_leaf)

z = y * y * 3
out = z.mean()
print(z, out)
```

Use `.requires_grad_()` to change the `requires_grad` attribute in place:

```
a = torch.randn(2, 2) # requires_grad defaults to False
a = ((a * 3) / (a - 1))
print(a.requires_grad) # False
a.requires_grad_(True)
print(a.requires_grad) # True
b = (a * a).sum()
print(b.grad_fn)
```

## 2.3.3 Gradients

Since `out` is a scalar, no argument needs to be passed when calling `backward()`:

```
out.backward() # equivalent to out.backward(torch.tensor(1.))
print(x.grad)
```

Let `out` be $o$. Since

$$
o=\frac14\sum_{i=1}^4z_i=\frac14\sum_{i=1}^43(x_i+2)^2
$$

we have

$$
\frac{\partial{o}}{\partial{x_i}}\bigr\rvert_{x_i=1}=\frac{9}{2}=4.5
$$

so the output above is correct.

Mathematically, for a function $\vec{y}=f(\vec{x})$ whose value and argument are both vectors, the gradient of $\vec{y}$ with respect to $\vec{x}$ is a Jacobian matrix:

$$
J=\left(\begin{array}{ccc}
\frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{1}}{\partial x_{n}}\\
\vdots & \ddots & \vdots\\
\frac{\partial y_{m}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}}
\end{array}\right)
$$

The `torch.autograd` package computes products of vectors with such Jacobian matrices. For example, if $v$ is the gradient of a scalar function $l=g\left(\vec{y}\right)$:

$$ v=\left(\begin{array}{ccc}\frac{\partial l}{\partial y_{1}} &
\cdots & \frac{\partial l}{\partial y_{m}}\end{array}\right)
$$

then by the chain rule, the Jacobian-vector product gives the gradient of $l$ with respect to $\vec{x}$:

$$
v \cdot J=\left(\begin{array}{ccc}\frac{\partial l}{\partial y_{1}} & \cdots & \frac{\partial l}{\partial y_{m}}\end{array}\right)
\left(\begin{array}{ccc}
\frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{1}}{\partial x_{n}}\\
\vdots & \ddots & \vdots\\
\frac{\partial y_{m}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}}
\end{array}\right)=\left(\begin{array}{ccc}\frac{\partial l}{\partial x_{1}} & \cdots & \frac{\partial l}{\partial x_{n}}\end{array}\right)
$$

Note: gradients are accumulated during backpropagation. Every backward pass adds the new gradients onto the ones already stored, so you normally need to zero the gradients before running backpropagation again.

```
# Run backpropagation one more time; note that grad accumulates
out2 = x.sum()
out2.backward()
print(x.grad)

out3 = x.sum()
x.grad.data.zero_()
out3.backward()
print(x.grad)

x = torch.tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)
y = 2 * x
z = y.view(2, 2)
print(z)
```

Now `y` is not a scalar, so when calling `backward` we need to pass a weight vector of the same shape as `y`; the weighted sum then yields a scalar.

```
v = torch.tensor([[1.0, 0.1], [0.01, 0.001]], dtype=torch.float)
z.backward(v)
print(x.grad)
```

Now let's look at an example of interrupting gradient tracking:

```
x = torch.tensor(1.0, requires_grad=True)
y1 = x ** 2
with torch.no_grad():
    y2 = x ** 3
y3 = y1 + y2

print(x, x.requires_grad)
print(y1, y1.requires_grad)
print(y2, y2.requires_grad)
print(y3, y3.requires_grad)

y3.backward()
print(x.grad)
```

Why 2? Since $ y_3 = y_1 + y_2 = x^2 + x^3$, shouldn't $\frac {dy_3} {dx}$ be 5 at $x=1$? In fact, because the definition of $y_2$ is wrapped in `torch.no_grad():`, the gradients related to $y_2$ are not propagated back; only the gradients related to $y_1$, i.e. the gradient of $x^2$ with respect to $x$, are.

As mentioned above, `y2.requires_grad=False`, so `y2.backward()` cannot be called.

```
# y2.backward() # raises RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```

If we want to modify a tensor's values without `autograd` recording it (i.e., without affecting backpropagation), we can operate on `tensor.data`.

```
x = torch.ones(1,requires_grad=True)

print(x.data) # still a tensor
print(x.data.requires_grad) # but already detached from the computation graph

y = 2 * x
x.data *= 100 # only changes the value; not recorded in the graph, so gradient propagation is unaffected

y.backward()
print(x) # modifying .data also changes the tensor's value
print(x.grad)
```
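To see where the `x.grad` values in the weighted-backward example come from, we can compute $v \cdot J$ by hand. Here $y = 2x$, so the Jacobian is $2I$ and the result is simply $2v$ (a plain-Python check, no torch required):

```python
n = 4
# Jacobian of y = 2*x (the flattened view of z): dy_i/dx_j = 2 if i == j else 0
J = [[2.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
v = [1.0, 0.1, 0.01, 0.001]  # the weights passed to z.backward(v), flattened

# vector-Jacobian product: grad_j = sum_i v_i * J[i][j]
grad = [sum(v[i] * J[i][j] for i in range(n)) for j in range(n)]
print(grad)  # [2.0, 0.2, 0.02, 0.002] -- matches x.grad printed above
```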
## Importing Libraries

```
import datetime

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow.keras
from tensorflow.keras import activations
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.regularizers import l2
from tensorflow.keras.layers import Activation, Add, AveragePooling2D, BatchNormalization, Conv2D, Conv2DTranspose, Dense, Flatten, Input, Reshape, ZeroPadding2D
from tensorflow.keras.applications import ResNet101

from PIL import Image
from google.colab import drive
```

## Initializing GPU Runtime

```
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
    raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
```

## Creating Required Layer Blocks

```
def dres_conv(x, s, filters):
    # here the input size changes
    x_skip = x
    f1, f2 = filters

    # first block
    x = Conv2DTranspose(f2, kernel_size=(1, 1), strides=(1, 1), padding='valid')(x)
    x = BatchNormalization()(x)
    x = Activation(activations.relu)(x)

    # second block
    x = Conv2DTranspose(f1, kernel_size=(3, 3), strides=(1, 1), padding='same')(x)
    x = BatchNormalization()(x)
    x = Activation(activations.relu)(x)

    # third block
    x = Conv2DTranspose(f1, kernel_size=(1, 1), strides=(s, s), padding='valid')(x) # when s = 2 this upsamples the feature map (transposed conv)
    x = BatchNormalization()(x)

    # shortcut
    x_skip = Conv2DTranspose(f1, kernel_size=(1, 1), strides=(s, s), padding='valid')(x_skip)
    x_skip = BatchNormalization()(x_skip)

    # add
    x = Add()([x, x_skip])
    x = Activation(activations.relu)(x)

    return x


def dres_identity(x, filters):
    # resnet block where the dimension does not change.
    # The skip connection is just a simple identity connection
    # There will be 3 blocks, and then the input will be added

    x_skip = x # this will be used for addition with the residual block
    f1, f2 = filters

    # first block
    x = Conv2DTranspose(f2, kernel_size=(1, 1), strides=(1, 1), padding='valid')(x)
    x = BatchNormalization()(x)
    x = Activation(activations.relu)(x)

    # second block # bottleneck (but size kept same with padding)
    x = Conv2DTranspose(f1, kernel_size=(3, 3), strides=(1, 1), padding='same')(x)
    x = BatchNormalization()(x)
    x = Activation(activations.relu)(x)

    # third block; activation used after adding the input
    x = Conv2DTranspose(f1, kernel_size=(1, 1), strides=(1, 1), padding='valid')(x)
    x = BatchNormalization()(x)

    # add the input
    x = Add()([x, x_skip])
    x = Activation(activations.relu)(x)

    return x
```

## Loading Data

```
(x_train, _), (x_test, _) = cifar10.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
print(f"Shape of x_train: {x_train.shape}")
print(f"Shape of x_test: {x_test.shape}")
```

## Creating Encoder

```
input_im = Input(shape=(x_train.shape[1], x_train.shape[2], x_train.shape[3]))

Encoder = ResNet101(include_top=False, weights='imagenet', input_shape=(x_train.shape[1], x_train.shape[2], x_train.shape[3]))

x = Encoder(input_im)
x = Flatten()(x)
encoding = Dense(2048, kernel_initializer='he_normal')(x)

encoder = tf.keras.Model(inputs=input_im, outputs=encoding, name='Encoder')
```

## Creating Decoder

```
# Decoder
dec_input = Input(shape=(2048,))

x = Dense(2 * 2 * 2048, kernel_initializer='he_normal')(dec_input)
x = Reshape((2, 2, 2048))(x)

x = dres_conv(x, s=2, filters=(512, 2048))
x = dres_identity(x, filters=(512, 2048))
x = dres_identity(x, filters=(512, 2048))

x = dres_conv(x, s=2, filters=(256, 1024))
x = dres_identity(x, filters=(256, 1024))
x = dres_identity(x, filters=(256, 1024))
x = dres_identity(x, filters=(256, 1024))
x = dres_identity(x, filters=(256, 1024))
x = dres_identity(x, filters=(256,
1024))
# 16 more identity blocks in this stage (22 total, mirroring ResNet101's conv4 stage)
for _ in range(16):
    x = dres_identity(x, filters=(256, 1024))

x = dres_conv(x, s=2, filters=(128, 512))
x = dres_identity(x, filters=(128, 512))
x = dres_identity(x, filters=(128, 512))
x = dres_identity(x, filters=(128, 512))

x = dres_conv(x, s=1, filters=(64, 256))
x = dres_identity(x, filters=(64, 256))
x = dres_identity(x, filters=(64, 256))

x = Conv2DTranspose(3, kernel_size=(7, 7), strides=(2, 2), padding='same')(x)
x = BatchNormalization()(x)
decoded = Activation(activations.sigmoid)(x)

decoder = tf.keras.Model(inputs=dec_input, outputs=decoded, name='Decoder')
```

## Creating Auto Encoder

```
enc_input = Input(shape=(x_train.shape[1], x_train.shape[2], x_train.shape[3]))
encoding = encoder(enc_input)
decoded = decoder(encoding)

auto_encoder = tf.keras.Model(inputs=enc_input, outputs=decoded, name='AutoEncoder')
auto_encoder.compile(optimizer='adam', loss=tf.keras.losses.MeanSquaredError())
auto_encoder.summary()
```

## Training and Saving Model

Skip this section if you want to simply load the pretrained model

```
%load_ext tensorboard
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)

auto_encoder.fit(x_train, x_train,
                 epochs=10,
                 batch_size=128,
                 shuffle=True,
                 validation_data=(x_test, x_test),
                 callbacks=[tensorboard_callback])  # fit expects a list of callbacks

# %tensorboard --logdir logs
# Uncomment and run the above line to start TensorBoard GUI

drive.mount('/content/gdrive', force_remount=True)

ENC_STORE_PATH = "/content/gdrive/My Drive/Colab/TF_ResNet101_ENC.h5"
DEC_STORE_PATH = "/content/gdrive/My Drive/Colab/TF_ResNet101_DEC.h5"
AE_STORE_PATH = "/content/gdrive/My Drive/Colab/TF_ResNet101_AE.h5"

encoder.save_weights(ENC_STORE_PATH)
decoder.save_weights(DEC_STORE_PATH)
auto_encoder.save_weights(AE_STORE_PATH)
```
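The training objective above is plain mean squared error between the input image and its reconstruction. As a reminder of what `tf.keras.losses.MeanSquaredError` computes, a tiny pure-Python sketch (an illustration only, not the Keras implementation):

```python
def mse(y_true, y_pred):
    # mean of element-wise squared differences over the flattened inputs
    flat_true = [v for row in y_true for v in row]
    flat_pred = [v for row in y_pred for v in row]
    return sum((a - b) ** 2 for a, b in zip(flat_true, flat_pred)) / len(flat_true)

print(mse([[0.0, 0.0], [0.0, 0.0]], [[1.0, 1.0], [1.0, 1.0]]))  # 1.0
print(mse([[0.0, 1.0]], [[0.0, 1.0]]))                          # 0.0
```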
# D-optimal experiment design: comparing ABPG and Frank-Wolfe

Solve the D-optimal experiment design problem

$$
\begin{array}{ll}
\textrm{minimize} & F(x):=-\log\left(\det\left(\sum_{i=1}^n x_i V_i V_i^T\right)\right) \\
\textrm{subject to} & \sum_{i=1}^n x_i = 1, \\
& x_i\geq 0, \quad i=1,\ldots,n
\end{array}
$$

where $V_i\in R^m$ for $i=1,\ldots,n$.

Methods compared:

* Original Frank-Wolfe method
* Frank-Wolfe method with away steps
* Bregman Proximal Gradient (BPG) method with adaptive line search
* Accelerated Bregman Proximal Gradient (ABPG) method with gain adaptation

```
cd C:\\github\accbpg

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams.update({'font.size': 12, 'font.family': 'serif'})
#matplotlib.rcParams.update({'text.usetex': True})

import accbpg

# Generate a random problem instance, and Khachiyan initial point
f1, h1, L1, x01Kh = accbpg.D_opt_design(30, 1000)
# Construct the Kumar-Yildirim initial point
x01KY = accbpg.D_opt_KYinit(f1.H)

x1FWKh, F1FWKh, _, _, T1FWKh = accbpg.D_opt_FW(f1.H, x01Kh, 1e-8, 1000000, verbskip=100000)
x1FWKY, F1FWKY, _, _, T1FWKY = accbpg.D_opt_FW(f1.H, x01KY, 1e-8, 1000000, verbskip=100000)
x1WAKh, F1WAKh, _, _, T1WAKh = accbpg.D_opt_FW_away(f1.H, x01Kh, 1e-8, 10000, verbskip=1000)
x1WAKY, F1WAKY, _, _, T1WAKY = accbpg.D_opt_FW_away(f1.H, x01KY, 1e-8, 10000, verbskip=1000)

# ABPG cannot take initial points on the boundary of simplex, so use a mixture
x01Mx = (1-1e-4)*x01KY + 1e-4*x01Kh

x1LSKh, F1LSKh, _, T1LSKh = accbpg.BPG(f1, h1, L1, x01Kh, maxitrs=10000, linesearch=True, verbskip=1000)
x1LSKY, F1LSKY, _, T1LSKY = accbpg.BPG(f1, h1, L1, x01Mx, maxitrs=10000, linesearch=True, verbskip=1000)
x1ABKh, F1ABKh, _, _, _, T1ABKh = accbpg.ABPG_gain(f1, h1, L1, x01Kh, gamma=2, maxitrs=10000, verbskip=1000)
x1ABKY, F1ABKY, _, _, _, T1ABKY = accbpg.ABPG_gain(f1, h1, L1, x01Mx, gamma=2, maxitrs=12000, verbskip=1000)

# Plot the objective gap and estimated gains for triangle scaling
plt.subplots(1, 2,
figsize=(11, 4)) y_vals = [F1FWKh, F1FWKY, F1WAKh, F1WAKY, F1LSKh, F1LSKY, F1ABKh, F1ABKY] t_vals = [T1FWKh, T1FWKY, T1WAKh, T1WAKY, T1LSKh, T1LSKY, T1ABKh, T1ABKY] labels = [r"FW-Kha", r"FW-KY", r"WA-Kha", r"WA-KY", r"BPG-LS Kha", r"BPG-LS KY", r"ABPG-g Kha", r"ABPG-g KY"] ax1 = plt.subplot(1, 2, 1) accbpg.plot_comparisons(ax1, y_vals, labels, x_vals=[], plotdiff=True, yscale='log', xlim=[-100, 10000], ylim=[1e-12, 1e2], xlabel=r"Iteration number $k$", ylabel=r"$F(x_k)-F_\star$", legendloc="lower right", linestyles=['k:', 'g-', 'b-.', 'm-', 'k-.', 'c--', 'k-', 'r--']) ax2 = plt.subplot(1, 2, 2) accbpg.plot_comparisons(ax2, y_vals, labels, x_vals=t_vals, plotdiff=True, yscale='log', xlim=[-1, 60], ylim=[1e-12, 1e2], xlabel=r"Time (sec)", ylabel=r"$F(x_k)-F_\star$", legendloc="lower right", linestyles=['k:', 'g-', 'b-.', 'm-', 'k-.', 'c--', 'k-', 'r--']) plt.tight_layout(w_pad=4) plt.show() # Generate a random problem instance, and Khachiyan initial point f2, h2, L2, x02Kh = accbpg.D_opt_design(30, 10000) # Construct the Kumar-Yildirim initial point x02KY = accbpg.D_opt_KYinit(f2.H) x2FWKh, F2FWKh, _, _, T2FWKh = accbpg.D_opt_FW(f2.H, x02Kh, 1e-8, 100000, verbskip=10000) x2FWKY, F2FWKY, _, _, T2FWKY = accbpg.D_opt_FW(f2.H, x02KY, 1e-8, 100000, verbskip=10000) x2WAKh, F2WAKh, _, _, T2WAKh = accbpg.D_opt_FW_away(f2.H, x02Kh, 1e-8, 20000, verbskip=1000) x2WAKY, F2WAKY, _, _, T2WAKY = accbpg.D_opt_FW_away(f2.H, x02KY, 1e-8, 20000, verbskip=1000) # ABPG cannot take initial points on the boundary of simplex, so use a mixture x02Mx = (1-1e-3)*x02KY + 1e-3*x02Kh x2LSKh, F2LSKh, _, T2LSKh = accbpg.BPG(f2, h2, L2, x02Kh, maxitrs=10000, linesearch=True, verbskip=1000) x2LSKY, F2LSKY, _, T2LSKY = accbpg.BPG(f2, h2, L2, x02Mx, maxitrs=10000, linesearch=True, verbskip=1000) x2ABKh, F2ABKh, _, _, _, T2ABKh = accbpg.ABPG_gain(f2, h2, L2, x02Kh, gamma=2, maxitrs=10000, verbskip=1000) x2ABKY, F2ABKY, _, _, _, T2ABKY = accbpg.ABPG_gain(f2, h2, L2, x02Mx, gamma=2, maxitrs=10000, 
verbskip=1000) # Plot the objective gap and estimated gains for triangle scaling plt.subplots(1, 2, figsize=(11, 4)) y_vals = [F2FWKh, F2FWKY, F2WAKh, F2WAKY, F2LSKh, F2LSKY, F2ABKh, F2ABKY] t_vals = [T2FWKh, T2FWKY, T2WAKh, T2WAKY, T2LSKh, T2LSKY, T2ABKh, T2ABKY] labels = [r"FW-Kha", r"FW-KY", r"WA-Kha", r"WA-KY", r"BPG-LS Kha", r"BPG-LS KY", r"ABPG-g Kha", r"ABPG-g KY"] ax1 = plt.subplot(1, 2, 1) accbpg.plot_comparisons(ax1, y_vals, labels, x_vals=[], plotdiff=True, yscale='log', xlim=[-100, 20000], ylim=[1e-12, 1e2], xlabel=r"Iteration number $k$", ylabel=r"$F(x_k)-F_\star$", legendloc="lower right", linestyles=['k:', 'g-', 'b-.', 'm-', 'k-.', 'c--', 'k-', 'r--']) ax2 = plt.subplot(1, 2, 2) accbpg.plot_comparisons(ax2, y_vals, labels, x_vals=t_vals, plotdiff=True, yscale='log', xlim=[-10, 600], ylim=[1e-12, 1e2], xlabel=r"Time (sec)", ylabel=r"$F(x_k)-F_\star$", legendloc="lower right", linestyles=['k:', 'g-', 'b-.', 'm-', 'k-.', 'c--', 'k-', 'r--']) plt.tight_layout(w_pad=4) plt.show() import imp imp.reload(accbpg) # Generate a random problem instance, and Khachiyan initial point f3, h3, L3, x03Kh = accbpg.D_opt_design(1000, 2000) # Construct the Kumar-Yildirim initial point x03KY = accbpg.D_opt_KYinit(f3.H) x3FWKh, F3FWKh, _, _, T3FWKh = accbpg.D_opt_FW(f3.H, x03Kh, 1e-8, 10000, verbskip=1000) x3FWKY, F3FWKY, _, _, T3FWKY = accbpg.D_opt_FW(f3.H, x03KY, 1e-8, 10000, verbskip=1000) x3WAKh, F3WAKh, _, _, T3WAKh = accbpg.D_opt_FW_away(f3.H, x03Kh, 1e-8, 10000, verbskip=1000) x3WAKY, F3WAKY, _, _, T3WAKY = accbpg.D_opt_FW_away(f3.H, x03KY, 1e-8, 10000, verbskip=1000) # ABPG cannot take initial points on the boundary of simplex, so use a mixture x03Mx = (1-1e-4)*x03KY + 1e-4*x03Kh x3LSKh, F3LSKh, _, T3LSKh = accbpg.BPG(f3, h3, L3, x03Kh, maxitrs=10000, linesearch=True, verbskip=1000) x3LSKY, F3LSKY, _, T3LSKY = accbpg.BPG(f3, h3, L3, x03Mx, maxitrs=10000, linesearch=True, verbskip=1000) x3ABKh, F3ABKh, _, _, _, T3ABKh = accbpg.ABPG_gain(f3, h3, L3, x03Kh, gamma=2, 
maxitrs=10000, restart=True, verbskip=1000) x3ABKY, F3ABKY, _, _, _, T3ABKY = accbpg.ABPG_gain(f3, h3, L3, x03Mx, gamma=2, maxitrs=10000, restart=True, verbskip=1000) # Plot the objective gap and estimated gains for triangle scaling plt.subplots(1, 2, figsize=(11, 4)) y_vals = [F3FWKh, F3FWKY, F3WAKh, F3WAKY, F3LSKh, F3LSKY, F3ABKh, F3ABKY] t_vals = [T3FWKh, T3FWKY, T3WAKh, T3WAKY, T3LSKh, T3LSKY, T3ABKh, T3ABKY] labels = [r"FW-Kha", r"FW-KY", r"WA-Kha", r"WA-KY", r"BPG-LS Kha", r"BPG-LS KY", r"ABPG-g Kha", r"ABPG-g KY"] ax1 = plt.subplot(1, 2, 1) accbpg.plot_comparisons(ax1, y_vals, labels, x_vals=[], plotdiff=True, yscale='log', xlim=[-200, 10000], ylim=[1e-10, 2], xlabel=r"Iteration number $k$", ylabel=r"$F(x_k)-F_\star$", legendloc="lower right", linestyles=['k:', 'g-', 'b-.', 'm-', 'k-.', 'c--', 'k-', 'r--']) ax2 = plt.subplot(1, 2, 2) accbpg.plot_comparisons(ax2, y_vals, labels, x_vals=t_vals, plotdiff=True, yscale='log', xlim=[-2, 200], ylim=[1e-10, 2], xlabel=r"Time (sec)", ylabel=r"$F(x_k)-F_\star$", legendloc="center right", linestyles=['k:', 'g-', 'b-.', 'm-', 'k-.', 'c--', 'k-', 'r--']) plt.tight_layout(w_pad=4) plt.show() ```
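For reference, the Frank-Wolfe step for this objective has a well-known closed form (the classical Fedorov/Wynn-type update; stated here in our own notation, which may differ from what `accbpg.D_opt_FW` implements internally). With $H(x)=\sum_{i=1}^n x_i V_i V_i^T$, each iteration picks the coordinate with the largest leverage and takes the exact line-search step:

$$
i_k = \arg\max_{i}\; V_i^T H(x_k)^{-1} V_i, \qquad
\kappa_k = V_{i_k}^T H(x_k)^{-1} V_{i_k},
$$

$$
\alpha_k = \frac{\kappa_k/m - 1}{\kappa_k - 1}, \qquad
x_{k+1} = (1-\alpha_k)\, x_k + \alpha_k\, e_{i_k},
$$

where $e_{i_k}$ is the $i_k$-th standard basis vector and $m$ is the dimension of the $V_i$. The away-step variant additionally considers moving mass away from the coordinate with the smallest leverage among the active ones.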
# Loading Pre-Trained Models Make sure to click the button below before you get started to source the correct environment. <button id="ulab-button-3e515cac" class="ulab-btn--primary"></button> In this exercise, you'll work to download and load a few of the pre-trained models available in the OpenVINO toolkit. First, you can navigate to the [Pre-Trained Models list](https://software.intel.com/en-us/openvino-toolkit/documentation/pretrained-models) in a separate window or tab, as well as the page that gives all of the model names [here](https://docs.openvinotoolkit.org/latest/_models_intel_index.html). Your task here is to download the below three pre-trained models using the Model Downloader tool, as detailed on the same page as the different model names. Note that you *do not need to download all of the available pre-trained models* - doing so would cause your workspace to crash, as the workspace will limit you to 3 GB of downloaded models. ### Task 1 - Find the Right Models Using the [Pre-Trained Model list](https://software.intel.com/en-us/openvino-toolkit/documentation/pretrained-models), determine which models could accomplish the following tasks (there may be some room here in determining which model to download): - Human Pose Estimation - Text Detection - Determining Car Type & Color ### Task 2 - Download the Models Once you have determined which model best relates to the above tasks, use the Model Downloader tool to download them into the workspace for the following precision levels: - Human Pose Estimation: All precision levels - Text Detection: FP16 only - Determining Car Type & Color: INT8 only **Note**: When downloading the models in the workspace, add the `-o` argument (along with any other necessary arguments) with `/home/workspace` as the output directory. The default download directory will not allow the files to be written there within the workspace, as it is a read-only directory. 
### Task 3 - Verify the Downloads You can verify the download of these models by navigating to: `/home/workspace/intel` (if you followed the above note), and checking whether a directory was created for each of the three models, with included subdirectories for each precision, with respective `.bin` and `.xml` for each model. **Hint**: Use the `-h` command with the Model Downloader tool if you need to check out the possible arguments to include when downloading specific models and precisions. <!-- %%ulab_page_divider --><hr/> # Preprocessing Inputs Make sure to click the button below before you get started to source the correct environment. <button id="ulab-button-dcdc9e86" class="ulab-btn--primary"></button> Now that we have a few pre-trained models downloaded, it's time to preprocess the inputs to match what each of the models expects as their input. We'll use the same models as before as a basis for determining the preprocessing necessary for each input file. As a reminder, our three models are: - Human Pose Estimation: [human-pose-estimation-0001](https://docs.openvinotoolkit.org/latest/_models_intel_human_pose_estimation_0001_description_human_pose_estimation_0001.html) - Text Detection: [text-detection-0004](http://docs.openvinotoolkit.org/latest/_models_intel_text_detection_0004_description_text_detection_0004.html) - Determining Car Type & Color: [vehicle-attributes-recognition-barrier-0039](https://docs.openvinotoolkit.org/latest/_models_intel_vehicle_attributes_recognition_barrier_0039_description_vehicle_attributes_recognition_barrier_0039.html) **Note:** For ease of use, these models have been added into the `/home/workspace/models` directory. For example, if you need to use the Text Detection model, you could find it at: ```bash /home/workspace/models/text_detection_0004.xml ``` Each link above contains the documentation for the related model. 
In our case, we want to focus on the **Inputs** section of each page, which documents the expected input shape, the ordering of that shape (such as color channel first or last), and the order of the color channels. Your task is to fill out the code in three functions within `preprocess_inputs.py`, one for each of the three models. We have also included a sample image for each of the three models, which will be used with `test.py` to check whether the input for each model has been adjusted as expected for proper model input. Note that each image is **currently loaded as BGR with H, W, C order** in the `test.py` file, so any necessary preprocessing to change that should occur in your three work files. Note that **BGR** order is used because the OpenCV function we use to read images loads as BGR, not RGB. When finished, you should be able to run the `test.py` file and pass all three tests. <!-- %%ulab_page_divider --><hr/> # Deploy Your First Edge App Make sure to click the button below before you get started to source the correct environment. <button id="ulab-button-60888dc0" class="ulab-btn--primary"></button> So far, you've downloaded some pre-trained models, handled their inputs, and learned how to handle outputs. In this exercise, you'll implement the handling of the outputs of our three models from before, and get to see inference actually performed by adding these models to some example edge applications. There's still a lot of code involved behind the scenes here. With the Pre-Trained Models available with the OpenVINO toolkit, you don't need to worry about the Model Optimizer, but there is still work done to load the model into the Inference Engine. We won't learn about this code until later, so in this case, you'll just need to call your functions to handle the input and output of the model within the app.
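As a refresher on the input-handling half, the preprocessing for all three models boils down to the same reshape pattern: take the H x W x C (BGR, channels-last) image and turn it into a 1 x C x H x W batch. A dependency-free sketch of just that transpose step follows - plain nested lists stand in for the NumPy array, and a real helper would first `cv2.resize` to the model's documented height and width:

```python
# Conceptual sketch of input preprocessing: convert an H x W x C image
# (channels-last, as loaded by OpenCV) into the 1 x C x H x W layout
# most OpenVINO models expect. Nested lists stand in for a NumPy array
# so the sketch stays dependency-free; a real implementation would use
# cv2.resize and numpy.transpose instead.

def to_nchw(image):
    """Transpose an H x W x C nested-list image to 1 x C x H x W."""
    h = len(image)
    w = len(image[0])
    c = len(image[0][0])
    chw = [[[image[y][x][ch] for x in range(w)] for y in range(h)]
           for ch in range(c)]
    return [chw]  # prepend the batch dimension

# A tiny 2x2 "image" with 3 channels (values chosen arbitrarily)
img = [[[1, 2, 3], [4, 5, 6]],
       [[7, 8, 9], [10, 11, 12]]]
batched = to_nchw(img)
```

The same channels-last to channels-first move, plus a resize, is essentially what each function in `preprocess_inputs.py` needs to do.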
If you want a sneak preview of some of the code that interfaces with the Inference Engine, you can check it out in `inference.py`. You'll work out of the `handle_models.py` file, as well as adding function calls within the edge app in `app.py`. ## TODOs In `handle_models.py`, you will need to implement `handle_pose`, `handle_text`, and `handle_car`. In `app.py`, first, you'll need to use the input shape of the network to call the `preprocessing` function. Then, you need to call `handle_output` with the appropriate model argument in order to get the right handling function. With that function, you can then feed in the output of the inference request to extract the desired output. Note that there is some additional post-processing done for you in `create_output_image` within `app.py` to help display the output back onto the input image. ## Testing the apps To test your implementations, you can use `app.py` to run each edge application, with the following arguments: - `-t`: The model type, which should be one of `"POSE"`, `"TEXT"`, or `"CAR_META"` - `-m`: The location of the model .xml file - `-i`: The location of the input image used for testing - `-c`: A CPU extension file, if applicable. See below for what this is for the workspace. The results of your output will be saved for viewing in the `outputs` directory.
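The `handle_output` dispatch described above can be sketched as a simple lookup from the model-type string to the matching handler function. The handler bodies below are placeholders, not the real post-processing (the real ones extract heatmaps, text masks, or type/color class indices), and the 72x72 input shape is only illustrative:

```python
# Sketch of the handle_output dispatch idea: map the model-type string
# ("POSE", "TEXT", "CAR_META") to the function that post-processes that
# model's raw output. Handler bodies are placeholders.

def handle_pose(output, input_shape):
    return "pose heatmaps"        # placeholder

def handle_text(output, input_shape):
    return "text/link masks"      # placeholder

def handle_car(output, input_shape):
    return "car type and color"   # placeholder

def handle_output(model_type):
    """Return the handler matching the model type, or None if unknown."""
    handlers = {"POSE": handle_pose, "TEXT": handle_text, "CAR_META": handle_car}
    return handlers.get(model_type)

process_func = handle_output("CAR_META")
result = process_func(output=None, input_shape=(1, 3, 72, 72))
```

Once `handle_output` hands back the right function, the app only has to feed it the raw inference result and the original input shape.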
Here is an example of running the app with related arguments: ``` python app.py -i "images/blue-car.jpg" -t "CAR_META" -m "/home/workspace/models/vehicle-attributes-recognition-barrier-0039.xml" -c "/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libcpu_extension_sse4.so" ``` ## Model Documentation Once again, here are the links to the models, so you can use the **Output** section to help you get started (there are additional comments in the code to assist): - Human Pose Estimation: [human-pose-estimation-0001](https://docs.openvinotoolkit.org/latest/_models_intel_human_pose_estimation_0001_description_human_pose_estimation_0001.html) - Text Detection: [text-detection-0004](http://docs.openvinotoolkit.org/latest/_models_intel_text_detection_0004_description_text_detection_0004.html) - Determining Car Type & Color: [vehicle-attributes-recognition-barrier-0039](https://docs.openvinotoolkit.org/latest/_models_intel_vehicle_attributes_recognition_barrier_0039_description_vehicle_attributes_recognition_barrier_0039.html) <!-- %%ulab_page_divider --><hr/> # Convert a TensorFlow Model Make sure to click the button below before you get started to source the correct environment. <button id="ulab-button-663e2c8b" class="ulab-btn--primary"></button> In this exercise, you'll convert a TensorFlow Model from the Object Detection Model Zoo into an Intermediate Representation using the Model Optimizer. As noted in the related [documentation](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html), there is a difference in method when using a frozen graph vs. an unfrozen graph. Since freezing a graph is a TensorFlow-based function and not one specific to OpenVINO itself, in this exercise, you will only need to work with a frozen graph. However, I encourage you to try to freeze and load an unfrozen model on your own as well.
For this exercise, first download the SSD MobileNet V2 COCO model from [here](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz). Use the `tar -xvf` command with the downloaded file to unpack it. From there, find the **Convert a TensorFlow\* Model** header in the documentation, and feed in the downloaded SSD MobileNet V2 COCO model's `.pb` file. If the conversion is successful, the terminal should let you know that it generated an IR model. The locations of the `.xml` and `.bin` files, as well as execution time of the Model Optimizer, will also be output. **Note**: Converting the TF model will take a little over one minute in the workspace. ### Hints & Troubleshooting Make sure to pay attention to the note in this section regarding the `--reverse_input_channels` argument. If you are unsure about this argument, you can read more [here](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Converting_Model_General.html#when_to_reverse_input_channels). There is additional documentation specific to converting models from TensorFlow's Object Detection Zoo [here](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html). You will likely need both the `--tensorflow_use_custom_operations_config` and `--tensorflow_object_detection_api_pipeline_config` arguments fed with their related files. <!-- %%ulab_page_divider --><hr/> # Convert a Caffe Model Make sure to click the button below before you get started to source the correct environment. <button id="ulab-button-d0a57724" class="ulab-btn--primary"></button> In this exercise, you'll convert a Caffe Model into an Intermediate Representation using the Model Optimizer. You can find the related documentation [here](https://docs.openvinotoolkit.org/2018_R5/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe.html). 
For this exercise, first download the SqueezeNet V1.1 model by cloning [this repository](https://github.com/DeepScale/SqueezeNet). Follow the documentation above and feed in the Caffe model to the Model Optimizer. If the conversion is successful, the terminal should let you know that it generated an IR model. The locations of the `.xml` and `.bin` files, as well as execution time of the Model Optimizer, will also be output. ### Hints & Troubleshooting You will need to specify `--input_proto` if the `.prototxt` file is not named the same as the model. There is an important note in the documentation after the section **Supported Topologies** regarding Caffe models trained on ImageNet. If you notice poor performance in inference, you may need to specify mean and scale values in your arguments. For reference, the basic conversion command for this model looks like: ``` python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model squeezenet_v1.1.caffemodel --input_proto deploy.prototxt ``` <!-- %%ulab_page_divider --><hr/> # Convert an ONNX Model Make sure to click the button below before you get started to source the correct environment. <button id="ulab-button-0bd71d51" class="ulab-btn--primary"></button> ### Exercise Instructions In this exercise, you'll convert an ONNX Model into an Intermediate Representation using the Model Optimizer. You can find the related documentation [here](https://docs.openvinotoolkit.org/2018_R5/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX.html). For this exercise, first download the bvlc_alexnet model from [here](https://s3.amazonaws.com/download.onnx/models/opset_8/bvlc_alexnet.tar.gz). Use the `tar -xvf` command with the downloaded file to unpack it. Follow the documentation above and feed in the ONNX model to the Model Optimizer. If the conversion is successful, the terminal should let you know that it generated an IR model. The locations of the `.xml` and `.bin` files, as well as execution time of the Model Optimizer, will also be output.
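The ONNX conversion call mirrors the Caffe example above, just with the `.onnx` file fed to `--input_model`. A sketch assembling it as a command string - the `bvlc_alexnet/model.onnx` path is an assumption about the archive's layout, so verify the filename after unpacking:

```python
# Hypothetical: assemble the Model Optimizer command for the unpacked
# bvlc_alexnet ONNX model. The mo.py path matches the workspace install;
# the model filename is an assumption to check after `tar -xvf`.
MO = "/opt/intel/openvino/deployment_tools/model_optimizer/mo.py"
cmd = ["python", MO, "--input_model", "bvlc_alexnet/model.onnx"]
print(" ".join(cmd))
```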
### PyTorch models Note that we will only cover converting directly from an ONNX model here. If you are interested in converting a PyTorch model using ONNX for use with OpenVINO, check out this [link](https://michhar.github.io/convert-pytorch-onnx/) for the steps to do so. From there, you can follow the steps in the rest of this exercise once you have an ONNX model. <!-- %%ulab_page_divider --><hr/> # Custom Layers Make sure to click the button below before you get started to source the correct environment. <button id="ulab-button-c7cfa177" class="ulab-btn--primary"></button> This exercise is adapted from [this repository](https://github.com/david-drew/OpenVINO-Custom-Layers). Note that the classroom workspace is running OpenVINO 2019.r3, while this exercise was originally created for 2019.r2. This exercise will work appropriately in the workspace, but there may be some other differences you need to account for if you use a custom layer yourself. The steps below provide a full walkthrough of creating a custom layer; as such, there is not a related solution video. Note that custom layers are an advanced topic, and one that is not expected to be used often (if at all) in most use cases of the OpenVINO toolkit. This exercise is meant to introduce you to the concept, but you won't need to use it again in the rest of this course. ## Example Custom Layer: The Hyperbolic Cosine (cosh) Function We will follow the steps involved for implementing a custom layer using the simple hyperbolic cosine (cosh) function. The cosh function is mathematically calculated as: ``` cosh(x) = (e^x + e^-x) / 2 ``` As a function that calculates a value for a given value x, the cosh function is very simple when compared to most custom layers. Though the cosh function may not represent a "real" custom layer, it serves the purpose of this tutorial as an example for working through the steps for implementing a custom layer. Move to the next page to continue.
<!-- %%ulab_page_divider --><hr/> ## Build the Model First, export the below paths to shorten some of what you need to enter later: ``` export CLWS=/home/workspace/cl_tutorial export CLT=$CLWS/OpenVINO-Custom-Layers ``` Then run the following to create the TensorFlow model including the `cosh` layer. ``` mkdir $CLWS/tf_model python $CLT/create_tf_model/build_cosh_model.py $CLWS/tf_model ``` You should receive a message similar to: ``` Model saved in path: /tf_model/model.ckpt ``` ## Creating the *`cosh`* Custom Layer ### Generate the Extension Template Files Using the Model Extension Generator We will use the Model Extension Generator tool to automatically create templates for all the extensions needed by the Model Optimizer to convert and the Inference Engine to execute the custom layer. The extension template files will be partially replaced by Python and C++ code to implement the functionality of `cosh` as needed by the different tools. To create the four extensions for the `cosh` custom layer, we run the Model Extension Generator with the following options: - `--mo-tf-ext` = Generate a template for a Model Optimizer TensorFlow extractor - `--mo-op` = Generate a template for a Model Optimizer custom layer operation - `--ie-cpu-ext` = Generate a template for an Inference Engine CPU extension - `--ie-gpu-ext` = Generate a template for an Inference Engine GPU extension - `--output_dir` = set the output directory. Here we are using `$CLWS/cl_cosh` as the target directory to store the output from the Model Extension Generator. To create the four extension templates for the `cosh` custom layer, given we are in the `$CLWS` directory, we run the command: ``` mkdir cl_cosh ``` ```bash python /opt/intel/openvino/deployment_tools/tools/extension_generator/extgen.py new --mo-tf-ext --mo-op --ie-cpu-ext --ie-gpu-ext --output_dir=$CLWS/cl_cosh ``` The Model Extension Generator will start in interactive mode and prompt us with questions about the custom layer to be generated. 
Use the text between the `[]`'s to answer each of the Model Extension Generator questions as follows: ``` Enter layer name: [cosh] Do you want to automatically parse all parameters from the model file? (y/n) ... [n] Enter all parameters in the following format: ... Enter 'q' when finished: [q] Do you want to change any answer (y/n) ? Default 'no' [n] Do you want to use the layer name as the operation name? (y/n) [y] Does your operation change shape? (y/n) [n] Do you want to change any answer (y/n) ? Default 'no' [n] ``` When complete, the output text will appear similar to: ``` Stub file for TensorFlow Model Optimizer extractor is in /home/<user>/cl_tutorial/cl_cosh/user_mo_extensions/front/tf folder Stub file for the Model Optimizer operation is in /home/<user>/cl_tutorial/cl_cosh/user_mo_extensions/ops folder Stub files for the Inference Engine CPU extension are in /home/<user>/cl_tutorial/cl_cosh/user_ie_extensions/cpu folder Stub files for the Inference Engine GPU extension are in /home/<user>/cl_tutorial/cl_cosh/user_ie_extensions/gpu folder ``` Template files (containing source code stubs) that may need to be edited have just been created in the following locations: - TensorFlow Model Optimizer extractor extension: - `$CLWS/cl_cosh/user_mo_extensions/front/tf/` - `cosh_ext.py` - Model Optimizer operation extension: - `$CLWS/cl_cosh/user_mo_extensions/ops` - `cosh.py` - Inference Engine CPU extension: - `$CLWS/cl_cosh/user_ie_extensions/cpu` - `ext_cosh.cpp` - `CMakeLists.txt` - Inference Engine GPU extension: - `$CLWS/cl_cosh/user_ie_extensions/gpu` - `cosh_kernel.cl` - `cosh_kernel.xml` Instructions on editing the template files are provided in later parts of this tutorial. For reference, or to copy to make the changes quicker, pre-edited template files are provided by the tutorial in the `$CLT` directory. Move to the next page to continue. 
<!-- %%ulab_page_divider --><hr/> ## Using Model Optimizer to Generate IR Files Containing the Custom Layer We will now use the generated extractor and operation extensions with the Model Optimizer to generate the model IR files needed by the Inference Engine. The steps covered are: 1. Edit the extractor extension template file (already done - we will review it here) 2. Edit the operation extension template file (already done - we will review it here) 3. Generate the Model IR Files ### Edit the Extractor Extension Template File For the `cosh` custom layer, the generated extractor extension does not need to be modified because the layer parameters are used without modification. Below is a walkthrough of the Python code for the extractor extension that appears in the file `$CLWS/cl_cosh/user_mo_extensions/front/tf/cosh_ext.py`. 1. Using the text editor, open the extractor extension source file `$CLWS/cl_cosh/user_mo_extensions/front/tf/cosh_ext.py`. 2. The class is defined with the unique name `coshFrontExtractor` that inherits from the base extractor `FrontExtractorOp` class. The class variable `op` is set to the name of the layer operation and `enabled` is set to tell the Model Optimizer to use (`True`) or exclude (`False`) the layer during processing. ```python class coshFrontExtractor(FrontExtractorOp): op = 'cosh' enabled = True ``` 3. The `extract` function is overridden to allow modifications while extracting parameters from layers within the input model. ```python @staticmethod def extract(node): ``` 4. The layer parameters are extracted from the input model and stored in `param`. This is where the layer parameters in `param` may be retrieved and used as needed. For the `cosh` custom layer, the `op` attribute is simply set to the name of the operation extension used. ```python proto_layer = node.pb param = proto_layer.attr # extracting parameters from TensorFlow layer and prepare them for IR attrs = { 'op': __class__.op } ``` 5. 
The attributes for the specific node are updated. This is where we can modify or create attributes in `attrs` before `node` is updated with the results and the `enabled` class variable is returned. ```python # update the attributes of the node Op.get_op_class_by_name(__class__.op).update_node_stat(node, attrs) return __class__.enabled ``` ### Edit the Operation Extension Template File For the `cosh` custom layer, the generated operation extension does not need to be modified because the shape (i.e., dimensions) of the layer output is the same as the input shape. Below is a walkthrough of the Python code for the operation extension that appears in the file `$CLWS/cl_cosh/user_mo_extensions/ops/cosh.py`. 1. Using the text editor, open the operation extension source file `$CLWS/cl_cosh/user_mo_extensions/ops/cosh.py`. 2. The class is defined with the unique name `coshOp` that inherits from the base operation `Op` class. The class variable `op` is set to `'cosh'`, the name of the layer operation. ```python class coshOp(Op): op = 'cosh' ``` 3. The `coshOp` class initializer `__init__` function will be called for each layer created. The initializer must initialize the super class `Op` by passing the `graph` and `attrs` arguments along with a dictionary of the mandatory properties for the `cosh` operation layer that define the type (`type`), operation (`op`), and inference function (`infer`). This is where any other initialization needed by the `coshOp` operation can be specified. ```python def __init__(self, graph, attrs): mandatory_props = dict( type=__class__.op, op=__class__.op, infer=coshOp.infer ) super().__init__(graph, mandatory_props, attrs) ``` 4. The `infer` function is defined to provide the Model Optimizer information on a layer, specifically returning the shape of the layer output for each node. Here, the layer output shape is the same as the input and the value of the helper function `copy_shape_infer(node)` is returned.
```python @staticmethod def infer(node: Node): # ========================================================== # You should add your shape calculation implementation here # If a layer input shape is different to the output one # it means that it changes shape and you need to implement # it on your own. Otherwise, use copy_shape_infer(node). # ========================================================== return copy_shape_infer(node) ``` ### Generate the Model IR Files With the extensions now complete, we use the Model Optimizer to convert and optimize the example TensorFlow model into IR files that will run inference using the Inference Engine. To create the IR files, we run the Model Optimizer (`mo.py`) with the following options: - `--input_meta_graph model.ckpt.meta` - Specifies the model input file. - `--batch 1` - Explicitly sets the batch size to 1 because the example model has an input dimension of "-1". - TensorFlow allows "-1" as a variable indicating "to be filled in later", however the Model Optimizer requires explicit information for the optimization process. - `--output "ModCosh/Activation_8/softmax_output"` - The full name of the final output layer of the model. - `--extensions $CLWS/cl_cosh/user_mo_extensions` - Location of the extractor and operation extensions for the custom layer to be used by the Model Optimizer during model extraction and optimization. - `--output_dir $CLWS/cl_ext_cosh` - Location to write the output IR files. To create the model IR files that will include the `cosh` custom layer, we run the commands: ```bash cd $CLWS/tf_model python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_meta_graph model.ckpt.meta --batch 1 --output "ModCosh/Activation_8/softmax_output" --extensions $CLWS/cl_cosh/user_mo_extensions --output_dir $CLWS/cl_ext_cosh ``` The output will appear similar to: ``` [ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/<user>/cl_tutorial/cl_ext_cosh/model.ckpt.xml [ SUCCESS ] BIN file: /home/<user>/cl_tutorial/cl_ext_cosh/model.ckpt.bin [ SUCCESS ] Total execution time: x.xx seconds. ``` Move to the next page to continue. <!-- %%ulab_page_divider --><hr/> ## Inference Engine Custom Layer Implementation for the Intel® CPU We will now use the generated CPU extension with the Inference Engine to execute the custom layer on the CPU. The steps are: 1. Edit the CPU extension template files. 2. Compile the CPU extension library. 3. Execute the Model with the custom layer. You *will* need to make the changes in this section to the related files. Note that the classroom workspace only has an Intel CPU available, so we will not perform the necessary steps for GPU usage with the Inference Engine. ### Edit the CPU Extension Template Files The generated CPU extension includes the template file `ext_cosh.cpp` that must be edited to fill-in the functionality of the `cosh` custom layer for execution by the Inference Engine. We also need to edit the `CMakeLists.txt` file to add any header file or library dependencies required to compile the CPU extension. In the next sections, we will walk through and edit these files. #### Edit `ext_cosh.cpp` We will now edit the `ext_cosh.cpp` by walking through the code and making the necessary changes for the `cosh` custom layer along the way. 1. Using the text editor, open the CPU extension source file `$CLWS/cl_cosh/user_ie_extensions/cpu/ext_cosh.cpp`. 2. To implement the `cosh` function to efficiently execute in parallel, the code will use the parallel processing supported by the Inference Engine through the use of the Intel® Threading Building Blocks library. To use the library, at the top we must include the header [`ie_parallel.hpp`](https://docs.openvinotoolkit.org/2019_R3.1/ie__parallel_8hpp.html) file by adding the `#include` line as shown below. 
Before: ```cpp #include "ext_base.hpp" #include <cmath> ``` After: ```cpp #include "ext_base.hpp" #include "ie_parallel.hpp" #include <cmath> ``` 3. The class `coshImpl` implements the `cosh` custom layer and inherits from the extension layer base class `ExtLayerBase`. ```cpp class coshImpl: public ExtLayerBase { public: ``` 4. The `coshImpl` constructor is passed the `layer` object that it is associated with to provide access to any layer parameters that may be needed when implementing the specific instance of the custom layer. ```cpp explicit coshImpl(const CNNLayer* layer) { try { ... ``` 5. The `coshImpl` constructor configures the input and output data layout for the custom layer by calling `addConfig()`. In the template file, the line is commented-out and we will replace it to indicate that `layer` uses `DataConfigurator(ConfLayout::PLN)` (plain or linear) data for both input and output. Before: ```cpp ... // addConfig({DataConfigurator(ConfLayout::PLN), DataConfigurator(ConfLayout::PLN)}, {DataConfigurator(ConfLayout::PLN)}); ``` After: ```cpp addConfig(layer, { DataConfigurator(ConfLayout::PLN) }, { DataConfigurator(ConfLayout::PLN) }); ``` 6. The constructor is now complete; it catches and reports certain exceptions that may have been thrown before exiting. ```cpp } catch (InferenceEngine::details::InferenceEngineException &ex) { errorMsg = ex.what(); } } ``` 7. The `execute` method is overridden to implement the functionality of the `cosh` custom layer. The `inputs` and `outputs` are the data buffers passed as [`Blob`](https://docs.openvinotoolkit.org/2019_R3.1/_docs_IE_DG_Memory_primitives.html) objects. The template file will simply return `NOT_IMPLEMENTED` by default. To calculate the `cosh` custom layer, we will replace the `execute` method with the code needed to calculate the `cosh` function in parallel using the [`parallel_for3d`](https://docs.openvinotoolkit.org/2019_R3.1/ie__parallel_8hpp.html) function.
Before: ```cpp StatusCode execute(std::vector<Blob::Ptr>& inputs, std::vector<Blob::Ptr>& outputs, ResponseDesc *resp) noexcept override { // Add here implementation for layer inference // Examples of implementations you can find in Inference Engine tool samples/extensions folder return NOT_IMPLEMENTED; ``` After: ```cpp StatusCode execute(std::vector<Blob::Ptr>& inputs, std::vector<Blob::Ptr>& outputs, ResponseDesc *resp) noexcept override { // Add implementation for layer inference here // Examples of implementations are in OpenVINO samples/extensions folder // Get pointers to source and destination buffers float* src_data = inputs[0]->buffer(); float* dst_data = outputs[0]->buffer(); // Get the dimensions from the input (output dimensions are the same) SizeVector dims = inputs[0]->getTensorDesc().getDims(); // Get dimensions: N=Batch size, C=Number of Channels, H=Height, W=Width int N = static_cast<int>((dims.size() > 0) ? dims[0] : 1); int C = static_cast<int>((dims.size() > 1) ? dims[1] : 1); int H = static_cast<int>((dims.size() > 2) ? dims[2] : 1); int W = static_cast<int>((dims.size() > 3) ? dims[3] : 1); // Perform (in parallel) the hyperbolic cosine given by: // cosh(x) = (e^x + e^-x)/2 parallel_for3d(N, C, H, [&](int b, int c, int h) { // Compute cosh for each of the W elements in this (b, c, h) row size_t start = (((size_t)b * C + c) * H + h) * W; for (size_t ii = start; ii < start + W; ii++) { dst_data[ii] = (exp(src_data[ii]) + exp(-src_data[ii]))/2; } }); return OK; } ``` #### Edit `CMakeLists.txt` Because the implementation of the `cosh` custom layer makes use of the parallel processing supported by the Inference Engine, we need to add the Intel® Threading Building Blocks dependency to `CMakeLists.txt` before compiling. We will add paths to the header and library files and add the Intel® Threading Building Blocks library to the list of link libraries. We will also rename the compiled `.so` library. 1. Using the text editor, open the CPU extension CMake file `$CLWS/cl_cosh/user_ie_extensions/cpu/CMakeLists.txt`. 2.
At the top, rename the `TARGET_NAME` so that the compiled library is named `libcosh_cpu_extension.so`: Before: ```cmake set(TARGET_NAME "user_cpu_extension") ``` After: ```cmake set(TARGET_NAME "cosh_cpu_extension") ``` 3. We modify the `include_directories` to add the header include path for the Intel® Threading Building Blocks library located in `/opt/intel/openvino/deployment_tools/inference_engine/external/tbb/include`: Before: ```cmake include_directories (PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/common ${InferenceEngine_INCLUDE_DIRS} ) ``` After: ```cmake include_directories (PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/common ${InferenceEngine_INCLUDE_DIRS} "/opt/intel/openvino/deployment_tools/inference_engine/external/tbb/include" ) ``` 4. We add the `link_directories` with the path to the Intel® Threading Building Blocks library binaries at `/opt/intel/openvino/deployment_tools/inference_engine/external/tbb/lib`: Before: ```cmake ... #enable_omp() ``` After: ```cmake ... link_directories( "/opt/intel/openvino/deployment_tools/inference_engine/external/tbb/lib" ) #enable_omp() ``` 5. Finally, we add the Intel® Threading Building Blocks library `tbb` to the list of link libraries in `target_link_libraries`: Before: ```cmake target_link_libraries(${TARGET_NAME} ${InferenceEngine_LIBRARIES} ${intel_omp_lib}) ``` After: ```cmake target_link_libraries(${TARGET_NAME} ${InferenceEngine_LIBRARIES} ${intel_omp_lib} tbb) ``` ### Compile the Extension Library To run the custom layer on the CPU during inference, the edited extension C++ source code must be compiled to create a `.so` shared library used by the Inference Engine. In the following steps, we will now compile the extension C++ library. 1. First, we run the following commands to use CMake to setup for compiling: ```bash cd $CLWS/cl_cosh/user_ie_extensions/cpu mkdir -p build cd build cmake .. 
``` The output will appear similar to: ``` -- Generating done -- Build files have been written to: /home/<user>/cl_tutorial/cl_cosh/user_ie_extensions/cpu/build ``` 2. The CPU extension library is now ready to be compiled. Compile the library using the command: ```bash make -j $(nproc) ``` The output will appear similar to: ``` [100%] Linking CXX shared library libcosh_cpu_extension.so [100%] Built target cosh_cpu_extension ``` Move to the next page to continue. <!-- %%ulab_page_divider --><hr/> ## Execute the Model with the Custom Layer ### Using a C++ Sample To start on a C++ sample, we first need to build the C++ samples for use with the Inference Engine: ```bash cd /opt/intel/openvino/deployment_tools/inference_engine/samples/ ./build_samples.sh ``` This will take a few minutes to compile all of the samples. Next, we will try running the C++ sample without including the `cosh` extension library to see the error describing the unsupported `cosh` operation using the command: ```bash ~/inference_engine_samples_build/intel64/Release/classification_sample_async -i $CLT/pics/dog.bmp -m $CLWS/cl_ext_cosh/model.ckpt.xml -d CPU ``` The error output will be similar to: ``` [ ERROR ] Unsupported primitive of type: cosh name: ModCosh/cosh/Cosh ``` We will now run the command again, this time with the `cosh` extension library specified using the `-l $CLWS/cl_cosh/user_ie_extensions/cpu/build/libcosh_cpu_extension.so` option in the command: ```bash ~/inference_engine_samples_build/intel64/Release/classification_sample_async -i $CLT/pics/dog.bmp -m $CLWS/cl_ext_cosh/model.ckpt.xml -d CPU -l $CLWS/cl_cosh/user_ie_extensions/cpu/build/libcosh_cpu_extension.so ``` The output will appear similar to: ``` Image /home/<user>/cl_tutorial/OpenVINO-Custom-Layers/pics/dog.bmp classid probability ------- ----------- 0 0.9308984 1 0.0691015 total inference time: xx.xxxxxxx Average running time of one iteration: xx.xxxxxxx ms Throughput: xx.xxxxxxx FPS [ INFO ] Execution successful ``` ### 
Using a Python Sample First, we will try running the Python sample without including the `cosh` extension library to see the error describing the unsupported `cosh` operation using the command: ```bash python /opt/intel/openvino/deployment_tools/inference_engine/samples/python_samples/classification_sample_async/classification_sample_async.py -i $CLT/pics/dog.bmp -m $CLWS/cl_ext_cosh/model.ckpt.xml -d CPU ``` The error output will be similar to: ``` [ INFO ] Loading network files: /home/<user>/cl_tutorial/tf_model/model.ckpt.xml /home/<user>/cl_tutorial/tf_model/model.ckpt.bin [ ERROR ] Following layers are not supported by the plugin for specified device CPU: ModCosh/cosh/Cosh, ModCosh/cosh_1/Cosh, ModCosh/cosh_2/Cosh [ ERROR ] Please try to specify cpu extensions library path in sample's command line parameters using -l or --cpu_extension command line argument ``` We will now run the command again, this time with the `cosh` extension library specified using the `-l $CLWS/cl_cosh/user_ie_extensions/cpu/build/libcosh_cpu_extension.so` option in the command: ```bash python /opt/intel/openvino/deployment_tools/inference_engine/samples/python_samples/classification_sample_async/classification_sample_async.py -i $CLT/pics/dog.bmp -m $CLWS/cl_ext_cosh/model.ckpt.xml -l $CLWS/cl_cosh/user_ie_extensions/cpu/build/libcosh_cpu_extension.so -d CPU ``` The output will appear similar to: ``` Image /home/<user>/cl_tutorial/OpenVINO-Custom-Layers/pics/dog.bmp classid probability ------- ----------- 0 0.9308984 1 0.0691015 ``` **Congratulations!** You have now implemented a custom layer with the Intel® Distribution of OpenVINO™ Toolkit. <!-- %%ulab_page_divider --><hr/> # Feed an IR to the Inference Engine Make sure to click the button below before you get started to source the correct environment. 
<button id="ulab-button-6f2a60e5" class="ulab-btn--primary"></button>

Earlier in the course, you were focused on working with the Intermediate Representation (IR) models themselves, while mostly glossing over the use of the actual Inference Engine with the model.

Here, you'll import the Python wrapper for the Inference Engine (IE), and practice using different IRs with it. You will first add each IR as an `IENetwork`, and check whether the layers of that network are supported by the classroom CPU. Since the classroom workspace is using an Intel CPU, you will also need to add a CPU extension to the `IECore`.

Once you have verified all layers are supported (when the CPU extension is added), you will load the given model into the Inference Engine. Note that the `.xml` file of the IR should be given as an argument when running the script.

To test your implementation, you should be able to successfully load each of the three IR model files we have been working with throughout the course so far, which you can find in the `/home/workspace/models` directory.

<!-- %%ulab_page_divider --><hr/>

# Inference Requests

Make sure to click the button below before you get started to source the correct environment.

<button id="ulab-button-ceb2f99a" class="ulab-btn--primary"></button>

In the previous exercise, you loaded Intermediate Representations (IRs) into the Inference Engine. Now that we've covered some of the topics around requests, including the difference between synchronous and asynchronous requests, you'll add additional code to make inference requests to the Inference Engine.

Given an `ExecutableNetwork` that is the IR loaded into the Inference Engine, your task is to:

1. Perform a synchronous request
2. Start an asynchronous request given an input image frame
3. Wait for the asynchronous request to complete

Note that we'll cover handling the results of the request shortly, so you don't need to worry about that just yet.
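As a hedged sketch of those three steps, the 2019-era `openvino.inference_engine` Python API is assumed here; `exec_net` stands for an already-loaded `ExecutableNetwork` and `input_blob` for the network's input layer name, both of which are illustrative names rather than anything mandated by the exercise:

```python
# Sketch only: "exec_net" is assumed to be an already-loaded
# openvino.inference_engine ExecutableNetwork, and "input_blob" the name
# of the network's input layer. Nothing here is model-specific.

def sync_inference(exec_net, input_blob, image):
    """Task 1: a blocking request; returns the outputs once inference finishes."""
    return exec_net.infer({input_blob: image})

def start_async_inference(exec_net, input_blob, image, request_id=0):
    """Task 2: start an asynchronous request; this call returns immediately."""
    exec_net.start_async(request_id=request_id, inputs={input_blob: image})

def wait_for_request(exec_net, request_id=0):
    """Task 3: block until the async request completes (-1 = wait indefinitely)."""
    return exec_net.requests[request_id].wait(-1)
```

With the real API, a return value of `0` from `wait` indicates the request completed successfully; handling the actual results comes in the next exercise.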
This will get you practice with both types of requests with the Inference Engine.

You will perform the above tasks within `inference.py`. This will take three arguments, one for the model, one for the test image, and the last for what type of inference request should be made. You can use `test.py` afterward to verify your code successfully makes inference requests.

<!-- %%ulab_page_divider --><hr/>

# Integrate the Inference Engine in An Edge App

Make sure to click the button below before you get started to source the correct environment.

<button id="ulab-button-d44d77ce" class="ulab-btn--primary"></button>

You've come a long way from the first lesson, where most of the code for working with the OpenVINO toolkit was happening in the background. You worked with pre-trained models, moved up to converting any trained model to an Intermediate Representation with the Model Optimizer, and even got the model loaded into the Inference Engine and began making inference requests.

In this final exercise of this lesson, you'll close off the OpenVINO workflow by extracting the results of the inference request, and then integrating the Inference Engine into an existing application. You'll still be given some of the overall application infrastructure, as more of that will come in the next lesson, but all of that is outside of OpenVINO itself. You will also add code allowing you to try out various confidence thresholds with the model, as well as changing the visual look of the output, like bounding box colors.

Now, it's up to you which exact model you want to use here, although you are able to just re-use the model you converted with TensorFlow before for an easy bounding box detector.

Note that this application will run with a video instead of just images like we've done before.

So, your tasks are to:

1. Convert a bounding box model to an IR with the Model Optimizer.
2. Pre-process the model as necessary.
3. Use an async request to perform inference on each video frame.
4. Extract the results from the inference request.
5. Add code to make the requests and feed back the results within the application.
6. Perform any necessary post-processing steps to get the bounding boxes.
7. Add a command line argument to allow for different confidence thresholds for the model.
8. Add a command line argument to allow for different bounding box colors for the output.
9. Correctly utilize the command line arguments from #7 and #8 within the application.

When you are done, feed your model to `app.py`, and it will generate `out.mp4`, which you can download and view. *Note that this app will take a little bit longer to run.* Also, if you need to re-run inference, delete the `out.mp4` file first.

You only need to feed the model with `-m` before adding the customization; you should set defaults for any additional arguments you add for the color and confidence so that the user does not always need to specify them.

```bash
python app.py -m {your-model-path.xml}
```

<!-- %%ulab_page_divider --><hr/>

# Handling Input Streams

Make sure to click the button below before you get started to source the correct environment.

<button id="ulab-button-5de618db" class="ulab-btn--primary"></button>

It's time to really get into the thick of things for running your app at the edge. Being able to appropriately handle an input stream is a big part of having a working AI or computer vision application.

In your case, you will be implementing a function that can handle camera, video or webcam data as input. While unfortunately the classroom workspace won't allow for webcam usage, you can still try that portion of your code out on your local machine if you have a webcam available. As such, the tests here will focus on using a camera image or a video file.

You will not need to perform any inference on the input frames, but you will need to do a few other image processing techniques to show you have some of the basics of OpenCV down.

Your tasks are to:

1. Implement a function that can handle camera image, video file or webcam inputs
2. Use `cv2.VideoCapture()` and open the capture stream
3. Re-size the frame to 100x100
4. Add Canny Edge Detection to the frame with min & max values of 100 and 200, respectively
5. Save down the image or video output
6. Close the stream and any windows at the end of the application

You won't be able to test a webcam input in the workspace unfortunately, but you can use the included video and test image to test your implementations.

<!-- %%ulab_page_divider --><hr/>

# Processing Model Outputs

Make sure to click the button below before you get started to source the correct environment.

<button id="ulab-button-4fb9f776" class="ulab-btn--primary"></button>

Let's say you have a cat and two dogs at your house. If both dogs are in a room together, they are best buds, and everything is going well. If the cat and dog #1 are in a room together, they are also good friends, and everything is fine. However, if the cat and dog #2 are in a room together, they don't get along, and you may need to either pull them apart, or at least play a pre-recorded message from your smart speaker to tell them to cut it out.

In this exercise, you'll receive a video where some combination of the cat and dogs may be in view. You also will have an IR that is able to determine which of these, if any, are on screen.

While the best model for this is likely an object detection model that can identify different breeds, I have provided you with a very basic (and overfit) model that will return three classes: one for one or fewer pets on screen, one for the bad combination of the cat and dog #2, and one for the fine combination of the cat and dog #1. This is within the exercise directory - `model.xml`.

It is up to you to add code that will print to the terminal anytime the bad combination of the cat and dog #2 is detected together.
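One possible shape for that check is sketched below. It assumes the IR returns a vector of class probabilities per frame, and the class index standing for the bad cat-and-dog #2 combination is an arbitrary assumption for illustration; the warning is also only raised when the bad combination *newly* appears, rather than once per frame:

```python
BAD_COMBO = 1  # assumed class index for the cat + dog #2 combination

def assess_frame(output_probs, last_class):
    """Map one frame's class probabilities to (predicted_class, warn).

    warn is True only when the bad combination newly appears, so the
    terminal is not flooded with one warning per frame at 30 fps.
    """
    # argmax over the probability vector, stdlib-only
    pred = max(range(len(output_probs)), key=lambda i: output_probs[i])
    warn = (pred == BAD_COMBO) and (last_class != BAD_COMBO)
    return pred, warn
```

The caller would keep `last_class` across frames and print the terminal warning only when `warn` comes back `True`.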
**Note**: It's important to consider whether you really want to output a warning *every single time* both pets are on-screen - is your warning helpful if it re-starts every 30th of a second, with a video at 30 fps?

<!-- %%ulab_page_divider --><hr/>

# Server Communications

Make sure to click the button below before you get started to source the correct environment.

<button id="ulab-button-66f8bc80" class="ulab-btn--primary"></button>

In this exercise, you will practice showing off your new server communication skills for sending statistics over MQTT and images with FFMPEG.

The application itself is already built and able to perform inference, and a node server is set up for you to use. The main node server is already fully ready to receive communications from MQTT and FFMPEG. The MQTT node server is fully configured as well. Lastly, the ffserver is already configured for FFMPEG too.

The current application simply performs inference on a frame, gathers some statistics, and then continues onward to the next frame.

## Tasks

Your tasks are to:

- Add any code for MQTT to the project so that the node server receives the calculated stats
  - This includes importing the relevant Python library
  - Setting IP address and port
  - Connecting to the MQTT client
  - Publishing the calculated statistics to the client
- Send the output frame (**not** the input image, but the processed output) to the ffserver

## Additional Information

Note: Since you are given the MQTT Broker Server and Node Server for the UI, you need certain information to correctly configure, publish and subscribe with MQTT.

- The MQTT port to use is 3001 - the classroom workspace only allows ports 3000-3009
- The topics that the UI Node Server is listening to are "class" and "speedometer"
- The Node Server will attempt to extract information from any JSON received from the MQTT server with the keys "class_names" and "speed"

## Running the App

First, get the MQTT broker and UI installed.
- `cd webservice/server`
- `npm install`
- When complete, `cd ../ui`
- And again, `npm install`

You will need *four* separate terminal windows open in order to see the results. The steps below should be done in a different terminal based on number. You can open a new terminal in the workspace in the upper left (File>>New>>Terminal).

1. Get the MQTT broker installed and running.
   - `cd webservice/server/node-server`
   - `node ./server.js`
   - You should see a message that `Mosca server started.`
2. Get the UI Node Server running.
   - `cd webservice/ui`
   - `npm run dev`
   - After a few seconds, you should see `webpack: Compiled successfully.`
3. Start the ffserver
   - `sudo ffserver -f ./ffmpeg/server.conf`
4. Start the actual application.
   - First, you need to source the environment for OpenVINO *in the new terminal*:
     - `source /opt/intel/openvino/bin/setupvars.sh -pyver 3.5`
   - To run the app, I'll give you two items to pipe in with `ffmpeg` here, with the rest up to you:
     - `-video_size 1280x720`
     - `-i - http://0.0.0.0:3004/fac.ffm`

Your app should begin running, and you should also see the MQTT broker server noting information getting published.

In order to view the output, click on the "Open App" button below in the workspace.

<button id="ulab-button-2c5c842f" class="ulab-btn--primary"></button>
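The publishing side of the tasks above might be sketched as follows. The port, topics, and JSON keys come from the exercise description; the broker hostname, the example stats, and the helper names are placeholders, and `paho-mqtt` (the library typically used for this) is only referenced in a comment so the sketch stays self-contained:

```python
import json
import sys

MQTT_PORT = 3001  # the classroom workspace only allows ports 3000-3009

def make_payloads(class_names, speed):
    """Build the JSON payloads with the keys the UI Node Server extracts."""
    return (json.dumps({"class_names": class_names}),
            json.dumps({"speed": speed}))

def publish_stats(client, class_names, speed):
    """Publish the stats to the two topics the UI Node Server listens to."""
    class_payload, speed_payload = make_payloads(class_names, speed)
    client.publish("class", class_payload)
    client.publish("speedometer", speed_payload)

def send_frame(out_frame):
    """Pipe the processed output frame (not the input image) toward FFMPEG."""
    sys.stdout.buffer.write(out_frame)
    sys.stdout.flush()

# Typical wiring with the paho-mqtt library (assumed available):
#   import paho.mqtt.client as mqtt
#   client = mqtt.Client()
#   client.connect("localhost", MQTT_PORT)  # hostname is a placeholder
#   ...then per frame: publish_stats(client, ["dog"], 55); send_frame(frame)
```

Writing the raw frame bytes to stdout is what lets the `python app.py ... | ffmpeg ...` pipe above hand the output video to the ffserver.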